repo_name: stringlengths 6-77
path: stringlengths 8-215
license: stringclasses, 15 values
cells: sequence
types: sequence
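The five columns above describe each record that follows (one row per notebook, with parallel `cells` and `types` lists). A minimal sketch of how such a row could be consumed, assuming it is available as a plain Python dict; the helper name and the toy row values are hypothetical:

```python
from typing import Dict, List

def iter_cells(row: Dict) -> List[Dict]:
    # Pair each cell's source text with its declared type ("markdown" or "code").
    assert len(row["cells"]) == len(row["types"]), "cells and types are parallel lists"
    return [{"type": t, "source": c} for c, t in zip(row["cells"], row["types"])]

# Toy row with the same shape as the records below (all values made up).
row = {
    "repo_name": "user/example-repo",
    "path": "notebooks/example.ipynb",
    "license": "mit",
    "cells": ["# A title cell", "print('hello')"],
    "types": ["markdown", "code"],
}

for cell in iter_cells(row):
    print(cell["type"], "->", cell["source"])
```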
kota7/mecabwrap-py
notebook/mecabwrap - Python Interface to MeCab for Unix and Windows.ipynb
mit
[ "mecabwrap\nA Python Interface to MeCab for Unix and Windows\n<table align=\"left\">\n<tr> \n <td>\n <a href=\"https://travis-ci.org/kota7/mecabwrap-py\" target=\"_blank\">\n <img src=\"https://travis-ci.org/kota7/mecabwrap-py.svg?branch=master\">\n </a>\n </td> \n <td>\n <a href=\"https://ci.appveyor.com/project/kota7/mecabwrap-py/branch/master \" target=\"_blank\">\n <img src=\"https://ci.appveyor.com/api/projects/status/oidn1rfte6u8kavs/branch/master?svg=true\">\n </a>\n </td>\n <td>\n <a href=\"https://badge.fury.io/py/mecabwrap\" target=\"_blank\">\n <img src=\"https://badge.fury.io/py/mecabwrap.svg\">\n </a>\n </td>\n</tr> \n</table>\n\nmecabwrap is yet another Python interface to MeCab Morphological Analyzer.\nIt is designed to work seamlessly both on Unix and Windows machine.\nRequirement\n\nPython 2.7+ or 3.4+ (May also work on older versions, but not tested any more)\nMeCab 0.996\n\nInstallation\n1. Install MeCab\nUbuntu\nbash\n$ sudo apt-get install mecab libmecab-dev mecab-ipadic-utf8\nMac OSX\nbash\n$ brew install mecab mecab-ipadic\nWindows\nDownload and run the installer.\nSee also: official website \n2. Install this Package\nThe package is now on PyPI, so can be installed by pip command:\nbash\n$ pip install mecabwrap\nOr, the latest development version can be installed from the GitHub.\nbash\n$ git clone --depth 1 https://github.com/kota7/mecabwrap-py.git\n$ cd mecabwrap-py\n$ pip install -U ./\nQuick Check\nFollowing command will print the MeCab version.\nOtherwise, you do not have MeCab installed or MeCab is not on the search path.\n```bash\n$ mecab -v\nshould result in mecab of 0.996 or similar.\n```\nTo verify that the package is successfully installed, try the following:\nbash\n$ python\n```python\n\n\n\nfrom mecabwrap import tokenize\nfor token in tokenize(u\"すもももももももものうち\"): \n... print(token)\n... \nすもも 名詞,一般,,,,,すもも,スモモ,スモモ\nも 助詞,係助詞,,,,,も,モ,モ\nもも 名詞,一般,,,,,もも,モモ,モモ\nも 助詞,係助詞,,,,,も,モ,モ\nもも 名詞,一般,,,,,もも,モモ,モモ\nの 助詞,連体化,,,,,の,ノ,ノ\nうち 名詞,非自立,副詞可能,,,*,うち,ウチ,ウチ\n```", "# Version for this notebook\n!pip list | grep mecabwrap", "Usage\nA Simple Tokenizer\nThe tokenize function is a high level API for splitting a text into tokens.\nIt returns a generator of tokens.", "from mecabwrap import tokenize, print_token\n\nfor token in tokenize('すもももももももものうち'):\n print_token(token)", "Token is defined as a namedtuple (v0.3.2+) with the following fields:\n\nsurface: Word that appear in the text\npos: Part of speech\npos1: Part of speech, detail 1\npos2: Part of speech, detail 2\npos3: Part of speech, detail 3\ninfl_type: Inflection type\ninfl_form: Inflection form\nbaseform: Original form\nreading: Surface written in katakana\nphoenetic: Surface pronunciation\nlemma: Representative form of the word. 語彙素\nlemma_reading: Reading of lemma\n\nAmong these, lemma and lemma_reading are not available in ipadic. They are defined in unidic-based dictionaries.", "token", "Using MeCab Options\nTo configure the MeCab calls, one may use do_ functions that support arbitrary number of MeCab options.\nCurrently, the following three do_ functions are provided.\n- do_mecab: works with a single input text and returns the result as a string.\n- do_mecab_vec: works with a multiple input texts and returns a string of concatenated results.\n- do_mecab_iter: works with a multiple input texts and returns a generator.\nFor example, following code invokes the wakati option, so the outcome be words separated by spaces with no meta information. 
\nSee the official site for more details.", "from mecabwrap import do_mecab\nout = do_mecab('人生楽ありゃ苦もあるさ', '-Owakati')\nprint(out)", "The exapmle below uses do_mecab_vec to parse multiple texts.\nNote that -F option configures the outcome formatting.", "from mecabwrap import do_mecab_vec\nins = ['春はあけぼの', 'やうやう白くなりゆく山際', '少し明かりて', '紫だちたる雲の細くたなびきたる']\n\nout = do_mecab_vec(ins, '-F%f[6](%f[1]) | ', '-E...ここまで\\n')\nprint(out)", "Returning Iterators\nWhen the number of input text is large, then holding the outcomes in the memory may not be a good idea. do_mecab_iter function, which works for multiple texts, returns a generator of MeCab results.\nWhen byline=True, chunks are separated by line breaks; a chunk corresponds to a token in the default setting.\nWhen byline=False, chunks are separated by EOS; hence a chunk corresponds to a sentence.", "from mecabwrap import do_mecab_iter\n\nins = ['春はあけぼの', 'やうやう白くなりゆく山際', '少し明かりて', '紫だちたる雲の細くたなびきたる']\n\nprint('\\n*** generating tokens ***')\ni = 0\nfor text in do_mecab_iter(ins, byline=True):\n i += 1\n print('(' + str(i) + ')\\t' + text)\n \nprint('\\n*** generating tokenized sentences ***')\ni = 0\nfor text in do_mecab_iter(ins, '-E', '(文の終わり)', byline=False):\n i += 1\n print('---(' + str(i) + ')\\n' + text)", "Writing the outcome to a file\nTo write the MeCab outcomes directly to a file, one may either use -o option or outpath argument. Note that this does not work with do_mecab_iter, since it is designed to write the outcomes to a temporary file.", "do_mecab('すもももももももものうち', '-osumomo1.txt')\n# or,\ndo_mecab('すもももももももものうち', outpath='sumomo2.txt')\n\nwith open('sumomo1.txt') as f: \n print(f.read())\nwith open('sumomo2.txt') as f: \n print(f.read())\n\nimport os\n# clean up\nos.remove('sumomo1.txt')\nos.remove('sumomo2.txt')\n\n\n# these get error\ntry:\n res = do_mecab_iter(['すもももももももものうち'], '-osumomo3.txt')\n next(res)\nexcept Exception as e:\n print(e)\n\ntry:\n res = do_mecab_iter(['すもももももももものうち'], outpath='sumomo3.txt')\n next(res)\nexcept Exception as e:\n print(e)", "Using Dictionary (v0.3.0+)\ndo_ functions accepts dictionary option to specify the location of the system directory.\ndictionary can be either:\n\npath to the system directory\nsub-directory name under the mecab's default dicdir (note: mecab-config is required for this)\n\nThis provides an intuitive syntax for using extended dictionaries such as ipadic-neologd or unidic-nelogd.", "# this cell assumes that mecab-ipadic-neologd is already installed\n# otherwise, follow the instruction at https://github.com/neologd/mecab-ipadic-neologd\nprint(\"*** Default ipadic ***\")\nprint(do_mecab(\"メロンパンを食べたい\"))\n\nprint(\"*** With ipadic neologd ***\")\nprint(do_mecab(\"メロンパンを食べたい\", dictionary=\"mecab-ipadic-neologd\"))\n\n# this is equivalent to giving the path\ndicdir, = !mecab-config --dicdir\nprint(do_mecab(\"メロンパンを食べたい\",\n dictionary=os.path.join(dicdir, \"mecab-ipadic-neologd\")))", "Very Long Input and Buffer Size (v0.2.3+)\nWhen input text is longer than the input buffer size (default: 8192), MeCab automatically split it into two \"sentences\", by inserting an extra EOS (and a few letters are lost around the separation point).\nAs a result, do_mecab_vec and do_mecab_iter might produce output of length longer than the input.\nThe do_ functions provide two workarounds for this:\n1. If the option auto_buffer_size is True, the input-buffer-size option is automatically adjusted to the level as large as covering all input text. 
Note that it won't work when the input size exceeds the MeCab's maximum buffer size, 8192 * 640 ~ 5MB.\n1. If the option trancate is True, input text is truncated so that they are covered by the input buffer size.\nNote that do_mecab does not have these features.", "import warnings\n\nx = 'すもももももももものうち!' * 225\nprint(\"input buffer size =\", len(x.encode()))\n\nwith warnings.catch_warnings(record=True) as w:\n res1 = list(do_mecab_iter([x]))\n# the text is split into two since it exceeds the input buffer size\nprint(\"output length =\", len(res1))\n\nprint('***\\nEnd of the first element')\nprint(res1[0][-150:])\n\nprint('***\\nBeginning of the second element')\nprint(res1[1][0:150])\n\nimport re\n\nres2 = list(do_mecab_iter([x], auto_buffer_size=True))\nprint(\"output length =\", len(res2))\n\nprint('***\\nEnd of the first element')\nprint(res2[0][-150:])\n\n# count the number of '!', to confirm all 223 repetitions are covered\nprint('number of \"!\" =', len(re.findall(r'!', ''.join(res2))))\n\nprint()\nres3 = list(do_mecab_iter([x], truncate=True))\nprint(\"output length =\", len(res3))\n\nprint('***\\nEnd of the first element')\nprint(res3[0][-150:])\n\n# count the number of '!', to confirm some are not covered due to trancation\nprint('number of \"!\" =', len(re.findall(r'!', ''.join(res3))))\n", "Batch processing (v0.3.2+)\nmecab_batch function supports multiple text input.\nThe function takes a list of strings and apply mecab tokenizer to each.\nThe output is the list of tokenization outcomes.\nmecab_batch_iter function works the similarly but returns a generator instead.", "from mecabwrap import mecab_batch\n\nx = [\"明日は晴れるかな\", \"雨なら読書をしよう\"]\nmecab_batch(x)", "By default, each string is converted into a list of Token objects.\nTo obtain a more concise outcome, We can specify a converter function to the tokens as format_func option.\nformat_func must be a function that takes a single Token object and returns the parsed outcome.", "# use baseform if exists, otherwise surface\nmecab_batch(x, format_func=lambda x: x.baseform or x.surface)", "We can filter certain part-of-speeches by pos_filter option.\nMore complex filtering can be achieved by filter_func option.", "mecab_batch(x, format_func=lambda x: x.baseform or x.surface, pos_filter=(\"名詞\", \"動詞\"))\n\nmecab_batch(x, format_func=lambda x: x.baseform or x.surface, \n filter_func=lambda x: len(x.surface)==2)", "Scikit-learn compatible transformer\nMecabTokenizer is a scikit-learn compatible transformer that applies mecab_batch to a list of string inputs.", "from mecabwrap import MecabTokenizer\n\ntokenizer = MecabTokenizer(format_func=lambda x: x.surface)\ntokenizer.transform(x)\n\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nimport pandas as pd\nx = [\"明日は晴れるかな\", \"明日天気になあれ\"]\n\np = Pipeline([\n (\"mecab\", MecabTokenizer(format_func=lambda x: x.surface)),\n (\"tfidf\", TfidfVectorizer(tokenizer=lambda x: x, lowercase=False))\n])\n\ny = p.fit_transform(x).todense()\npd.DataFrame(y, columns=p.steps[-1][-1].get_feature_names())", "Note on Python 2\nAll text inputs are assumed to be unicode.\nIn Python2, inputs must be u'' string, not ''.\nIn python3, str type is unicode, so u'' and '' are equivalent.", "o1 = do_mecab('すもももももももものうち') # this works only for python 3\no2 = do_mecab(u'すもももももももものうち') # this works both for python 2 and 3\nprint(o1)\nprint(o2)", "Note on dictionary encodings\nThe functions takes mecab_enc option, which indicates the encoding of the MeCab dictionary 
being used. Usually this can be left as the default value None, so that the encoding is automatically detected. Alternatively, one may specify the encoding explicitly.", "# show mecab dict\n! mecab -D | grep charset\nprint()\n\no1 = do_mecab('日本列島改造論', mecab_enc=None) # default\nprint(o1)\n\no2 = do_mecab('日本列島改造論', mecab_enc='utf-8') # explicitly specified\nprint(o2)\n\n#o3 = do_mecab('日本列島改造論', mecab_enc='cp932') # wrong encoding, fails\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
robertoalotufo/ia898
src/polar.ipynb
mit
[ "Function iapolar\nIntroduction\n\nThe polar coordinates (r, θ) of a point on the Euclidean plane whose origin is denoted by point 'O' are defined as:\nr: the distance between point 'P' and the point 'O'.\nθ: the angle between the line segment and the axis x.\n\n\n\n<img src='../figures/cartesianas.jpg',width=300pt></img>\n<img src='../figures/polar.jpg',width=300pt></img> \nSynopse\n\n\ng = iapolar(f, domain, thetamax=2*pi)\n\n\ng: Image converted to polar coordinates. \n\n\nf: Image (cartesian coordinates). Input image.\n\ndomain: Domain image\nthetamax: Float. Default = 2*pi, Max theta in the transformation.\n\nDescription\n\nFunction to convert 2D image in cartesian coordinates to polar coordinates. The origin is at the\n center of the image and the transformation is applied to the larger square centered in the image.", "import numpy as np\n\ndef polar(f, domain, thetamax = 2 * np.pi):\n import ia898.src as ia\n \n m,n = f.shape\n dm,dn = domain\n Ry,Rx = np.floor(np.array(f.shape)/2)\n\n b = min(Ry,Rx)/dm\n a = thetamax/dn\n\n y,x = np.indices(domain)\n\n XI = Rx + (b*y)*np.cos(a*x)\n YI = Ry + (b*y)*np.sin(a*x)\n\n g = ia.interpollin(f, np.array([YI.ravel(), XI.ravel()]))\n g = f[YI.astype(np.int),XI.astype(np.int)]\n g.shape = domain \n\n return g", "Examples", "testing = (__name__ == \"__main__\")\nif testing:\n import numpy as np\n import sys,os\n ia898path = os.path.abspath('../../')\n if ia898path not in sys.path:\n sys.path.append(ia898path)\n import ia898.src as ia\n \n %matplotlib inline\n import matplotlib.image as mpimg\n", "Example 1", "if testing:\n f = np.array([[1,0,0,0,0,0],\n [0,0,0,0,0,0],\n [0,0,0,1,0,0],\n [0,0,0,0,0,1],\n [0,0,0,0,0,0]])\n\n g = polar(f, (6,6))\n\n print(g)", "Example 2", "if testing:\n f = mpimg.imread(\"../data/cameraman.tif\")\n ia.adshow(f, \"Figure a) - Original Image\")\n g = polar(f,(250,250))\n ia.adshow(g, \"Figure b) - Image converted to polar coordinates, 0 to 2*pi\")\n\n g = polar(f,(250,250), np.pi)\n ia.adshow(g, \"Figure c) - Image converted to polar coordinates, 0 to pi\")", "Example 3 - non square image", "if testing:\n f = mpimg.imread('../data/astablet.tif')\n ia.adshow(f,'original')\n g = polar(f, (256,256))\n ia.adshow(g,'polar')\n f1 = f.transpose()\n ia.adshow(f1,'f1: transposed')\n g1 = polar(f1, (256,256))\n ia.adshow(g1,'polar of f1')\n ", "Equation\n$$ \\begin{matrix}\n x & = & r * cos \\theta \\\n y & = & r * sin \\theta \\\n\\end{matrix} $$ \n$$ \\begin{matrix}\n r & = & \\sqrt{x^2 + y^2}\n\\end{matrix} $$\nContributions\n\nLuiz Wetzel, 1st semester 2010\nDanilo Rodrigues Pereira, 1st semester 2011" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ZallenLab/denticleorganization
statistical_modeling_PYTHON/montecarlo_simulations/montecarlo_denticlepositions.ipynb
gpl-3.0
[ "import numpy as np\nimport pandas as pd\nimport scipy.stats as sps\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n\nfilepath = 'yw_all_RelativePosition.csv'\n\ndef TestPasses(pval, cutoff):\n if pval <= cutoff: \n return 'different'\n elif pval > cutoff: \n return 'same'\n\n\ndef IndivStatTest(simdata, filename_out):\n# IN: 3D np array, list of strings with length=arr[X,:,:] (array axis 0), name of csv file\n\n test_ks = sps.ks_2samp(invivo_d, simdata)\n # outputs [ks-statistic, p-value]\n\n with open(filename_out, 'a') as f:\n csv.writer(f).writerows([[column, test_ks[0], test_ks[1], TestPasses(test_ks[1], 0.05)]])\n \n return test_ks[1], TestPasses(test_ks[1], 0.05)\n ", "Generate uniform random distributions based on the number of cells given", "cellstosim = [(2,12)] #,(2,1140),(3,476),(4,130)]\niterations = 10\n\nfor elem in cellstosim: \n dent, cells = elem\n\n positions = np.zeros(((cells*dent),iterations))\n fname = str(dent)+'_montecarlo_positions_replicates.csv'\n \n for it in range(0,iterations): \n this = np.reshape(np.random.rand(cells, dent),(1,-1)) \n positions[:,it] = this\n \n\n np.savetxt(fname, positions, delimiter=',')\n\npositions.shape", "calculate KS test data, and count how many tests pass for each dentincell number (output in summarydata.csv file)", "def TestPasses(pval, cutoff):\n if pval <= cutoff: \n return 'different'\n elif pval > cutoff: \n return 'same'\n\n\ndef IndivStatTest(simdata, filename_out):\n# IN: 3D np array, list of strings with length=arr[X,:,:] (array axis 0), name of csv file\n\n test_ks = sps.ks_2samp(invivo_d, simdata)\n # outputs [ks-statistic, p-value]\n\n with open(filename_out, 'a') as f:\n csv.writer(f).writerows([[column, test_ks[0], test_ks[1], TestPasses(test_ks[1], 0.05)]])\n \n return test_ks[1], TestPasses(test_ks[1], 0.05)\n \n\ndicmap = ['null','A','B','C','D']\n\n\ninvivo_file = 'yw_all_RelativePosition.csv'\ndentnumbers = [1,2,3,4]\n\n\ninvivo_data = pd.read_csv(invivo_file)\n\nfor dentincell in dentnumbers: \n # clear out missing data\n invivo = invivo_data[dicmap[dentincell]]\n invivo = invivo.replace(0,np.nan) # turn zeros into NaNs\n invivo = invivo.dropna(how='all') # drop any column (axis=0) or row (axis=1) where ALL values are NaN\n\n invivo_d = invivo/100\n\n mcname = str(dentincell)+'_montecarlo_positions_replicates.csv'\n sfname = 'summarydata.csv'\n\n montecarlo = pd.read_csv(mcname,header=None)\n pf = []\n\n for column in montecarlo: \n pval, dif = IndivStatTest(montecarlo[column], 'montecarlo_kstests_'+str(dentincell)+'dent.csv')\n \n pf.append(dif)\n\n pfr = pd.Series(pf)\n with open(sfname,'a') as f:\n f.write(str(dentincell) + ',' + str(pfr[pfr == 'same'].count()) + ',\\n')\n\n\npfr = pd.Series(pf)\nwith open(sfname,'a') as f:\n f.write(str(dentincell) + ',' + str(pfr[pfr == 'same'].count()) + ',\\n')", "make basic plots", "hist, bins = np.histogram(positions,bins=50)\nwidth = 0.7 * (bins[1] - bins[0])\ncenter = (bins[:-1] + bins[1:]) / 2\nplt.bar(center, hist, align='center', width=width)", "pick out first 25 for plotting", "dentincell = 1\n\nmcname = str(dentincell)+'_montecarlo_positions_replicates.csv'\n\nmc = pd.read_csv(mcname,header=None)\nmc = mc.loc[:,0:49]\n\nmc.to_csv('25reps_'+mcname)\n\n\n\nmc" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
esa-as/2016-ml-contest
MSS_Xmas_Trees/ml_seg_sub5_STU.ipynb
apache-2.0
[ "Contest entry by Wouter Kimman\nStrategy:\n*classification of the wells\n*selective use of the training data based on this classification\n*stacking (Using initial predictions as features for the second phase)\n*undersampling for specific difficult facies", "from numpy.fft import rfft\nfrom scipy import signal\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport plotly.plotly as py\nimport pandas as pd\nimport timeit\nfrom sqlalchemy.sql import text\nfrom sklearn import tree\n#from sklearn.model_selection import LeavePGroupsOut\nfrom sklearn import metrics\nfrom sklearn.tree import export_graphviz\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn import linear_model\n#import sherlock.filesystem as sfs\n#import sherlock.database as sdb\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import train_test_split\nfrom scipy import stats\nfrom sklearn import svm\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.ensemble import GradientBoostingClassifier\n\n\ndef permute_facies_nr(predicted_super, predicted0, faciesnr):\n predicted=predicted0.copy()\n N=len(predicted)\n for ii in range(N):\n if predicted_super[ii]==1:\n predicted[ii]=faciesnr \n return predicted\n\ndef binarify(dataset0, facies_nr):\n dataset=dataset0.copy()\n mask=dataset != facies_nr\n dataset[mask]=0\n mask=dataset == facies_nr\n dataset[mask]=1 \n return dataset\n\n\ndef make_balanced_binary(df_in, faciesnr, factor):\n df=df_in.copy()\n y=df['Facies'].values\n y0=binarify(y, faciesnr)\n df['Facies']=y0\n\n df1=df[df['Facies']==1]\n X_part1=df1.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1)\n y_part1=df1['Facies'].values\n N1=len(df1)\n\n df2=df[df['Facies']==0]\n X_part0=df2.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1)\n y_part0=df2['Facies'].values\n N2=len(df2)\n print \"ratio now:\"\n print float(N2)/float(N1)\n ratio_to_keep=factor*float(N1)/float(N2)\n print \"ratio after:\"\n print float(N2)/(factor*float(N1))\n dum1, X_part2, dum2, y_part2 = train_test_split(X_part0, y_part0, test_size=ratio_to_keep, random_state=42)\n\n tmp=[X_part1, X_part2] \n X = pd.concat(tmp, axis=0)\n y = np.concatenate((y_part1, y_part2))\n return X, y\n\n\n\n\ndef phaseI_model(regime_train, correctA, go_B, clf, pred_array, pred_blind, features_blind): \n clf.fit(regime_train,correctA) \n predicted_B = clf.predict(go_B)\n pred_array = np.vstack((predicted_B, pred_array)) \n predicted_blind1 = clf.predict(features_blind)\n pred_blind = np.vstack((predicted_blind1, pred_blind)) \n return pred_array, pred_blind\n\ndef phaseI_model_scaled(regime_train, correctA, go_B, clf, pred_array, pred_blind, features_blind): \n regime_train=StandardScaler().fit_transform(regime_train)\n go_B=StandardScaler().fit_transform(go_B)\n features_blind=StandardScaler().fit_transform(features_blind)\n clf.fit(regime_train,correctA) \n predicted_B = clf.predict(go_B)\n pred_array = np.vstack((predicted_B, pred_array))\n predicted_blind1 = clf.predict(features_blind)\n pred_blind = np.vstack((predicted_blind1, pred_blind))\n return pred_array, pred_blind\n\ndef create_structure_for_regimes(df):\n allfeats=['GR','ILD_log10','DeltaPHI','PHIND','PE','NM_M','RELPOS']\n data_all = []\n for feat in allfeats:\n dff=df.groupby('Well 
Name').describe(percentiles=[0.1, 0.25, .5, 0.75, 0.9]).reset_index().pivot(index='Well Name', values=feat, columns='level_1')\n dff = dff.drop(['count'], axis=1)\n cols=dff.columns\n cols_new=[]\n for ii in cols:\n strin=feat + \"_\" + str(ii)\n cols_new.append(strin)\n dff.columns=cols_new \n dff1=dff.reset_index()\n if feat=='GR':\n data_all.append(dff1)\n else:\n data_all.append(dff1.iloc[:,1:])\n data_all = pd.concat(data_all,axis=1)\n return data_all ", "This is the only feature engineering used:", "\ndef magic(df):\n df1=df.copy()\n b, a = signal.butter(2, 0.2, btype='high', analog=False)\n feats0=['GR','ILD_log10','DeltaPHI','PHIND','PE','NM_M','RELPOS']\n #feats01=['GR','ILD_log10','DeltaPHI','PHIND']\n #feats01=['DeltaPHI']\n #feats01=['GR','DeltaPHI','PHIND']\n feats01=['GR',]\n feats02=['PHIND']\n #feats02=[]\n for ii in feats0:\n df1[ii]=df[ii]\n name1=ii + '_1'\n name2=ii + '_2'\n name3=ii + '_3'\n name4=ii + '_4'\n name5=ii + '_5'\n name6=ii + '_6'\n name7=ii + '_7'\n name8=ii + '_8'\n name9=ii + '_9'\n xx1 = list(df[ii])\n xx_mf= signal.medfilt(xx1,9)\n x_min1=np.roll(xx_mf, 1)\n x_min2=np.roll(xx_mf, -1)\n x_min3=np.roll(xx_mf, 3)\n x_min4=np.roll(xx_mf, 4)\n xx1a=xx1-np.mean(xx1)\n xx_fil = signal.filtfilt(b, a, xx1) \n xx_grad=np.gradient(xx1a) \n x_min5=np.roll(xx_grad, 3)\n #df1[name4]=xx_mf\n if ii in feats01: \n df1[name1]=x_min3\n df1[name2]=xx_fil\n df1[name3]=xx_grad\n df1[name4]=xx_mf \n df1[name5]=x_min1\n df1[name6]=x_min2\n df1[name7]=x_min4\n #df1[name8]=x_min5\n #df1[name9]=x_min2\n if ii in feats02:\n df1[name1]=x_min3\n df1[name2]=xx_fil\n df1[name3]=xx_grad\n #df1[name4]=xx_mf \n df1[name5]=x_min1\n #df1[name6]=x_min2 \n #df1[name7]=x_min4\n return df1\n\n \n\n\n \n \n\n#filename = 'training_data.csv'\nfilename = 'facies_vectors.csv'\ntraining_data0 = pd.read_csv(filename)\nfilename = 'validation_data_nofacies.csv'\ntest_data = pd.read_csv(filename)\n\n\nall_wells=training_data0['Well Name'].unique()\nprint all_wells\n\n# what to do with the naans\ntraining_data1=training_data0.copy()\nme_tot=training_data1['PE'].median()\nprint me_tot\nfor well in all_wells:\n df=training_data0[training_data0['Well Name'] == well] \n print well\n print len(df)\n df0=df.dropna()\n #print len(df0)\n if len(df0) > 0:\n print \"using median of local\"\n me=df['PE'].median()\n df=df.fillna(value=me)\n else:\n print \"using median of total\"\n df=df.fillna(value=me_tot)\n training_data1[training_data0['Well Name'] == well] =df\n \n\nprint len(training_data1)\ndf0=training_data1.dropna()\nprint len(df0)\n\n#remove outliers\ndf=training_data1.copy()\nprint len(df)\ndf0=df.dropna()\nprint len(df0)\ndf1 = df0.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1)\n#df=pd.DataFrame(np.random.randn(20,3))\n#df.iloc[3,2]=5\nprint len(df1)\ndf2=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)]\nprint len(df2)\n\ndf2a=df2[df2['Well Name'] != 'Recruit F9'] \n\ndata_all=create_structure_for_regimes(df2a)\ndata_test=create_structure_for_regimes(test_data)\n\ndata_test", "kmeans clustering to find natural clusters:", "\nframes = [data_all, data_test]\nX = pd.concat(frames)\nX\n\nX1 = X.drop(['Well Name'], axis=1)\n\nfrom sklearn.cluster import KMeans\nkmeans = KMeans(n_clusters=2).fit(X1)\nkmeans.labels_\n\nkmeans = KMeans(n_clusters=4).fit(X1)\nkmeans.labels_", "Through experimenting with the cluster size I've decided on 4 clusters.\nThis corresponds largely with the corresponding similarity in facies distribution\nCRAWFORD is most similar to ALEXANDER and LUKE. 
This will called cluster 1. (The only ones with facies 1)\nSTUART is most similar to KIMZEY and NOLAN This will be called cluster 2\nCollating the Data:\nbased on the regimes we determined", "# based on kmeans clustering\ndata=[]\ndf = training_data0[training_data0['Well Name'] == 'ALEXANDER D'] \ndata.append(df)\ndf = training_data0[training_data0['Well Name'] == 'LUKE G U'] \ndata.append(df)\nRegime_1 = pd.concat(data, axis=0)\nprint len(Regime_1)\n\ndata=[]\ndf = training_data0[training_data0['Well Name'] == 'KIMZEY A'] \ndata.append(df)\ndf = training_data0[training_data0['Well Name'] == 'NOLAN'] \ndata.append(df)\nRegime_2 = pd.concat(data, axis=0)\nprint len(Regime_2)\n\ndata=[]\ndf = training_data0[training_data0['Well Name'] == 'CHURCHMAN BIBLE'] \ndata.append(df)\ndf = training_data0[training_data0['Well Name'] == 'SHRIMPLIN'] \ndata.append(df)\ndf = training_data0[training_data0['Well Name'] == 'NEWBY'] \ndata.append(df)\ndf = training_data0[training_data0['Well Name'] == 'Recruit F9'] \ndata.append(df)\nRegime_3 = pd.concat(data, axis=0)\nprint len(Regime_3)\n\ndata=[]\ndf = training_data0[training_data0['Well Name'] == 'SHANKLE'] \ndata.append(df)\ndf = training_data0[training_data0['Well Name'] == 'CROSS H CATTLE'] \ndata.append(df)\nRegime_4 = pd.concat(data, axis=0)\nprint len(Regime_4)", "Split the data into 2 parts:\nfrom A We will make initial predictions\nfrom B we will make the final prediction(s)", "df0=Regime_1.dropna()\ndf1 = df0.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1)\ndf1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)]\ndf2a=magic(df1a)\n\n\ny=df2a['Facies'].values\nX=df2a.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1)\nregime1A_train, regime1B_train, regime1A_test, regime1B_test = train_test_split(X, y, test_size=0.5, random_state=42)\n\n\ndf0=Regime_2.dropna()\ndf1 = df0.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1)\ndf1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)]\ndf2b=magic(df1a)\ny= df2b['Facies'].values\nX= df2b.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1)\nregime2A_train, regime2B_train, regime2A_test, regime2B_test= train_test_split(X, y, test_size=0.5, random_state=42)\n\ndf0=Regime_3.dropna()\ndf1 = df0.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1)\ndf1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)]\ndf2c=magic(df1a)\ny=df2c['Facies'].values\nX= df2c.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1)\nregime3A_train, regime3B_train, regime3A_test, regime3B_test = train_test_split(X, y, test_size=0.5, random_state=42)\n\ndf0=Regime_4.dropna()\ndf1 = df0.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1)\ndf1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)]\ndf2d=magic(df1a)\ny=df2d['Facies'].values\nX= df2d.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1)\nregime4A_train, regime4B_train, regime4A_test, regime4B_test = train_test_split(X, y, test_size=0.5, random_state=42)\n\n", "Phase 1a:\n-Create predictions specifically for the most difficult facies\n-at this stage we focus on TP and FP only\n\ntraining for facies 9 specifically", "df0 = test_data[test_data['Well Name'] == 'STUART'] \ndf1 = df0.drop(['Formation', 'Well Name', 'Depth'], axis=1)\ndf1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)]\nblind=magic(df1a)\n\ndf1a.head()\n\n\nfeatures_blind = blind.drop(['Formation', 'Well Name', 'Depth'], axis=1)\n\n#============================================================\ndf0=training_data0.dropna()\ndf1 = df0.drop(['Formation', 'Well Name', 
'Depth','Facies'], axis=1)\ndf1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)]\nall1=magic(df1a)\nX, y = make_balanced_binary(all1, 9,6)\n#X, y = make_balanced_binary(all1, 9,9)\n#============================================================\ncorrect_train=y\n\nclf = RandomForestClassifier(max_depth = 6, n_estimators=600)\nclf.fit(X,correct_train)\n\npredicted_blind1 = clf.predict(features_blind)\n\npredicted_regime9=predicted_blind1.copy()\nprint(sum(predicted_regime9))", "training for facies 1 specifically", "\nfeatures_blind = blind.drop(['Formation', 'Well Name', 'Depth'], axis=1)\n\n#============================================================\ndf0=training_data0.dropna()\ndf1 = df0.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1)\ndf1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)]\nall1=magic(df1a)\nX, y = make_balanced_binary(all1, 1,5)\n#============================================================\n\n\n\n#=============================================\ngo_A=StandardScaler().fit_transform(X)\ngo_blind=StandardScaler().fit_transform(features_blind)\ncorrect_train_A=binarify(y, 1)\n \n\nclf = linear_model.LogisticRegression()\nclf.fit(go_A,correct_train_A)\npredicted_blind1 = clf.predict(go_blind)\n\nclf = KNeighborsClassifier(n_neighbors=5)\nclf.fit(go_A,correct_train_A) \npredicted_blind2 = clf.predict(go_blind)\n\nclf = svm.SVC(decision_function_shape='ovo')\nclf.fit(go_A,correct_train_A) \npredicted_blind3 = clf.predict(go_blind)\n\nclf = svm.LinearSVC()\nclf.fit(go_A,correct_train_A) \npredicted_blind4 = clf.predict(go_blind)\n\n\n\n#####################################\npredicted_blind=predicted_blind1+predicted_blind2+predicted_blind3+predicted_blind4\nfor ii in range(len(predicted_blind)):\n if predicted_blind[ii] > 3:\n predicted_blind[ii]=1\n else:\n predicted_blind[ii]=0 \n \nfor ii in range(len(predicted_blind)):\n if predicted_blind[ii] == 1 and predicted_blind[ii-1] == 0 and predicted_blind[ii+1] == 0:\n predicted_blind[ii]=0\n if predicted_blind[ii] == 1 and predicted_blind[ii-1] == 0 and predicted_blind[ii+2] == 0:\n predicted_blind[ii]=0 \n if predicted_blind[ii] == 1 and predicted_blind[ii-2] == 0 and predicted_blind[ii+1] == 0:\n predicted_blind[ii]=0 \n##################################### \n\nprint \"-------\"\npredicted_regime1=predicted_blind.copy()\n\nprint(sum(predicted_regime1))", "training for facies 5 specifically", "features_blind = blind.drop(['Formation', 'Well Name', 'Depth'], axis=1)\n\n#============================================================\ndf0=training_data0.dropna()\ndf1 = df0.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1)\ndf1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)]\nall1=magic(df1a)\nX, y = make_balanced_binary(all1, 5,10)\n#X, y = make_balanced_binary(all1, 5,16)\n#============================================================\n\ngo_A=StandardScaler().fit_transform(X)\ngo_blind=StandardScaler().fit_transform(features_blind)\ncorrect_train_A=binarify(y, 1)\n#============================================= \n\nclf = KNeighborsClassifier(n_neighbors=4,algorithm='brute')\nclf.fit(go_A,correct_train_A)\npredicted_blind1 = clf.predict(go_blind)\n\nclf = KNeighborsClassifier(n_neighbors=5,leaf_size=10)\nclf.fit(go_A,correct_train_A) \npredicted_blind2 = clf.predict(go_blind)\n\nclf = KNeighborsClassifier(n_neighbors=5)\nclf.fit(go_A,correct_train_A) \npredicted_blind3 = clf.predict(go_blind)\n\nclf = tree.DecisionTreeClassifier()\nclf.fit(go_A,correct_train_A) \npredicted_blind4 = clf.predict(go_blind)\n\nclf = 
tree.DecisionTreeClassifier()\nclf.fit(go_A,correct_train_A) \npredicted_blind5 = clf.predict(go_blind)\n\nclf = tree.DecisionTreeClassifier()\nclf.fit(go_A,correct_train_A) \npredicted_blind6 = clf.predict(go_blind)\n\n\n#####################################\npredicted_blind=predicted_blind1+predicted_blind2+predicted_blind3+predicted_blind4+predicted_blind5+predicted_blind6\nfor ii in range(len(predicted_blind)):\n if predicted_blind[ii] > 5:\n predicted_blind[ii]=1\n else:\n predicted_blind[ii]=0 \n##################################### \nprint \"-------\"\n\n\n##################################### \n\nprint \"-------\"\npredicted_regime5=predicted_blind.copy()\nprint(sum(predicted_regime5))", "training for facies 7 specifically", "features_blind = blind.drop(['Formation', 'Well Name', 'Depth'], axis=1)\n\n#============================================================\ndf0=training_data0.dropna()\ndf1 = df0.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1)\ndf1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)]\nall1=magic(df1a)\nX, y = make_balanced_binary(all1, 7,11)\nX, y = make_balanced_binary(all1, 7,13)\n#============================================================\n\ngo_A=StandardScaler().fit_transform(X)\ngo_blind=StandardScaler().fit_transform(features_blind)\ncorrect_train_A=binarify(y, 1)\n#============================================= \n\nclf = KNeighborsClassifier(n_neighbors=4,algorithm='brute')\nclf.fit(go_A,correct_train_A)\npredicted_blind1 = clf.predict(go_blind)\n\n\nclf = KNeighborsClassifier(n_neighbors=5,leaf_size=10)\nclf.fit(go_A,correct_train_A) \npredicted_blind2 = clf.predict(go_blind)\n\nclf = KNeighborsClassifier(n_neighbors=5)\nclf.fit(go_A,correct_train_A) \npredicted_blind3 = clf.predict(go_blind)\n\nclf = tree.DecisionTreeClassifier()\nclf.fit(go_A,correct_train_A) \npredicted_blind4 = clf.predict(go_blind)\n\nclf = tree.DecisionTreeClassifier()\nclf.fit(go_A,correct_train_A) \npredicted_blind5 = clf.predict(go_blind)\n\nclf = tree.DecisionTreeClassifier()\nclf.fit(go_A,correct_train_A) \npredicted_blind6 = clf.predict(go_blind)\n\n\n#####################################\npredicted_blind=predicted_blind1+predicted_blind2+predicted_blind3+predicted_blind4+predicted_blind5+predicted_blind6\nfor ii in range(len(predicted_blind)):\n if predicted_blind[ii] > 5:\n predicted_blind[ii]=1\n else:\n predicted_blind[ii]=0 \n\n\n##################################### \n\nprint \"-------\"\npredicted_regime7=predicted_blind.copy()\nprint(sum(predicted_regime7))", "PHASE Ib\nMaking several predictions using dataset A\nPREPARE THE BLIND DATA FOR SERIAL MODELLING", "#\n#blindwell='CHURCHMAN BIBLE'\n\n#df0 = training_data0[training_data0['Well Name'] == blindwell] \n#df1 = df0.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1)\n#df1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)]\n#blind=magic(df1a)\n#correct_facies_labels = blind['Facies'].values\n#features_blind = blind.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1)\n#pred_blind=0*correct_facies_labels\n\n\ndf0 = test_data[test_data['Well Name'] == 'STUART'] \ndf1 = df0.drop(['Formation', 'Well Name', 'Depth'], axis=1)\ndf1a=df0[(np.abs(stats.zscore(df1))<8).all(axis=1)]\nblind=magic(df1a)\npred_blind=0*predicted_regime7", "PREPARE THE DATA FOR SERIAL MODELLING\nThis could be done smarter but basically manual at this point\nselecting bias towards the REGIME the blind data has been classified as\nfor CHURCHMAN BIBLE this is regime 3\nFor CRAWFORD this is regime 1\nFor STUART this is regime 2", 
"main_regime=regime2A_train\nother1=regime1A_train\nother2=regime3A_train\nother3=regime4A_train\n\nmain_test=regime2A_test\nother1_test=regime1A_test\nother2_test=regime3A_test\nother3_test=regime4A_test\n\ntmp2=[regime1B_train, regime2B_train, regime3B_train, regime4B_train] \ngo_B= pd.concat(tmp2, axis=0)\ncorrectB=np.concatenate((regime1B_test, regime2B_test, regime3B_test, regime4B_test))\n\n\n\n#===================================================\ntmp1=[main_regime, other1, other2, other3] \nregime_train1= pd.concat(tmp1, axis=0)\ncorrectA1=np.concatenate((main_test, other1_test, other2_test, other3_test))\n#=================================================== \ntmp1=[main_regime, other2, other3] \nregime_train2= pd.concat(tmp1, axis=0)\ncorrectA2=np.concatenate((main_test, other2_test, other3_test))\n#===================================================\ntmp1=[main_regime, other1, other3] \nregime_train3= pd.concat(tmp1, axis=0)\ncorrectA3=np.concatenate((main_test, other1_test, other3_test))\n#=================================================== \ntmp1=[main_regime, other1, other2] \nregime_train4= pd.concat(tmp1, axis=0)\ncorrectA4=np.concatenate((main_test, other1_test, other2_test))\n#===================================================\ntmp1=[main_regime, other1] \nregime_train5= pd.concat(tmp1, axis=0)\ncorrectA5=np.concatenate((main_test, other1_test))\n#===================================================\ntmp1=[main_regime, other2] \nregime_train6= pd.concat(tmp1, axis=0)\ncorrectA6=np.concatenate((main_test, other2_test))\n#===================================================\ntmp1=[main_regime, other3] \nregime_train7= pd.concat(tmp1, axis=0)\ncorrectA7=np.concatenate((main_test, other3_test))\n#===================================================\ntmp1=[main_regime] \nregime_train8= pd.concat(tmp1, axis=0)\ncorrectA8=main_test", "Create several predictions, varying the dataset and the technique", "pred_array=0*correctB\npred_blind=np.zeros(len(features_blind))\n\nprint \"logistic regression\"\nclf = linear_model.LogisticRegression(class_weight='balanced')\npred_array, pred_blind=phaseI_model_scaled(regime_train1, correctA1, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"svm\"\nclf = svm.SVC(decision_function_shape='ovo',class_weight='balanced')\npred_array, pred_blind=phaseI_model_scaled(regime_train2, correctA2, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"svm lin\"\nclf = svm.LinearSVC(class_weight='balanced')\npred_array, pred_blind=phaseI_model_scaled(regime_train3, correctA3, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"svm3\"\nclf = svm.SVC(C = 100, cache_size=2400, class_weight=None, coef0=0.0,\n decision_function_shape=None, degree=3, gamma=0.01, kernel='rbf',\n max_iter=-1, probability=True, random_state=49, shrinking=True,\n tol=0.001, verbose=False)\npred_array, pred_blind=phaseI_model_scaled(regime_train4, correctA4, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"knn0\"\nclf = KNeighborsClassifier(n_neighbors=8)\npred_array, pred_blind=phaseI_model_scaled(regime_train4, correctA4, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"knn1\"\nclf = KNeighborsClassifier(n_neighbors=4,algorithm='brute')\npred_array, pred_blind=phaseI_model_scaled(regime_train5, correctA5, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"knn2\"\nclf = KNeighborsClassifier(n_neighbors=5,leaf_size=10,p=1)\npred_array, pred_blind=phaseI_model_scaled(regime_train6, correctA6, go_B, clf, pred_array, pred_blind, 
features_blind)\nprint \"knn3\"\nclf = KNeighborsClassifier(n_neighbors=7,p=1)\npred_array, pred_blind=phaseI_model_scaled(regime_train7, correctA7, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"knn3\"\nclf = KNeighborsClassifier(n_neighbors=6,p=1)\npred_array, pred_blind=phaseI_model_scaled(regime_train1, correctA1, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"knn4\"\nclf = KNeighborsClassifier(n_neighbors=8,p=1)\nregime_train1a=regime_train1.drop(['PE','GR_2','GR_3','GR_4','GR_5','GR_6','GR_7','PHIND_2','PHIND_3','PHIND_5'], axis=1)\ngo_Ba=go_B.drop(['PE','GR_2','GR_3','GR_4','GR_5','GR_6','GR_7','PHIND_2','PHIND_3','PHIND_5'], axis=1)\nfeatures_blinda=features_blind.drop(['PE','GR_2','GR_3','GR_4','GR_5','GR_6','GR_7','PHIND_2','PHIND_3','PHIND_5'], axis=1)\npred_array, pred_blind=phaseI_model_scaled(regime_train1a, correctA1, go_Ba, clf, pred_array, pred_blind, features_blinda)\nprint \"rf1\"\nclf = RandomForestClassifier(max_depth = 5, n_estimators=600)\npred_array, pred_blind=phaseI_model(regime_train2, correctA2, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"rf2\"\nclf = RandomForestClassifier(max_depth = 8, n_estimators=700)\npred_array, pred_blind=phaseI_model(regime_train3, correctA3, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"rf3\"\nclf = RandomForestClassifier(max_depth = 15, n_estimators=2000)\npred_array, pred_blind=phaseI_model(regime_train4, correctA4, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"rf4\"\nclf = RandomForestClassifier(max_depth = 7, n_estimators=800)\npred_array, pred_blind=phaseI_model(regime_train5, correctA5, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"rf5\"\nclf = RandomForestClassifier(max_depth = 10, n_estimators=1200,min_samples_leaf=12)\npred_array, pred_blind=phaseI_model(regime_train6, correctA6, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"rf6\"\nclf = RandomForestClassifier(max_depth = 12, n_estimators=800,min_samples_leaf=14)\npred_array, pred_blind=phaseI_model(regime_train7, correctA7, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"rf7\"\nclf= RandomForestClassifier(max_depth = 15, n_estimators=1600,min_samples_leaf=15)\npred_array, pred_blind=phaseI_model(regime_train1, correctA1, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"rf8\"\nclf= RandomForestClassifier(max_depth = 15, n_estimators=1600,min_samples_leaf=15)\npred_array, pred_blind=phaseI_model(regime_train8, correctA8, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"mlp\"\nclf= MLPClassifier(activation='logistic', alpha=0.01, batch_size='auto',\n beta_1=0.9, beta_2=0.999, early_stopping=False, epsilon=1e-08,\n hidden_layer_sizes=(100,), learning_rate='adaptive',\n learning_rate_init=0.001, max_iter=1000, momentum=0.9,\n nesterovs_momentum=True, power_t=0.5, random_state=49, shuffle=True,\n solver='adam', tol=0.0001, validation_fraction=0.1, verbose=False,\n warm_start=False)\npred_array, pred_blind=phaseI_model_scaled(regime_train1, correctA1, go_B, clf, pred_array, pred_blind, features_blind)\nprint \"gradient boost\"\nclf = GradientBoostingClassifier()\npred_array, pred_blind=phaseI_model(regime_train1, correctA1, go_B, clf, pred_array, pred_blind, features_blind)", "Phase II:\nStacking the predictions from phase Ib. 
\nNew predictions from data B\n\nFirst prediction of B data without Phase I input:", "clf = RandomForestClassifier(max_depth = 15, n_estimators=1600,min_samples_leaf=15)\nclf.fit(go_B,correctB)\npredicted_blind_PHASE_I = clf.predict(features_blind)\n#out_f1=metrics.f1_score(correct_facies_labels, predicted_blind_PHASE_I, average = 'micro')\n#print \"f1 score on the prediction of blind\"\n#print out_f1\n\npredicted_blind_PHASE_I", "Add the initial predictions as features:", "go_B_PHASE_II=go_B.copy()\nfeatures_blind_PHASE_II=features_blind.copy()\nfor kk in range(len(pred_array)-1):\n name='prediction_' + str(kk) \n featB=pred_array[kk,:]\n go_B_PHASE_II[name]=featB\n feat=pred_blind[kk,:]\n features_blind_PHASE_II[name]=feat", "Make a new prediction, with the best model on the full dataset B:", "clf.fit(go_B_PHASE_II,correctB)\npredicted_blind_PHASE_II = clf.predict(features_blind_PHASE_II)\n#out_f1=metrics.f1_score(correct_facies_labels, predicted_blind_PHASE_II, average = 'micro')\n#print \"f1 score on the prediction of blind\"\n#print out_f1", "TO DO- some more steps here\nPermute facies based on earlier prediction:", "print(sum(predicted_regime5))\npredicted_blind_PHASE_IIa=permute_facies_nr(predicted_regime5, predicted_blind_PHASE_II, 5)\nprint(sum(predicted_regime7))\npredicted_blind_PHASE_IIb=permute_facies_nr(predicted_regime7, predicted_blind_PHASE_IIa, 7)\nprint(sum(predicted_regime1))\npredicted_blind_PHASE_IIc=permute_facies_nr(predicted_regime1, predicted_blind_PHASE_IIb, 1)\nprint(sum(predicted_regime9))\npredicted_blind_PHASE_IId=permute_facies_nr(predicted_regime9, predicted_blind_PHASE_IIc, 9)\n\n\nsum(predicted_blind_PHASE_IIa-predicted_blind_PHASE_IId)\n\npredicted_STUART=predicted_blind_PHASE_IId\npredicted_STUART\n\npredicted_CRAWFORD=\narray([8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 5, 5, 5, 7, 7, 7, 4, 4, 4, 4, 4, 4,\n 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 8,\n 8, 8, 8, 8, 8, 8, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,\n 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 8, 8, 8, 6, 6,\n 6, 6, 6, 6, 6, 6, 6, 6, 6, 3, 3, 6, 8, 8, 6, 6, 6, 6, 6, 6, 6, 8, 8,\n 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 5, 5, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,\n 8, 8, 8, 8, 8, 8, 6, 6, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1,\n 1, 2, 2, 2, 2, 8, 8, 8, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 7, 7, 7,\n 7, 7, 7, 7, 8, 8, 8, 8, 4, 4, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,\n 2, 2, 2, 3, 3, 2, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 6, 2,\n 2, 2, 2, 2, 2, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 8, 8, 8, 8,\n 8, 8, 8, 8, 6, 6, 6, 6, 6, 6, 6, 6, 6, 3, 3, 2, 2, 2, 2, 2, 2, 3, 3,\n 2, 2, 2, 8, 8, 8, 8, 8, 8, 8, 8, 5, 7, 7, 7, 7, 7, 7, 7, 3, 3, 4, 4,\n 4, 4, 4, 4, 4, 4, 4, 7, 7, 8, 6, 8, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,\n 3, 3, 3, 3, 4, 8, 8, 8, 8, 8, 4, 4, 4, 4, 8, 4, 3, 3, 3, 3, 3, 2, 2,\n 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
amsehili/audio-segmentation-by-classification-tutorial
multiclass_audio_segmentation.ipynb
gpl-3.0
[ "Multiclass audio segmentation using auditok and GMMs\nWhat is audio segmentation?\nAudio segmentation means at least two things:\n\nFigure out, within an audio stream (audio file, network stream, audio device, etc.), regions that represent an audio activity (no matter what kind of activity) and those that represent a silence. This type of segmentation is rather referred to as Audio (or Acoustic) Activity Detection (AAD) and is a binary classification problem.\nTell apart, within an audio stream, the nature of audio activities (e.g. speech, engine, bird, glass break, etc.). This is a multi-class classification problem.\n\nThe former is a much simpler problem. In fact, if what we are looking for is to detect the presence of audio activities (of any kind), we can rely on simple parameters such as signal energy. auditok, a tool and API I published recently, can perfectly deal with this problem.\nAs for recognizing the nature of audio activities within an audio stream, this is a much more complex problem. Two kinds of schemes are used to deal with it: Segmentation then Classification or Segmentation by Classification.\nSegmentation then Classification\nIn this strategy, we would first run an AAD to distinguish between silence and noise. Afterwards, we try to classify a detected audio activity to figure out its sound class. The problem with this scheme is that if two or more activities are very close in time, they might be detected as one single activity, to which the audio classifier would give one single audio class.\nSegmentation by Classification\nIn this strategy, we would scan the audio stream to find out when each kind of sound starts and when it ends. This includes the distinction between activities that would have been considered as one single activity by a Segmentation then Classification strategy.\nThe goal of this tutorial is to address the problem of Segmentation by Classification of audio streams using auditok as the base tokenizer but with a more advanced classification algorithm. auditok implements an algorithm based on a 4-state automaton to extract subsequences of data that fulfill a certain number of criteria. It offers flexibility regarding the min./max. length of sequence to extract, max. continuous number of tolerated non valid (e.g. silent frames) symbols within a subsequence, etc.\nauditok is basically used an AAD tool and uses a validator based on the signal's log energy. It can however easily be used with other kinds of data and validator(s). You can check the documentation for illustrative auditok API examples with strings\nSound classes and data\nWe will consider 5 classes for this Segmentation by Classification problem:\n\nSpeech\nBreath\nWhistle\nSewing machine\nWrapping paper (noise of)\n\nWe will use manually cropped and annotated data to create a Gaussian Mixture Model (GMM) for each of these classes. We also create a GMM model for silence.\nThe data are recorded using the same hardware. For speech we use a pronunciation of English numbers from 1 to 5 to train the model.\nFor test, we use an about 50-second long audio file in which each of these classes happens at least twice at arbitrary times. Of course, there is absolutely no overlap between this file and any of training files. 
For speech, utterances in languages other than English are used (French and German).\nMethod\nUsually, to be classified, a sound is split into a sequence of very short overlapping frames from which we extract spectral and/or cepstral coefficients (although other kinds of coefficients, extracted from raw signal, such as Zero Crossing Rate can also be used). In this tutorial we are going to use MFCC coefficients (Mel-Frequency Cepstral Coefficients) and their first and second derivatives.\nI have been working on sound classification problems for about 6 years (environmental sounds, emotion in voice, speaker identification etc.). The least I can say is that when using a frame-level scheme, an aggregation strategy should be used. With GMM classifiers for instance, we would sum up the log likelihood of feature vectors the sound is made up of. The unknown sound is given the label of the GMM model that yields the highest sum of log likelihoods. Along with frame-level techniques, there are also segment-level techniques. SVM super vector-based approaches for instance transform a sequence of feature vectors into one huge vector (e.g. a super vector made up of stacked mean vectors of an adapted GMM Universal Background Model) that is classified by an SVM classifier.\nauditok's core tokenization class, StreamTokenizer, takes a decision only based on the current frame. It calls the read() method of a DataSource object (the data of which are to be segmented/tokenized) to read a frame at once and asks its encapsulated validator if it is a valid frame or not. The log energy based validator that is part of the auditok package would simply compute the log energy of the given frame and return True if it is >= a user-defined threshold, False otherwise.\nClearly we don't want the GMM-based validator we are going to implement for this tutorial to simply take a decision based on one single frame (though this would work for certain sound classes as we will see below). The solution to this problem is as follows: we implement a DataSource class that can not only return the current frame, but also the context of the frame with the desired scope. The context here stands for the k frames that are just before (i.e. history) and after (i.e. future) the current frame. Thus, each DataSource.read() can optionally (if scope > 0) return a sequence of vectors that the a GMM-based validator will score.\nLater on, we will try different scope values, but first let us explain the code piece by piece. 
Don't forget to run the code in each cell so it is taken into account in the following ones.\nCode\nImport required classes and modules\nWe will first import GMM class from sklearn.mixture, librosa for audio features extraction and some classes from auditok:", "import wave\nimport pickle\nimport numpy as np\nfrom sklearn.mixture import GMM\nimport librosa\nfrom auditok import DataValidator, ADSFactory, DataSource, StreamTokenizer, BufferAudioSource, player_for", "Define functions to compute MFCC features from librosa\nWe define functions that extract MFCC features from an audio signal or a file with optionally their delat-1 and delta-2 coefficients:", "def extract_mfcc(signal, sr=16000, n_mfcc=16, n_fft=256, hop_length=128, n_mels = 40, delta_1 = False, delta_2 = False):\n \n mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)\n \n if not (delta_1 or delta_2):\n return mfcc.T\n \n feat = [mfcc]\n \n if delta_1:\n mfcc_delta_1 = librosa.feature.delta(mfcc, order=1)\n feat.append(mfcc_delta_1)\n \n if delta_2:\n mfcc_delta_2 = librosa.feature.delta(mfcc, order=2)\n feat.append(mfcc_delta_2)\n \n return np.vstack(feat).T\n\n\ndef file_to_mfcc(filename, sr=16000, **kwargs):\n \n signal, sr = librosa.load(filename, sr = sr)\n \n return extract_mfcc(signal, sr, **kwargs)", "Define a class for multi-class classification using GMMs\nWe define our multi-class classifier that uses sklearn's GMM objects:", "class GMMClassifier():\n \n def __init__(self, models):\n \"\"\"\n models is a dictionary: {\"class_of_sound\" : GMM_model_for_that_class, ...}\n \"\"\" \n self.models = models\n \n def predict(self, data):\n \n result = []\n for cls in self.models:\n \n llk = self.models[cls].score_samples(data)[0]\n llk = np.sum(llk)\n result.append((cls, llk)) \n \n \"\"\"\n return classification result as a sorted list of tuples (\"class_of_sound\", log_likelihood)\n best class is the first element in the list\n \"\"\"\n return sorted(result, key=lambda f: - f[1])", "Default auditok frame validator class is AudioEnergyValidator which only computes the log energy of a given slice of signal (here referred to as frame or analysis window), returns True if the result equals or is above a certain threshold, and False otherwise.\nThus, AudioEnergyValidator is not capable of distinguishing between different classes of sounds such as speech, cough or a noise of an electric engine. To build a validator that can track a particular class of sound (e.g. speech or whistle) over an audio stream, we build a validator that uses a more sophisticated tool to decide whether a frame is valid (belongs to the class of interest) or not.\nA validator that relies on a GMM classifier\nThe following validator encapsulates an instance of the GMMClassifier defined above, and checks, for each frame, if the best label the GMMClassifier returns is the same as its target (i.e. class of interest).", "class ClassifierValidator(DataValidator):\n \n def __init__(self, classifier, target):\n \"\"\"\n classifier: a GMMClassifier object\n target: string\n \"\"\"\n self.classifier = classifier\n self.target = target\n \n def is_valid(self, data):\n \n r = self.classifier.predict(data)\n return r[0][0] == self.target", "A DataSource class that returns feature vectors\nAlthough auditok is meant for audio segmentation, its core class, StreamTokenizer, does not expect a a particular type of data (see API Tutorial for examples that use strings instead of audio data). 
It just expects an object that has a read() method with no arguments.\nIn the following, we will implement a class that encapsulates an audio stream as sequence of precomputed audio feature vectors (e.g. MFCC) and return one vector each time its read() method is called.\nFurthermore, we want our class to be able to return a vector and its context for a read() call. By context we mean k previous and k next vectors. This is a valuable feature, because, as we well see, for our audio classification problem, GMMs work better if object to classify contains multiple observations (i.e. vectors) and not only one single vector.", "class VectorDataSource(DataSource):\n \n def __init__(self, data, scope=0):\n self.scope = scope\n self._data = data\n self._current = 0\n \n def read(self):\n if self._current >= len(self._data):\n return None\n \n start = self._current - self.scope\n if start < 0:\n start = 0\n \n end = self._current + self.scope + 1\n \n self._current += 1\n return self._data[start : end]\n \n def set_scope(self, scope):\n self.scope = scope\n \n def rewind(self):\n self._current = 0", "Initialize some global variables\nIn the following we are going to define some global variables:", "\"\"\"\nSize of audio window for which MFCC coefficients are calculated\n\"\"\"\nANALYSIS_WINDOW = 0.02 # 0.02 second = 20 ms\n\n\"\"\"\nStep of ANALYSIS_WINDOW \n\"\"\"\nANALYSIS_STEP = 0.01 # 0.01 second overlap between consecutive windows\n\n\"\"\"\nnumber of vectors around the current vector to return.\nThis will cause VectorDataSource.read() method to return\na sequence of (SCOPE_LENGTH * 2 + 1) vectors (if enough\ndata is available), with the current vetor in the middle\n\"\"\"\nSCOPE_LENGTH = 10\n\n\"\"\"\nNumber of Mel filters\n\"\"\"\nMEL_FILTERS = 40\n\n\"\"\"\nNumber of MFCC coefficients to keep\n\"\"\"\nN_MFCC = 16\n\n\"\"\"\nSampling rate of audio data\n\"\"\"\nSAMPLING_RATE = 16000\n\n\"\"\"\nANALYSIS_WINDOW and ANALYSIS_STEP as number of samples\n\"\"\"\nBLOCK_SIZE = int(SAMPLING_RATE * ANALYSIS_WINDOW)\nHOP_SIZE = int(SAMPLING_RATE * ANALYSIS_STEP)\n\n\"\"\"\nCompute delta and delta-delta of MFCC coefficients ?\n\"\"\"\nDELTA_1 = True\nDELTA_2 = True\n\n\"\"\"\nWhere to find data\n\"\"\"\nPREFIX = \"data/train\"", "Train GMM models and initialize validators\nIn the following cell we create our GMM models (one per class of sound) using training audio files. 
We then create a validator object for each audio class:", "train_data = {}\n\ntrain_data[\"silence\"] = [\"silence_1.wav\", \"silence_2.wav\", \"silence_3.wav\"]\ntrain_data[\"speech\"] = [\"speech_1.wav\", \"speech_2.wav\", \"speech_3.wav\", \"speech_4.wav\", \"speech_5.wav\"]\ntrain_data[\"breath\"] = [\"breath_1.wav\", \"breath_2.wav\", \"breath_3.wav\", \"breath_4.wav\", \"breath_5.wav\"]\ntrain_data[\"whistle\"] = [\"whistle_1.wav\", \"whistle_2.wav\", \"whistle_3.wav\", \"whistle_4.wav\", \"whistle_5.wav\"]\ntrain_data[\"wrapping_paper\"] = [\"wrapping_paper.wav\"]\ntrain_data[\"sewing_machine\"] = [\"sewing_machine.wav\"]\n\nmodels = {}\n\n# build models\nfor cls in train_data:\n\n data = []\n for fname in train_data[cls]:\n data.append(file_to_mfcc(PREFIX + '/' + fname, sr=16000, n_mfcc=N_MFCC, n_fft=BLOCK_SIZE, hop_length=HOP_SIZE, n_mels=MEL_FILTERS, delta_1=DELTA_1, delta_2=DELTA_2))\n\n data = np.vstack(data)\n \n print(\"Class '{0}': {1} training vectors\".format(cls, data.shape[0]))\n\n mod = GMM(n_components=10)\n mod.fit(data)\n models[cls] = mod\n\ngmm_classifier = GMMClassifier(models)\n\n# create a validator for each sound class\nsilence_validator = ClassifierValidator(gmm_classifier, \"silence\")\nspeech_validator = ClassifierValidator(gmm_classifier, \"speech\")\nbreath_validator = ClassifierValidator(gmm_classifier, \"breath\")\nwhistle_validator = ClassifierValidator(gmm_classifier, \"whistle\")\nsewing_machine_validator = ClassifierValidator(gmm_classifier, \"sewing_machine\")\nwrapping_paper_validator = ClassifierValidator(gmm_classifier, \"wrapping_paper\")\n", "Or load pre-trained GMM models\nUnfortunately, sklrean's GMM implementation is not deterministic. If you'd prefer to use exactly the same models as mine, run the following cell:", "models = {}\n\nfor cls in [\"silence\" , \"speech\", \"breath\", \"whistle\", \"sewing_machine\", \"wrapping_paper\"]:\n fp = open(\"models/%s.gmm\" % (cls), \"r\")\n models[cls] = pickle.load(fp)\n fp.close()\n\ngmm_classifier = GMMClassifier(models)\n\n# create a validator for each sound class\nsilence_validator = ClassifierValidator(gmm_classifier, \"silence\")\nspeech_validator = ClassifierValidator(gmm_classifier, \"speech\")\nbreath_validator = ClassifierValidator(gmm_classifier, \"breath\")\nwhistle_validator = ClassifierValidator(gmm_classifier, \"whistle\")\nsewing_machine_validator = ClassifierValidator(gmm_classifier, \"sewing_machine\")\nwrapping_paper_validator = ClassifierValidator(gmm_classifier, \"wrapping_paper\")", "If you wan to save your models to disk, run the following code", "# if you wan to save models\nfor cls in train_data:\n fp = open(\"models/%s.gmm\" % (cls), \"wb\")\n pickle.dump(models[cls], fp, pickle.HIGHEST_PROTOCOL)\n fp.close()\n", "Transform stream to be analyzed into a sequence of vectors\nWe need to transform the audio stream we want to analyze into a sequence of MFCC vectors. We then use the sequence of MFCC vectors to create a VectorDataSource object that will make it possible to read a vector and its surrounding context if required:", "# transform audio stream to be analyzed into a sequence of MFCC vectors\n# create a DataSource object using MFCC vectors\nmfcc_data_source = VectorDataSource(data=file_to_mfcc(\"data/analysis_stream.wav\",\n sr=16000, n_mfcc=N_MFCC,\n n_fft=BLOCK_SIZE, hop_length=HOP_SIZE,\n n_mels=MEL_FILTERS, delta_1=DELTA_1,\n delta_2=DELTA_2), scope=SCOPE_LENGTH)", "Initialize the tokenizer object\nWe will use the same tokenizer object all over our tests. 
We however need to set a different validator to track a particular sound class (examples below).", "# create a tokenizer\nanalysis_window_per_second = 1. / ANALYSIS_STEP\n\nmin_seg_length = 0.5 # second, min length of an accepted audio segment\nmax_seg_length = 10 # seconds, max length of an accepted audio segment\nmax_silence = 0.3 # second, max length tolerated of tolerated continuous signal that's not from the same class\n\ntokenizer = StreamTokenizer(validator=speech_validator, min_length=int(min_seg_length * analysis_window_per_second),\n max_length=int(max_seg_length * analysis_window_per_second),\n max_continuous_silence= max_silence * analysis_window_per_second,\n mode = StreamTokenizer.DROP_TRAILING_SILENCE)", "Read audio signal used for visualization purposes", "# read all audio data from stream\nwfp = wave.open(\"data/analysis_stream.wav\")\naudio_data = wfp.readframes(-1)\nwidth = wfp.getsampwidth()\nwfp.close()\n\n# data as numpy array will be used to plot signal\nfmt = {1: np.int8 , 2: np.int16, 4: np.int32}\nsignal = np.array(np.frombuffer(audio_data, dtype=fmt[width]), dtype=np.float64)", "Define plot function", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.pylab as pylab\npylab.rcParams['figure.figsize'] = 24, 18\n\ndef plot_signal_and_segmentation(signal, sampling_rate, segments=[]):\n _time = np.arange(0., np.ceil(float(len(signal))) / sampling_rate, 1./sampling_rate )\n if len(_time) > len(signal):\n _time = _time[: len(signal) - len(_time)]\n \n pylab.subplot(211)\n\n for seg in segments:\n \n fc = seg.get(\"fc\", \"g\")\n ec = seg.get(\"ec\", \"b\")\n lw = seg.get(\"lw\", 2)\n alpha = seg.get(\"alpha\", 0.4)\n \n ts = seg[\"timestamps\"]\n \n # plot first segmentation outside loop to show one single legend for this class\n p = pylab.axvspan(ts[0][0], ts[0][1], fc=fc, ec=ec, lw=lw, alpha=alpha, label = seg.get(\"title\", \"\"))\n \n for start, end in ts[1:]:\n p = pylab.axvspan(start, end, fc=fc, ec=ec, lw=lw, alpha=alpha)\n \n \n pylab.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,\n borderaxespad=0., fontsize=22, ncol=2)\n \n pylab.plot(_time, signal)\n \n pylab.xlabel(\"Time (s)\", fontsize=22)\n pylab.ylabel(\"Signal Amplitude\", fontsize=22)\n pylab.show()\n", "Read and plot manual annotations (used for visualization an comparison purposes)", "annotations = {}\n\nts = [line.rstrip(\"\\r\\n\\t \").split(\" \") for line in open(\"data/speech.lst\").readlines()]\nts = [(float(t[0]), float(t[1])) for t in ts]\nannotations[\"speech\"] = {\"fc\" : \"r\", \"ec\" : \"r\", \"lw\" : 0, \"alpha\" : 0.4, \"title\" : \"Speech\", \"timestamps\" : ts}\n\nts = [line.rstrip(\"\\r\\n\\t \").split(\" \") for line in open(\"data/breath.lst\").readlines()]\nts = [(float(t[0]), float(t[1])) for t in ts]\nannotations[\"breath\"] = {\"fc\" : \"y\", \"ec\" : \"y\", \"lw\" : 0, \"alpha\" : 0.4, \"title\" : \"Breath\", \"timestamps\" : ts}\n\nts = [line.rstrip(\"\\r\\n\\t \").split(\" \") for line in open(\"data/whistle.lst\").readlines()]\nts = [(float(t[0]), float(t[1])) for t in ts]\nannotations[\"whistle\"] = {\"fc\" : \"m\", \"ec\" : \"m\", \"lw\" : 0, \"alpha\" : 0.4, \"title\" : \"Whistle\", \"timestamps\" : ts}\n\n\nts = [line.rstrip(\"\\r\\n\\t \").split(\" \") for line in open(\"data/sewing_machine.lst\").readlines()]\nts = [(float(t[0]), float(t[1])) for t in ts]\nannotations[\"sewing_machine\"] = {\"fc\" : \"g\", \"ec\" : \"g\", \"lw\" : 0, \"alpha\" : 0.4, \"title\" : \"Sewing machine\", \"timestamps\" : ts}\n\nts = [line.rstrip(\"\\r\\n\\t 
\").split(\" \") for line in open(\"data/wrapping_paper.lst\").readlines()]\nts = [(float(t[0]), float(t[1])) for t in ts]\nannotations[\"wrapping_paper\"] = {\"fc\" : \"b\", \"ec\" : \"b\", \"lw\" : 0, \"alpha\" : 0.4, \"title\" : \"Wrapping paper\", \"timestamps\" : ts}\n\ndef plot_annot():\n plot_signal_and_segmentation(signal, SAMPLING_RATE,\n [annotations[\"speech\"],\n annotations[\"breath\"],\n annotations[\"whistle\"],\n annotations[\"sewing_machine\"],\n annotations[\"wrapping_paper\"]\n ])\n\nplot_annot()", "Try out the the first segmentation with sewing_machine class\nNow, let us start off with a somehow easy class. The sewing_machine is a good candidate. This sound has strong components in low frequencies and less strong high frequencies that both remain very stable over time. It is easy to distinguish from our other classes, even with absolute frame-level validation (i.e. no context, scope = 0)", "tokenizer = StreamTokenizer(validator=speech_validator, min_length= int(0.5 * analysis_window_per_second),\n max_length=int(15 * analysis_window_per_second),\n max_continuous_silence= 0.3 * analysis_window_per_second,\n mode = StreamTokenizer.DROP_TRAILING_SILENCE)\ntokenizer.validator = sewing_machine_validator\nmfcc_data_source.rewind()\nmfcc_data_source.scope = 0\ntokens = tokenizer.tokenize(mfcc_data_source)\nts = [(t[1] * ANALYSIS_STEP, t[2] * ANALYSIS_STEP) for t in tokens]\n\nseg = {\"fc\" : \"g\", \"ec\" : \"g\", \"lw\" : 0, \"alpha\" : 0.3, \"title\" : \"Sewing machine (auto)\", \"timestamps\" : ts}\n\nplot_signal_and_segmentation(signal, SAMPLING_RATE, [seg])", "doesn't, please re-run the training to obtain (hopefully) better models or use the models the worked for me by running the respective cell.\nNote that, we used a scope size of zero. That means that only one single vector is returned by the read() and evaluated by is_valid() methods. This absolute frame-level classification scheme will not have as much success for less stationary classes such speech. Let us try the same strategy with class breath.\nTrack breath with an absolute frame-level scheme\nWe will keep the same tokenizer but set its validator object to breath_validator so that it tracks breath over the stream:", "tokenizer.validator = breath_validator\nmfcc_data_source.rewind()\nmfcc_data_source.scope = 0\ntokens = tokenizer.tokenize(mfcc_data_source)\nts = [(t[1] * ANALYSIS_STEP, t[2] * ANALYSIS_STEP) for t in tokens]\nseg = {\"fc\" : \"y\", \"ec\" : \"y\", \"lw\" : 0, \"alpha\" : 0.4, \"title\" : \"Breath (auto)\", \"timestamps\" : ts}\nplot_signal_and_segmentation(signal, SAMPLING_RATE, [seg])", "As you can see, this results in a considerable number of false alarms, almost all silence is classified as breath (remember that you can plot annotations using plot_annot()).\nThe good news is that only silence and no other class is wrongly classified as breath. Hence, there are good chances that using another audio feature such energy would help.\nTrack breath with a larger scope\nLet us now use a wider scope, so that a vector is evaluated within its context. We will set the scope of our mfcc_data_source to 25. 
Note that by reading 25 vectors before and after the current vector, we are analyzing audio chunks of 51 * 10 = 510 ms (the analysis step is 10 ms).", "mfcc_data_source.rewind()\nmfcc_data_source.scope = 25\ntokens = tokenizer.tokenize(mfcc_data_source)\nts = [(t[1] * ANALYSIS_STEP, t[2] * ANALYSIS_STEP) for t in tokens]\nseg = {\"fc\" : \"y\", \"ec\" : \"y\", \"lw\" : 0, \"alpha\" : 0.4, \"title\" : \"Breath (auto)\", \"timestamps\" : ts}\nplot_signal_and_segmentation(signal, SAMPLING_RATE, [seg])", "Using a wider scope yields a much better segmentation for the breath class (again, if you are not getting the same result, please load the models that worked for me, trained using the SAME training data). The number of false alarms is greatly reduced.\nThe scope size should however be chosen with care. Using very large scopes may lead to poorer temporal precision or increase false alarms.\nTrack all classes, this is multi-class segmentation!\nNow we are going to automatically track all our classes (except silence) within the stream. You might have noticed that the end of the stream contains the most challenging part. It contains 5 juxtaposed sections representing our 5 classes with almost no silence between them. If we intended to use a Segmentation then Classification scheme, an energy-based detector would definitely fail to isolate the five events.\nLet us see if we can do better with a Segmentation by Classification scheme. As you know, StreamTokenizer objects are binary classifiers. For our multi-class classification problem, we will use as many StreamTokenizer objects as there are sound classes.\nWe will therefore run a tokenizer for each class and then plot all the obtained results together. Although one can use some workarounds to speed up processing (e.g. use a DataSource of precomputed log likelihoods instead of MFCC vectors, etc.), this is not the goal of this tutorial. 
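For the curious, a rough, untested sketch of that speed-up idea is given below; it reuses the models dictionary, mfcc_data_source and the VectorDataSource class defined earlier, and the PrecomputedLikelihoodValidator name is mine, not part of auditok:\npython\nimport numpy as np\n\nclass_names = sorted(models.keys())\n\n# Score every chunk returned by mfcc_data_source once under every class model,\n# instead of re-scoring the MFCC vectors inside each per-class tokenizer pass\nmfcc_data_source.rewind()\nall_scores = []\nwhile True:\n    chunk = mfcc_data_source.read()\n    if chunk is None:\n        break\n    all_scores.append([np.sum(models[c].score_samples(chunk)[0]) for c in class_names])\nall_scores = np.asarray(all_scores)\n\n# A DataSource over per-class log-likelihood rows instead of MFCC vectors\nllk_source = VectorDataSource(data=all_scores, scope=0)\n\nclass PrecomputedLikelihoodValidator(DataValidator):\n    # Accepts a frame if its target class has the highest precomputed log-likelihood\n    def __init__(self, class_names, target):\n        self.class_names = class_names\n        self.target = target\n\n    def is_valid(self, scores):\n        summed = np.atleast_2d(scores).sum(axis=0)\n        return self.class_names[int(np.argmax(summed))] == self.target\n\n# Usage would then look like:\n# tokenizer.validator = PrecomputedLikelihoodValidator(class_names, \"speech\")\n# llk_source.rewind()\n# tokens = tokenizer.tokenize(llk_source)\n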
The following code will plot the automatic segmentation followed by the manual annotation.", "segments = []\nmfcc_data_source.scope = 25\n\n# track speech\nmfcc_data_source.rewind()\ntokenizer.validator = speech_validator\ntokens = tokenizer.tokenize(mfcc_data_source)\nspeech_ts = [(t[1] * ANALYSIS_STEP, t[2] * ANALYSIS_STEP) for t in tokens]\nseg = {\"fc\" : \"r\", \"ec\" : \"r\", \"lw\" : 0, \"alpha\" : 0.4, \"title\" : \"Speech (auto)\", \"timestamps\" : speech_ts}\nsegments.append(seg)\n\n\n# track breath\nmfcc_data_source.rewind()\ntokenizer.validator = breath_validator\ntokens = tokenizer.tokenize(mfcc_data_source)\nbreath_ts = [(t[1] * ANALYSIS_STEP, t[2] * ANALYSIS_STEP) for t in tokens]\nseg = {\"fc\" : \"y\", \"ec\" : \"y\", \"lw\" : 0, \"alpha\" : 0.4, \"title\" : \"Breath (auto)\", \"timestamps\" : breath_ts}\nsegments.append(seg)\n\n# track whistle\nmfcc_data_source.rewind()\ntokenizer.validator = whistle_validator\ntokens = tokenizer.tokenize(mfcc_data_source)\nwhistle_ts = [(t[1] * ANALYSIS_STEP, t[2] * ANALYSIS_STEP) for t in tokens]\nseg = {\"fc\" : \"m\", \"ec\" : \"m\", \"lw\" : 0, \"alpha\" : 0.4, \"title\" : \"Whistle (auto)\", \"timestamps\" : whistle_ts}\nsegments.append(seg)\n\n# track sewing_machine\nmfcc_data_source.rewind()\ntokenizer.validator = sewing_machine_validator\ntokens = tokenizer.tokenize(mfcc_data_source)\nsewing_machine_ts = [(t[1] * ANALYSIS_STEP, t[2] * ANALYSIS_STEP) for t in tokens]\nseg = {\"fc\" : \"g\", \"ec\" : \"g\", \"lw\" : 0, \"alpha\" : 0.4, \"title\" : \"Sewing machine (auto)\", \"timestamps\" : sewing_machine_ts}\nsegments.append(seg)\n\n# track wrapping_paper\nmfcc_data_source.rewind()\ntokenizer.validator = wrapping_paper_validator\ntokens = tokenizer.tokenize(mfcc_data_source)\nwrapping_paper_ts = [(t[1] * ANALYSIS_STEP, t[2] * ANALYSIS_STEP) for t in tokens]\nseg = {\"fc\" : \"b\", \"ec\" : \"b\", \"lw\" : 0, \"alpha\" : 0.4, \"title\" : \"Wrapping paper (auto)\", \"timestamps\" : wrapping_paper_ts}\nsegments.append(seg)\n\n# plot automatic segmentation\nplot_signal_and_segmentation(signal, SAMPLING_RATE, segments)\n\n# plot manual segmentation\nplot_annot()", "If it wasn't for breath false alarm, we'd have a perfect automatic output...\nIf you want to play some audio segments, prepare this...", "# BufferAudioSource is useful if we want to navigate quickly within audio data and play\nbas = BufferAudioSource(audio_data, SAMPLING_RATE, width, 1)\nbas.open()\n\n# audio playback requires pyaudio\nplayer = player_for(bas)", "Play first instance of wrapping_paper class", "start , end = wrapping_paper_ts[0]\nbas.set_time_position(start)\ndata = bas.read(int((end-start) * bas.get_sampling_rate()))\nplayer.play(data)", "Conclusion\nThis tutorial addresses the problem of audio Segmentation by Classification. The presented technique uses GMM as classification method and leverages auditok sequence extraction algorithm to track a particular kind of sound over an audio stream. The user is encouraged to experience other classification methods.\nIf you're interested in sound recognition and audio indexation problematics, I'll be happy to share my experience with you!\nRepository\nOn Github\nAuthor\nAmine SEHILI &#97;&#109;&#105;&#110;&#101;&#46;&#115;&#101;&#104;&#105;&#108;&#105;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Yu-Group/scikit-learn-sandbox
jupyter/backup_deprecated_nbs/10_RIT_initial_setup.ipynb
mit
[ "Key Requirements for the iRF scikit-learn implementation\n\nThe following is a documentation of the main requirements for the iRF implementation\n\nTypical Setup", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nimport pydotplus\nimport numpy as np\nimport pprint\nfrom sklearn import metrics\nfrom sklearn import tree\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn import tree\nfrom sklearn.tree import _tree\nfrom IPython.display import display, Image\nfrom sklearn.datasets import load_iris\nfrom sklearn.datasets import load_breast_cancer\n\nfrom functools import reduce\n\n# Import our custom utilities\nfrom imp import reload\nfrom utils import utils\nreload(utils)", "Step 1: Fit the Initial Random Forest\n\nJust fit every feature with equal weights per the usual random forest code e.g. DecisionForestClassifier in scikit-learn", "%timeit\nX_train, X_test, y_train, y_test, rf = utils.generate_rf_example(sklearn_ds = load_breast_cancer()\n , train_split_propn = 0.9\n , n_estimators = 3\n , random_state_split = 2017\n , random_state_classifier = 2018)", "Check out the data", "print(\"Training feature dimensions\", X_train.shape, sep = \":\\n\")\nprint(\"\\n\")\nprint(\"Training outcome dimensions\", y_train.shape, sep = \":\\n\")\nprint(\"\\n\")\nprint(\"Test feature dimensions\", X_test.shape, sep = \":\\n\")\nprint(\"\\n\")\nprint(\"Test outcome dimensions\", y_test.shape, sep = \":\\n\")\nprint(\"\\n\")\nprint(\"first 5 rows of the training set features\", X_train[:5], sep = \":\\n\")\nprint(\"\\n\")\nprint(\"first 5 rows of the training set outcomes\", y_train[:5], sep = \":\\n\")\n\nX_train.shape[0]\nbreast_cancer = load_breast_cancer()\nbreast_cancer.data.shape[0]", "Step 2: For each Tree get core leaf node features\n\nFor each decision tree in the classifier, get:\nThe list of leaf nodes\nDepth of the leaf node \nLeaf node predicted class i.e. {0, 1}\nProbability of predicting class in leaf node\nNumber of observations in the leaf node i.e. 
weight of node\n\n\n\nGet the 3 decision trees to use for testing", "# Import our custom utilities\nrf.n_estimators\n\nestimator0 = rf.estimators_[0] # First tree\nestimator1 = rf.estimators_[1] # Second tree\nestimator2 = rf.estimators_[2] # Third tree", "Design the single function to get the key tree information\nGet data from the first, second and third decision trees", "tree_dat0 = utils.getTreeData(X_train = X_train, dtree = estimator0, root_node_id = 0)\ntree_dat1 = utils.getTreeData(X_train = X_train, dtree = estimator1, root_node_id = 0)\ntree_dat2 = utils.getTreeData(X_train = X_train, dtree = estimator2, root_node_id = 0)", "Decision Tree 0 (First) - Get output\nCheck the output against the decision tree graph", "# Now plot the trees individually\nutils.draw_tree(decision_tree = estimator0)\n\nutils.prettyPrintDict(inp_dict = tree_dat0)\n\n# Count the number of samples passing through the leaf nodes\nsum(tree_dat0['tot_leaf_node_values'])", "Step 3: Get the Gini Importance of Weights for the Random Forest\n\nFor the first random forest we just need to get the Gini Importance of Weights\n\nStep 3.1 Get them numerically - most important", "feature_importances = rf.feature_importances_\nstd = np.std([dtree.feature_importances_ for dtree in rf.estimators_]\n , axis=0)\nfeature_importances_rank_idx = np.argsort(feature_importances)[::-1]\n\n# Check that the feature importances are standardized to 1\nprint(sum(feature_importances))", "Step 3.2 Display Feature Importances Graphically (just for interest)", "# Print the feature ranking\nprint(\"Feature ranking:\")\n\nfor f in range(X_train.shape[1]):\n print(\"%d. feature %d (%f)\" % (f + 1\n , feature_importances_rank_idx[f]\n , feature_importances[feature_importances_rank_idx[f]]))\n \n# Plot the feature importances of the forest\nplt.figure()\nplt.title(\"Feature importances\")\nplt.bar(range(X_train.shape[1])\n , feature_importances[feature_importances_rank_idx]\n , color=\"r\"\n , yerr = std[feature_importances_rank_idx], align=\"center\")\nplt.xticks(range(X_train.shape[1]), feature_importances_rank_idx)\nplt.xlim([-1, X_train.shape[1]])\nplt.show()", "Putting it all together\n\nCreate a dictionary object to include all of the random forest objects", "# CHECK: If the random forest objects are going to be really large in size\n# we could just omit them and only return our custom summary outputs\n\nrf_metrics = utils.getValidationMetrics(rf, y_true = y_test, X_test = X_test)\nall_rf_outputs = {\"rf_obj\" : rf,\n \"feature_importances\" : feature_importances,\n \"feature_importances_rank_idx\" : feature_importances_rank_idx,\n \"rf_metrics\" : rf_metrics}\n\n# CHECK: The following should be parallelized!\n# CHECK: Whether we can maintain X_train correctly as required\nfor idx, dtree in enumerate(rf.estimators_):\n dtree_out = utils.getTreeData(X_train = X_train, dtree = dtree, root_node_id = 0)\n # Append output to dictionary\n all_rf_outputs[\"dtree\" + str(idx)] = dtree_out", "Check the final dictionary of outputs", "utils.prettyPrintDict(inp_dict = all_rf_outputs)", "Now we can start setting up the RIT class\nOverview\nAt its core, the RIT is comprised of 3 main modules\n* FILTERING: Subsetting to either the 1's or the 0's\n* RANDOM SAMPLING: The path-nodes in a weighted manner, with/ without replacement, within tree/ outside tree\n* INTERSECTION: Intersecting the selected node paths in a systematic manner\nFor now we will just work with a single decision tree's outputs", "utils.prettyPrintDict(inp_dict = 
all_rf_outputs['rf_metrics'])\n\nall_rf_outputs['dtree0']", "Get the leaf node 1's paths\nGet the unique feature paths where the leaf node predicted class is just 1", "uniq_feature_paths = all_rf_outputs['dtree0']['all_uniq_leaf_paths_features']\nleaf_node_classes = all_rf_outputs['dtree0']['all_leaf_node_classes']\nones_only = [i for i, j in zip(uniq_feature_paths, leaf_node_classes) \n if j == 1]\nones_only\n\nprint(\"Number of leaf nodes\", len(all_rf_outputs['dtree0']['all_uniq_leaf_paths_features']), sep = \":\\n\")\nprint(\"Number of leaf nodes with 1 class\", len(ones_only), sep = \":\\n\")\n\n# Just pick the last seven cases, we are going to manually construct\n# binary RIT of depth 3 i.e. max 2**3 -1 = 7 intersecting nodes\nones_only_seven = ones_only[-7:]\nones_only_seven\n\n# Construct a binary version of the RIT manually!\n# This should come in useful for unit tests!\nnode0 = ones_only_seven[-1]\nnode1 = np.intersect1d(node0, ones_only_seven[-2])\nnode2 = np.intersect1d(node1, ones_only_seven[-3])\nnode3 = np.intersect1d(node1, ones_only_seven[-4])\nnode4 = np.intersect1d(node0, ones_only_seven[-5])\nnode5 = np.intersect1d(node4, ones_only_seven[-6])\nnode6 = np.intersect1d(node4, ones_only_seven[-7])\n\nintersected_nodes_seven = [node0, node1, node2, node3, node4, node5, node6]\n\nfor idx, node in enumerate(intersected_nodes_seven):\n print(\"node\" + str(idx), node)\n\nrit_output = reduce(np.union1d, (node2, node3, node5, node6))\nrit_output" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
xaibeing/cn-deep-learning
image-classification/dlnd_image_classification.ipynb
mit
[ "图像分类\n在此项目中,你将对 CIFAR-10 数据集 中的图片进行分类。该数据集包含飞机、猫狗和其他物体。你需要预处理这些图片,然后用所有样本训练一个卷积神经网络。图片需要标准化(normalized),标签需要采用 one-hot 编码。你需要应用所学的知识构建卷积的、最大池化(max pooling)、丢弃(dropout)和完全连接(fully connected)的层。最后,你需要在样本图片上看到神经网络的预测结果。\n获取数据\n请运行以下单元,以下载 CIFAR-10 数据集(Python版)。", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\n# Use Floyd's cifar-10 dataset if present\nfloyd_cifar10_location = '/input/cifar-10/python.tar.gz'\nif isfile(floyd_cifar10_location):\n tar_gz_path = floyd_cifar10_location\nelse:\n tar_gz_path = 'cifar-10-python.tar.gz'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(tar_gz_path):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n tar_gz_path,\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open(tar_gz_path) as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)", "探索数据\n该数据集分成了几部分/批次(batches),以免你的机器在计算时内存不足。CIFAR-10 数据集包含 5 个部分,名称分别为 data_batch_1、data_batch_2,以此类推。每个部分都包含以下某个类别的标签和图片:\n\n飞机\n汽车\n鸟类\n猫\n鹿\n狗\n青蛙\n马\n船只\n卡车\n\n了解数据集也是对数据进行预测的必经步骤。你可以通过更改 batch_id 和 sample_id 探索下面的代码单元。batch_id 是数据集一个部分的 ID(1 到 5)。sample_id 是该部分中图片和标签对(label pair)的 ID。\n问问你自己:“可能的标签有哪些?”、“图片数据的值范围是多少?”、“标签是按顺序排列,还是随机排列的?”。思考类似的问题,有助于你预处理数据,并使预测结果更准确。", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 1\nsample_id = 5\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)", "实现预处理函数\n标准化\n在下面的单元中,实现 normalize 函数,传入图片数据 x,并返回标准化 Numpy 数组。值应该在 0 到 1 的范围内(含 0 和 1)。返回对象应该和 x 的形状一样。", "def normalize(x):\n \"\"\"\n Normalize a list of sample image data in the range of 0 to 1\n : x: List of image data. The image shape is (32, 32, 3)\n : return: Numpy array of normalize data\n \"\"\"\n # TODO: Implement Function\n return x / 255\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)", "One-hot 编码\n和之前的代码单元一样,你将为预处理实现一个函数。这次,你将实现 one_hot_encode 函数。输入,也就是 x,是一个标签列表。实现该函数,以返回为 one_hot 编码的 Numpy 数组的标签列表。标签的可能值为 0 到 9。每次调用 one_hot_encode 时,对于每个值,one_hot 编码函数应该返回相同的编码。确保将编码映射保存到该函数外面。\n提示:不要重复发明轮子。", "from sklearn.preprocessing import OneHotEncoder\nenc = OneHotEncoder()\neach_label = np.array(list(range(10))).reshape(-1,1)\nenc.fit(each_label)\nprint(enc.n_values_)\nprint(enc.feature_indices_)\n\ndef one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. 
Return a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n # TODO: Implement Function\n X = np.array(x).reshape(-1, 1)\n return enc.transform(X).toarray()\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)", "随机化数据\n之前探索数据时,你已经了解到,样本的顺序是随机的。再随机化一次也不会有什么关系,但是对于这个数据集没有必要。\n预处理所有数据并保存\n运行下方的代码单元,将预处理所有 CIFAR-10 数据,并保存到文件中。下面的代码还使用了 10% 的训练数据,用来验证。", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)", "检查点\n这是你的第一个检查点。如果你什么时候决定再回到该记事本,或需要重新启动该记事本,你可以从这里开始。预处理的数据已保存到本地。", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))", "构建网络\n对于该神经网络,你需要将每层都构建为一个函数。你看到的大部分代码都位于函数外面。要更全面地测试你的代码,我们需要你将每层放入一个函数中。这样使我们能够提供更好的反馈,并使用我们的统一测试检测简单的错误,然后再提交项目。\n\n注意:如果你觉得每周很难抽出足够的时间学习这门课程,我们为此项目提供了一个小捷径。对于接下来的几个问题,你可以使用 TensorFlow Layers 或 TensorFlow Layers (contrib) 程序包中的类来构建每个层级,但是“卷积和最大池化层级”部分的层级除外。TF Layers 和 Keras 及 TFLearn 层级类似,因此很容易学会。\n但是,如果你想充分利用这门课程,请尝试自己解决所有问题,不使用 TF Layers 程序包中的任何类。你依然可以使用其他程序包中的类,这些类和你在 TF Layers 中的类名称是一样的!例如,你可以使用 TF Neural Network 版本的 conv2d 类 tf.nn.conv2d,而不是 TF Layers 版本的 conv2d 类 tf.layers.conv2d。\n\n我们开始吧!\n输入\n神经网络需要读取图片数据、one-hot 编码标签和丢弃保留概率(dropout keep probability)。请实现以下函数:\n\n实现 neural_net_image_input\n返回 TF Placeholder\n使用 image_shape 设置形状,部分大小设为 None\n使用 TF Placeholder 中的 TensorFlow name 参数对 TensorFlow 占位符 \"x\" 命名\n实现 neural_net_label_input\n返回 TF Placeholder\n使用 n_classes 设置形状,部分大小设为 None\n使用 TF Placeholder 中的 TensorFlow name 参数对 TensorFlow 占位符 \"y\" 命名\n实现 neural_net_keep_prob_input\n返回 TF Placeholder,用于丢弃保留概率\n使用 TF Placeholder 中的 TensorFlow name 参数对 TensorFlow 占位符 \"keep_prob\" 命名\n\n这些名称将在项目结束时,用于加载保存的模型。\n注意:TensorFlow 中的 None 表示形状可以是动态大小。", "import tensorflow as tf\n\ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a batch of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32, shape=[None, image_shape[0], image_shape[1], image_shape[2]], name='x')\n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32, shape=[None, n_classes], name='y')\n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32, shape=None, name='keep_prob')\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)", "卷积和最大池化层\n卷积层级适合处理图片。对于此代码单元,你应该实现函数 conv2d_maxpool 以便应用卷积然后进行最大池化:\n\n使用 conv_ksize、conv_num_outputs 和 x_tensor 的形状创建权重(weight)和偏置(bias)。\n使用权重和 conv_strides 对 x_tensor 应用卷积。\n建议使用我们建议的间距(padding),当然也可以使用任何其他间距。\n添加偏置\n向卷积中添加非线性激活(nonlinear activation)\n使用 pool_ksize 和 pool_strides 应用最大池化\n建议使用我们建议的间距(padding),当然也可以使用任何其他间距。\n\n注意:对于此层,请勿使用 
TensorFlow Layers 或 TensorFlow Layers (contrib),但是仍然可以使用 TensorFlow 的 Neural Network 包。对于所有其他层,你依然可以使用快捷方法。", "def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):\n \"\"\"\n Apply convolution then max pooling to x_tensor\n :param x_tensor: TensorFlow Tensor\n :param conv_num_outputs: Number of outputs for the convolutional layer\n :param conv_ksize: kernal size 2-D Tuple for the convolutional layer\n :param conv_strides: Stride 2-D Tuple for convolution\n :param pool_ksize: kernal size 2-D Tuple for pool\n :param pool_strides: Stride 2-D Tuple for pool\n : return: A tensor that represents convolution and max pooling of x_tensor\n \"\"\"\n # TODO: Implement Function\n weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[-1], conv_num_outputs], stddev=0.1))\n biases = tf.Variable(tf.zeros([conv_num_outputs]))\n net = tf.nn.conv2d(x_tensor, weights, [1, conv_strides[0], conv_strides[1], 1], 'SAME')\n net = tf.nn.bias_add(net, biases)\n net = tf.nn.relu(net)\n\n pool_kernel = [1, pool_ksize[0], pool_ksize[1], 1]\n pool_strides = [1, pool_strides[0], pool_strides[1], 1]\n net = tf.nn.max_pool(net, pool_kernel, pool_strides, 'VALID')\n\n return net\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)", "扁平化层\n实现 flatten 函数,将 x_tensor 的维度从四维张量(4-D tensor)变成二维张量。输出应该是形状(部分大小(Batch Size),扁平化图片大小(Flattened Image Size))。快捷方法:对于此层,你可以使用 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的类。如果你想要更大挑战,可以仅使用其他 TensorFlow 程序包。", "import numpy as np\ndef flatten(x_tensor):\n \"\"\"\n Flatten x_tensor to (Batch Size, Flattened Image Size)\n : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n : return: A tensor of size (Batch Size, Flattened Image Size).\n \"\"\"\n # TODO: Implement Function\n shape = x_tensor.get_shape().as_list()\n dim = np.prod(shape[1:])\n x_tensor = tf.reshape(x_tensor, [-1,dim])\n return x_tensor\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)", "全连接层\n实现 fully_conn 函数,以向 x_tensor 应用完全连接的层级,形状为(部分大小(Batch Size),num_outputs)。快捷方法:对于此层,你可以使用 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的类。如果你想要更大挑战,可以仅使用其他 TensorFlow 程序包。", "def fully_conn(x_tensor, num_outputs):\n \"\"\"\n Apply a fully connected layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n weights = tf.Variable(tf.truncated_normal(shape=[x_tensor.get_shape().as_list()[-1], num_outputs], mean=0, stddev=1))\n biases = tf.Variable(tf.zeros(shape=[num_outputs]))\n net = tf.nn.bias_add(tf.matmul(x_tensor, weights), biases)\n net = tf.nn.relu(net)\n return net\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)", "输出层\n实现 output 函数,向 x_tensor 应用完全连接的层级,形状为(部分大小(Batch Size),num_outputs)。快捷方法:对于此层,你可以使用 TensorFlow Layers 或 TensorFlow Layers (contrib) 包中的类。如果你想要更大挑战,可以仅使用其他 TensorFlow 程序包。\n注意:该层级不应应用 Activation、softmax 或交叉熵(cross entropy)。", "def output(x_tensor, num_outputs):\n \"\"\"\n Apply a output layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 
2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n weights = tf.Variable(tf.truncated_normal(shape=[x_tensor.get_shape().as_list()[-1], num_outputs], mean=0, stddev=1))\n biases = tf.Variable(tf.zeros(shape=[num_outputs]))\n net = tf.nn.bias_add(tf.matmul(x_tensor, weights), biases)\n return net\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)", "创建卷积模型\n实现函数 conv_net, 创建卷积神经网络模型。该函数传入一批图片 x,并输出对数(logits)。使用你在上方创建的层创建此模型:\n\n应用 1、2 或 3 个卷积和最大池化层(Convolution and Max Pool layers)\n应用一个扁平层(Flatten Layer)\n应用 1、2 或 3 个完全连接层(Fully Connected Layers)\n应用一个输出层(Output Layer)\n返回输出\n使用 keep_prob 向模型中的一个或多个层应用 TensorFlow 的 Dropout", "def conv_net(x, keep_prob):\n \"\"\"\n Create a convolutional neural network model\n : x: Placeholder tensor that holds image data.\n : keep_prob: Placeholder tensor that hold dropout keep probability.\n : return: Tensor that represents logits\n \"\"\"\n # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers\n # Play around with different number of outputs, kernel size and stride\n # Function Definition from Above:\n # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n \n# net = conv2d_maxpool(x, 32, (5,5), (1,1), (2,2), (2,2))\n# net = tf.nn.dropout(net, keep_prob)\n# net = conv2d_maxpool(net, 32, (5,5), (1,1), (2,2), (2,2))\n# net = conv2d_maxpool(net, 64, (5,5), (1,1), (2,2), (2,2))\n\n net = conv2d_maxpool(x, 32, (3,3), (1,1), (2,2), (2,2))\n net = tf.nn.dropout(net, keep_prob)\n net = conv2d_maxpool(net, 64, (3,3), (1,1), (2,2), (2,2))\n net = tf.nn.dropout(net, keep_prob)\n net = conv2d_maxpool(net, 64, (3,3), (1,1), (2,2), (2,2))\n\n # TODO: Apply a Flatten Layer\n # Function Definition from Above:\n # flatten(x_tensor)\n \n net = flatten(net)\n\n # TODO: Apply 1, 2, or 3 Fully Connected Layers\n # Play around with different number of outputs\n # Function Definition from Above:\n # fully_conn(x_tensor, num_outputs)\n \n net = fully_conn(net, 64)\n \n # TODO: Apply an Output Layer\n # Set this to the number of classes\n # Function Definition from Above:\n # output(x_tensor, num_outputs)\n \n net = output(net, enc.n_values_)\n \n # TODO: return output\n return net\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = neural_net_image_input((32, 32, 3))\ny = neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# Name logits Tensor, so that is can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)", "训练神经网络\n单次优化\n实现函数 train_neural_network 以进行单次优化(single optimization)。该优化应该使用 optimizer 优化 session,其中 feed_dict 具有以下参数:\n\nx 表示图片输入\ny 表示标签\nkeep_prob 表示丢弃的保留率\n\n每个部分都会调用该函数,所以 tf.global_variables_initializer() 已经被调用。\n注意:不需要返回任何内容。该函数只是用来优化神经网络。", "def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n \"\"\"\n Optimize the session on 
a batch of images and labels\n : session: Current TensorFlow session\n : optimizer: TensorFlow optimizer function\n : keep_probability: keep probability\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n \"\"\"\n # TODO: Implement Function\n _ = session.run([optimizer, cost, accuracy], feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)", "显示数据\n实现函数 print_stats 以输出损失和验证准确率。使用全局变量 valid_features 和 valid_labels 计算验证准确率。使用保留率 1.0 计算损失和验证准确率(loss and validation accuracy)。", "def print_stats(session, feature_batch, label_batch, cost, accuracy):\n \"\"\"\n Print information about loss and validation accuracy\n : session: Current TensorFlow session\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n : cost: TensorFlow cost function\n : accuracy: TensorFlow accuracy function\n \"\"\"\n # TODO: Implement Function\n valid_loss, valid_accuracy = session.run([cost, accuracy], feed_dict={x: valid_features, y: valid_labels, keep_prob: 1})\n print(\"valid loss {:.3f}, accuracy {:.3f}\".format(valid_loss, valid_accuracy))", "超参数\n调试以下超参数:\n* 设置 epochs 表示神经网络停止学习或开始过拟合的迭代次数\n* 设置 batch_size,表示机器内存允许的部分最大体积。大部分人设为以下常见内存大小:\n\n64\n128\n256\n...\n设置 keep_probability 表示使用丢弃时保留节点的概率", "# TODO: Tune Parameters\nepochs = 100\nbatch_size = 256\nkeep_probability = 0.8", "在单个 CIFAR-10 部分上训练\n我们先用单个部分,而不是用所有的 CIFAR-10 批次训练神经网络。这样可以节省时间,并对模型进行迭代,以提高准确率。最终验证准确率达到 50% 或以上之后,在下一部分对所有数据运行模型。", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)", "完全训练模型\n现在,单个 CIFAR-10 部分的准确率已经不错了,试试所有五个部分吧。", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)", "检查点\n模型已保存到本地。\n测试模型\n利用测试数据集测试你的模型。这将是最终的准确率。你的准确率应该高于 50%。如果没达到,请继续调整模型结构和参数。", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test 
the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)\n\n\ntest_model()", "为何准确率只有50-80%?\n你可能想问,为何准确率不能更高了?首先,对于简单的 CNN 网络来说,50% 已经不低了。纯粹猜测的准确率为10%。但是,你可能注意到有人的准确率远远超过 80%。这是因为我们还没有介绍所有的神经网络知识。我们还需要掌握一些其他技巧。\n提交项目\n提交项目时,确保先运行所有单元,然后再保存记事本。将 notebook 文件另存为“dlnd_image_classification.ipynb”,再在目录 \"File\" -> \"Download as\" 另存为 HTML 格式。请在提交的项目中包含 “helper.py” 和 “problem_unittests.py” 文件。" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rustychris/stompy
examples/transects_0.ipynb
mit
[ "Plotting Transect Data\nThis notebook demonstrates some ways to plot transect data, i.e. 2D datasets with a vertical dimension and one horizontal dimension.\nThe primary method is plot_utils.transect_tricontourf(), which accepts data in several forms of an xarray.DataArray.\n(xarray is the offspring of numpy and pandas, roughly equivalent to a nice interface to netcdf files.)", "# plotting routines\nimport matplotlib.pyplot as plt\nfrom stompy.plot import plot_utils \nfrom stompy.plot.cmap import load_gradient\n\nimport xarray as xr # data structure\nimport numpy as np \n\ncmap=load_gradient('cbcSpectral.cpt')\n\n%matplotlib inline", "Data on a (unevenly spaced) grid: $c = f(x_i,z_j)$\nWhen $c$ is defined for all points on the grid", "# Fabricate a dataset:\nx=np.sort(1000*np.random.random(10))\nz=np.sort(5*np.random.random(5))\nscalar=np.random.random( (len(x),len(z)) )\nscalar.sort(axis=1)\nscalar.sort(axis=0)\n# Package into data array\ntransect_data = xr.DataArray(scalar,\n coords=[ ('x',x),\n ('z',z)])\n\nfig,axs=plt.subplots(2,1,sharex=True,sharey=True)\n\nY,X = np.meshgrid(transect_data.z,transect_data.x)\naxs[0].scatter(X,Y,30,scalar,cmap=cmap)\naxs[0].set_title('Point Data')\n\ncoll=plot_utils.transect_tricontourf(transect_data,ax=axs[1],V=20,\n cmap=cmap,\n xcoord='x',\n ycoord='z')\naxs[1].set_title('Contoured') ;", "Partial data on a (unevenly spaced) grid: $c = f(x_i,z_j)$\nWhen $c$ is np.nan for some $(x_i,z_j)$.", "# Have to specify the limits of the contours now.\n\n# fabricate unevenly spaced, monotonic x,z variables\nx=np.sort(1000*np.random.random(10))\nz=np.sort(5*np.random.random(5))\nscalar=np.random.random( (len(x),len(z)) )\nscalar.sort(axis=0) ; scalar.sort(axis=1)\n# Randomly drop about 20% of the bottom of each profile\nmask = np.sort(np.random.random( (10,5) ),axis=1) < 0.2\nscalar[mask]=np.nan\n# also supports masked array:\n# scalar=np.ma.masked_array(scalar,mask=mask)\n\n# Same layout for the DataArray, but now scalar is missing some data.\ntransect_data = xr.DataArray(scalar,\n coords=[ ('x',x),\n ('z',z)])\n\nfig,axs=plt.subplots(2,1,sharex=True,sharey=True)\n\nY,X = np.meshgrid(transect_data.z,transect_data.x)\naxs[0].scatter(X,Y,30,scalar,cmap=cmap)\naxs[0].set_title('Point Data')\n\ncoll=plot_utils.transect_tricontourf(transect_data,ax=axs[1],V=np.linspace(0,1,20),\n cmap=cmap,\n xcoord='x',\n ycoord='z')\naxs[1].set_title('Contoured') ;", "Per-profile vertical coordinate: $c = f(x_i,z_{(i,j)})$\nIn other words, data is composed of profiles and each profile has a constant\n$x$ but its own $z$ coordinate.\nThis example also shows how Datasets can be used to organize multiple variables,\nand how pulling out a variable into a DataArray brings the coordinate variablyes\nalong with it.", "# fabricate unevenly spaced, monotonic x variable\nx=np.linspace(0,10000,10) + 500*np.random.random(10)\n# vertical coordinate is now a 2D variable, ~ (profile,sample)\ncast_z=np.sort(-5*np.random.random((10,5)),axis=1)\ncast_z[:,-1]=0\nscalar=np.sort( np.random.random( cast_z.shape),axis=1)\n\nds=xr.Dataset()\nds['x']=('x',x)\nds['cast_z']=( ('x','z'),cast_z)\nds['scalar']=( ('x','z'), scalar )\nds=ds.set_coords( ['x','cast_z'])\ntransect_data = ds['scalar']\n\nfig,axs=plt.subplots(2,1,sharex=True,sharey=True)\n\nY=transect_data.cast_z\nX=np.ones_like(transect_data.cast_z) * transect_data.x.values[:,None]\n\naxs[0].scatter(X,Y,30,scalar,cmap=cmap)\naxs[0].set_title('Point Data')\n\ncoll=plot_utils.transect_tricontourf(transect_data,ax=axs[1],V=np.linspace(0,1,20),\n 
cmap=cmap,\n xcoord='x',\n ycoord='cast_z')\naxs[1].set_title('Interpolated') ;", "High-order interpolation\nSame data \"shape\" as above, but when the data are sufficiently well-behaved,\nit is possible to use a high-order interpolation.\nThis is also introduces access to the underlying triangulation object, for more \ndetailed plotting and interpolation.\nThe plot shows the smoothed, interpolated field, as well as the original triangulation\nand the refined triangulation.", "import matplotlib.tri as mtri\n\n# fabricate unevenly spaced, monotonic x variable\n# Make the points more evenly spaced to \nx=np.linspace(0,10000,10) + 500*np.random.random(10)\n# vertical coordinate is now a 2D variable, ~ (cast,sample)\ncast_z=np.linspace(0,1,5)[None,:] + 0.1*np.random.random((10,5))\ncast_z=np.sort(-5*cast_z,axis=1)\ncast_z[:,-1]=0\nscalar=np.sort(np.sort(np.random.random( cast_z.shape),axis=0),axis=1)\n\nds=xr.Dataset()\nds['x']=('x',x)\nds['cast_z']=( ('x','z'),cast_z)\nds['scalar']=( ('x','z'), scalar )\nds=ds.set_coords( ['x','cast_z'])\ntransect_data = ds['scalar']\n\nfig,ax=plt.subplots(figsize=(10,7))\n\ntri,mapper=plot_utils.transect_to_triangles(transect_data,xcoord='x',ycoord='cast_z')\n\n\n# This only works with relatively smooth data!\nrefiner = mtri.UniformTriRefiner(tri)\ntri_refi, z_refi = refiner.refine_field(mapper(transect_data.values), subdiv=2)\nplt.tricontourf(tri_refi, z_refi, \n levels=np.linspace(0,1,20), cmap=cmap)\n\n# Show how the interpolation is constructed:\nax.triplot(tri_refi,color='k',lw=0.3,alpha=0.5)\nax.triplot(tri,color='k',lw=0.7,alpha=0.5)\n\nax.set_title('Refined interpolation') ;" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
VitensTC/phreeqpython
examples/4. Gas/7. Gas-Phase Calculations.ipynb
apache-2.0
[ "Gas-Phase Calculations\nhttps://wwwbrr.cr.usgs.gov/projects/GWC_coupled/phreeqc/phreeqc3-html/phreeqc3-62.htm#50528271_44022", "%pylab inline\nimport phreeqpython\nimport pandas as pd\npp = phreeqpython.PhreeqPython(database='phreeqc.dat')", "Add Master, Solution Species and Phases by executing PHREEQC input code", "pp.ip.run_string(\"\"\"\nSOLUTION_MASTER_SPECIES\nN(-3) NH4+ 0.0 N\nSOLUTION_SPECIES\nNH4+ = NH3 + H+\n log_k -9.252\n delta_h 12.48 kcal\n -analytic 0.6322 -0.001225 -2835.76\n \nNO3- + 10 H+ + 8 e- = NH4+ + 3 H2O\n log_k 119.077\n delta_h -187.055 kcal\n -gamma 2.5000 0.0000\nPHASES\nNH3(g)\n NH3 = NH3\n log_k 1.770\n delta_h -8.170 kcal\n\"\"\")", "Run Calculation", "# add empty solution 1\nsolution1 = pp.add_solution({})\n# equalize solution 1 with Calcite and CO2\nsolution1.equalize(['Calcite', 'CO2(g)'], [0,-1.5])\n\n# create a fixed pressure gas phase\nfixed_pressure = pp.add_gas({\n 'CO2(g)': 0,\n 'CH4(g)': 0,\n 'N2(g)': 0,\n 'H2O(g)': 0,\n}, pressure=1.1, fixed_pressure=True)\n\n# create a fixed volume gas phase\nfixed_volume = pp.add_gas({\n 'CO2(g)': 0,\n 'CH4(g)': 0,\n 'N2(g)': 0,\n 'H2O(g)': 0,\n}, volume=23.19, fixed_pressure=False, fixed_volume=True, equilibrate_with=solution1)\n\nmmol = [1, 2, 3, 4, 8, 16, 32, 64, 125, 250, 500, 1000]\n\n# instantiate result lists\nfp_vol = []; fp_pres = []; fp_frac = []; fv_vol = []; fv_pres = []; fv_frac = []\n\nfor m in mmol:\n\n sol = solution1.copy()\n fp = fixed_pressure.copy()\n # equlibriate with solution\n sol.add('CH2O(NH3)0.07', m, 'mmol')\n sol.interact(fp)\n fp_vol.append(fp.volume)\n fp_pres.append(fp.pressure)\n fp_frac.append(fp.partial_pressures)\n\n sol.forget(); fp.forget() # clean up solutions after use\n \n sol = solution1.copy()\n fv = fixed_volume.copy()\n sol.add('CH2O(NH3)0.07', m, 'mmol')\n sol.interact(fv)\n fv_vol.append(fv.volume)\n fv_pres.append(fv.pressure)\n fv_frac.append(fv.partial_pressures)\n \n sol.forget(); fv.forget() # clean up solutions after use", "Total Gas Pressure and Volume", "plt.figure(figsize=[8,5])\n\n# create two y axes\nax1 = plt.gca()\nax2 = ax1.twinx()\n\n# plot pressures\nax1.plot(mmol, np.log10(fp_pres), 'x-', color='tab:purple', label='Fixed_P - Pressure')\nax1.plot(mmol, np.log10(fv_pres), 's-', color='tab:purple', label='Fixed_V - Pressure')\n\n# add dummy handlers for legend\nax1.plot(np.nan, np.nan, 'x-', color='tab:blue', label='Fixed_P - Volume')\nax1.plot(np.nan, np.nan, 's-', color='tab:blue', label='Fixed_V - Volume')\n\n# plot volumes\nax2.plot(mmol, fp_vol, 'x-')\nax2.plot(mmol, fv_vol, 's-', color='tab:blue')\n\n# set log scale to both y axes\nax2.set_xscale('log')\nax2.set_yscale('log')\n\n# set axes limits\nax1.set_xlim([1e0, 1e3])\nax2.set_xlim([1e0, 1e3])\nax1.set_ylim([-5,1])\nax2.set_ylim([1e-3,1e5])\n\n# add legend and gridlines\nax1.legend(loc=4)\nax1.grid()\n\n# set labels\nax1.set_xlabel('Organic matter reacted, in millimoles')\nax1.set_ylabel('Log(Pressure, in atmospheres)')\nax2.set_ylabel('Volume, in liters)')", "Fixed Pressure Gas Composition", "fig = plt.figure(figsize=[16,5])\n\n# plot fixed pressure gas composition\nfig.add_subplot(1,2,1)\npd.DataFrame(fp_frac, index=mmol).apply(np.log10)[2:].plot(style='-x', ax=plt.gca())\nplt.title('Fixed Pressure gas composition')\nplt.xscale('log')\nplt.ylim([-5,1])\nplt.grid()\nplt.xlim(1e0, 1e3)\nplt.xlabel('Organic matter reacted, in millimoles')\nplt.ylabel('Log(Partial pressure, in atmospheres)')\n\n# plot fixed volume gas composition\nfig.add_subplot(1,2,2)\npd.DataFrame(fv_frac, 
index=mmol).apply(np.log10).plot(style='-o', ax=plt.gca())\nplt.title('Fixed Volume gas composition')\nplt.xscale('log')\nplt.xlabel('Organic matter reacted, in millimoles')\nplt.ylabel('Log(Partial pressure, in atmospheres)')\nplt.grid()\nplt.ylim([-5,1])\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/tfx
docs/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb
apache-2.0
[ "Copyright 2021 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex AI Training and Serving with TFX and Vertex Pipelines\n<div class=\"devsite-table-wrapper\"><table class=\"tfo-notebook-buttons\" align=\"left\">\n<td><a target=\"_blank\" href=\"https://www.tensorflow.org/tfx/tutorials/tfx/gcp/vertex_pipelines_vertex_training\">\n<img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\"/>View on TensorFlow.org</a></td>\n<td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb\">\n<img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Run in Google Colab</a></td>\n<td><a target=\"_blank\" href=\"https://github.com/tensorflow/tfx/tree/master/docs/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb\">\n<img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">View source on GitHub</a></td>\n<td><a href=\"https://storage.googleapis.com/tensorflow_docs/tfx/docs/tutorials/tfx/gcp/vertex_pipelines_vertex_training.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a></td>\n<td><a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?q=download_url%3Dhttps%253A%252F%252Fraw.githubusercontent.com%252Ftensorflow%252Ftfx%252Fmaster%252Fdocs%252Ftutorials%252Ftfx%252Fgcp%252Fvertex_pipelines_vertex_training.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Run in Google Cloud Vertex AI Workbench</a></td>\n</table></div>\n\nThis notebook-based tutorial will create and run a TFX pipeline which trains an\nML model using Vertex AI Training service and publishes it to Vertex AI for serving.\nThis notebook is based on the TFX pipeline we built in\nSimple TFX Pipeline for Vertex Pipelines Tutorial.\nIf you have not read that tutorial yet, you should read it before proceeding\nwith this notebook.\nYou can train models on Vertex AI using AutoML, or use custom training. In\ncustom training, you can select many different machine types to power your\ntraining jobs, enable distributed training, use hyperparameter tuning, and\naccelerate with GPUs.\nYou can also serve prediction requests by deploying the trained model to Vertex AI\nModels and creating an endpoint.\nIn this tutorial, we will use Vertex AI Training with custom jobs to train\na model in a TFX pipeline.\nWe will also deploy the model to serve prediction request using Vertex AI.\nThis notebook is intended to be run on\nGoogle Colab or on\nAI Platform Notebooks. If you\nare not using one of these, you can simply click \"Run in Google Colab\" button\nabove.\nSet up\nIf you have completed\nSimple TFX Pipeline for Vertex Pipelines Tutorial,\nyou will have a working GCP project and a GCS bucket and that is all we need\nfor this tutorial. 
Please read the preliminary tutorial first if you missed it.\nInstall python packages\nWe will install required Python packages including TFX and KFP to author ML\npipelines and submit jobs to Vertex Pipelines.", "# Use the latest version of pip.\n!pip install --upgrade pip\n!pip install --upgrade \"tfx[kfp]<2\"", "Did you restart the runtime?\nIf you are using Google Colab, the first time that you run\nthe cell above, you must restart the runtime by clicking\nabove \"RESTART RUNTIME\" button or using \"Runtime > Restart\nruntime ...\" menu. This is because of the way that Colab\nloads packages.\nIf you are not on Colab, you can restart runtime with following cell.", "# docs_infra: no_execute\nimport sys\nif not 'google.colab' in sys.modules:\n # Automatically restart kernel after installs\n import IPython\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Login in to Google for this notebook\nIf you are running this notebook on Colab, authenticate with your user account:", "import sys\nif 'google.colab' in sys.modules:\n from google.colab import auth\n auth.authenticate_user()", "If you are on AI Platform Notebooks, authenticate with Google Cloud before\nrunning the next section, by running\nsh\ngcloud auth login\nin the Terminal window (which you can open via File > New in the\nmenu). You only need to do this once per notebook instance.\nCheck the package versions.", "import tensorflow as tf\nprint('TensorFlow version: {}'.format(tf.__version__))\nfrom tfx import v1 as tfx\nprint('TFX version: {}'.format(tfx.__version__))\nimport kfp\nprint('KFP version: {}'.format(kfp.__version__))", "Set up variables\nWe will set up some variables used to customize the pipelines below. Following\ninformation is required:\n\nGCP Project id. See\nIdentifying your project id.\nGCP Region to run pipelines. For more information about the regions that\nVertex Pipelines is available in, see the\nVertex AI locations guide.\nGoogle Cloud Storage Bucket to store pipeline outputs.\n\nEnter required values in the cell below before running it.", "GOOGLE_CLOUD_PROJECT = '' # <--- ENTER THIS\nGOOGLE_CLOUD_REGION = '' # <--- ENTER THIS\nGCS_BUCKET_NAME = '' # <--- ENTER THIS\n\nif not (GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_REGION and GCS_BUCKET_NAME):\n from absl import logging\n logging.error('Please set all required parameters.')", "Set gcloud to use your project.", "!gcloud config set project {GOOGLE_CLOUD_PROJECT}\n\nPIPELINE_NAME = 'penguin-vertex-training'\n\n# Path to various pipeline artifact.\nPIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)\n\n# Paths for users' Python module.\nMODULE_ROOT = 'gs://{}/pipeline_module/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)\n\n# Paths for users' data.\nDATA_ROOT = 'gs://{}/data/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)\n\n# Name of Vertex AI Endpoint.\nENDPOINT_NAME = 'prediction-' + PIPELINE_NAME\n\nprint('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT))", "Prepare example data\nWe will use the same\nPalmer Penguins dataset\nas\nSimple TFX Pipeline Tutorial.\nThere are four numeric features in this dataset which were already normalized\nto have range [0,1]. We will build a classification model which predicts the\nspecies of penguins.\nWe need to make our own copy of the dataset. 
Because TFX ExampleGen reads\ninputs from a directory, we need to create a directory and copy the dataset to it\non GCS.", "!gsutil cp gs://download.tensorflow.org/data/palmer_penguins/penguins_processed.csv {DATA_ROOT}/", "Take a quick look at the CSV file.", "!gsutil cat {DATA_ROOT}/penguins_processed.csv | head", "Create a pipeline\nOur pipeline will be very similar to the pipeline we created in\nSimple TFX Pipeline for Vertex Pipelines Tutorial.\nThe pipeline will consist of three components: CsvExampleGen, Trainer and\nPusher. But we will use a special Trainer and Pusher component. The Trainer component will move\ntraining workloads to Vertex AI, and the Pusher component will publish the\ntrained ML model to Vertex AI instead of a filesystem.\nTFX provides a special Trainer to submit training jobs to Vertex AI Training\nservice. All we have to do is use Trainer in the extension module\ninstead of the standard Trainer component along with some required GCP\nparameters.\nIn this tutorial, we will run Vertex AI Training jobs only using CPUs first\nand then with a GPU.\nTFX also provides a special Pusher to upload the model to Vertex AI Models.\nPusher will create a Vertex AI Endpoint resource to serve online\npredictions, too. See\nVertex AI documentation\nto learn more about online predictions provided by Vertex AI.\nWrite model code.\nThe model itself is very similar to the model in\nSimple TFX Pipeline Tutorial.\nWe will add a _get_distribution_strategy() function which creates a\nTensorFlow distribution strategy\nand it is used in run_fn to use MirroredStrategy if a GPU is available.", "_trainer_module_file = 'penguin_trainer.py'\n\n%%writefile {_trainer_module_file}\n\n# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple and\n# slightly modified run_fn() to add distribution_strategy.\n\nfrom typing import List\nfrom absl import logging\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow_metadata.proto.v0 import schema_pb2\nfrom tensorflow_transform.tf_metadata import schema_utils\n\nfrom tfx import v1 as tfx\nfrom tfx_bsl.public import tfxio\n\n_FEATURE_KEYS = [\n 'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'\n]\n_LABEL_KEY = 'species'\n\n_TRAIN_BATCH_SIZE = 20\n_EVAL_BATCH_SIZE = 10\n\n# Since we're not generating or creating a schema, we will instead create\n# a feature spec. 
Since there are a fairly small number of features this is\n# manageable for this dataset.\n_FEATURE_SPEC = {\n **{\n feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)\n for feature in _FEATURE_KEYS\n }, _LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)\n}\n\n\ndef _input_fn(file_pattern: List[str],\n data_accessor: tfx.components.DataAccessor,\n schema: schema_pb2.Schema,\n batch_size: int) -> tf.data.Dataset:\n \"\"\"Generates features and label for training.\n\n Args:\n file_pattern: List of paths or patterns of input tfrecord files.\n data_accessor: DataAccessor for converting input to RecordBatch.\n schema: schema of the input data.\n batch_size: representing the number of consecutive elements of returned\n dataset to combine in a single batch\n\n Returns:\n A dataset that contains (features, indices) tuple where features is a\n dictionary of Tensors, and indices is a single Tensor of label indices.\n \"\"\"\n return data_accessor.tf_dataset_factory(\n file_pattern,\n tfxio.TensorFlowDatasetOptions(\n batch_size=batch_size, label_key=_LABEL_KEY),\n schema=schema).repeat()\n\n\ndef _make_keras_model() -> tf.keras.Model:\n \"\"\"Creates a DNN Keras model for classifying penguin data.\n\n Returns:\n A Keras Model.\n \"\"\"\n # The model below is built with Functional API, please refer to\n # https://www.tensorflow.org/guide/keras/overview for all API options.\n inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]\n d = keras.layers.concatenate(inputs)\n for _ in range(2):\n d = keras.layers.Dense(8, activation='relu')(d)\n outputs = keras.layers.Dense(3)(d)\n\n model = keras.Model(inputs=inputs, outputs=outputs)\n model.compile(\n optimizer=keras.optimizers.Adam(1e-2),\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=[keras.metrics.SparseCategoricalAccuracy()])\n\n model.summary(print_fn=logging.info)\n return model\n\n\n# NEW: Read `use_gpu` from the custom_config of the Trainer.\n# if it uses GPU, enable MirroredStrategy.\ndef _get_distribution_strategy(fn_args: tfx.components.FnArgs):\n if fn_args.custom_config.get('use_gpu', False):\n logging.info('Using MirroredStrategy with one GPU.')\n return tf.distribute.MirroredStrategy(devices=['device:GPU:0'])\n return None\n\n\n# TFX Trainer will call this function.\ndef run_fn(fn_args: tfx.components.FnArgs):\n \"\"\"Train the model based on given args.\n\n Args:\n fn_args: Holds args used to train the model as name/value pairs.\n \"\"\"\n\n # This schema is usually either an output of SchemaGen or a manually-curated\n # version provided by pipeline author. A schema can also derived from TFT\n # graph if a Transform component is used. 
In the case when either is missing,\n # `schema_from_feature_spec` could be used to generate schema from very simple\n # feature_spec, but the schema returned would be very primitive.\n schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)\n\n train_dataset = _input_fn(\n fn_args.train_files,\n fn_args.data_accessor,\n schema,\n batch_size=_TRAIN_BATCH_SIZE)\n eval_dataset = _input_fn(\n fn_args.eval_files,\n fn_args.data_accessor,\n schema,\n batch_size=_EVAL_BATCH_SIZE)\n\n # NEW: If we have a distribution strategy, build a model in a strategy scope.\n strategy = _get_distribution_strategy(fn_args)\n if strategy is None:\n model = _make_keras_model()\n else:\n with strategy.scope():\n model = _make_keras_model()\n\n model.fit(\n train_dataset,\n steps_per_epoch=fn_args.train_steps,\n validation_data=eval_dataset,\n validation_steps=fn_args.eval_steps)\n\n # The result of the training should be saved in `fn_args.serving_model_dir`\n # directory.\n model.save(fn_args.serving_model_dir, save_format='tf')", "Copy the module file to GCS which can be accessed from the pipeline components.\nOtherwise, you might want to build a container image including the module file\nand use the image to run the pipeline and AI Platform Training jobs.", "!gsutil cp {_trainer_module_file} {MODULE_ROOT}/", "Write a pipeline definition\nWe will define a function to create a TFX pipeline. It has the same three\nComponents as in\nSimple TFX Pipeline Tutorial,\nbut we use a Trainer and Pusher component in the GCP extension module.\ntfx.extensions.google_cloud_ai_platform.Trainer behaves like a regular\nTrainer, but it just moves the computation for the model training to cloud.\nIt launches a custom job in Vertex AI Training service and the trainer\ncomponent in the orchestration system will just wait until the Vertex AI\nTraining job completes.\ntfx.extensions.google_cloud_ai_platform.Pusher creates a Vertex AI Model and a Vertex AI Endpoint using the\ntrained model.", "def _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,\n module_file: str, endpoint_name: str, project_id: str,\n region: str, use_gpu: bool) -> tfx.dsl.Pipeline:\n \"\"\"Implements the penguin pipeline with TFX.\"\"\"\n # Brings data into the pipeline or otherwise joins/converts training data.\n example_gen = tfx.components.CsvExampleGen(input_base=data_root)\n\n # NEW: Configuration for Vertex AI Training.\n # This dictionary will be passed as `CustomJobSpec`.\n vertex_job_spec = {\n 'project': project_id,\n 'worker_pool_specs': [{\n 'machine_spec': {\n 'machine_type': 'n1-standard-4',\n },\n 'replica_count': 1,\n 'container_spec': {\n 'image_uri': 'gcr.io/tfx-oss-public/tfx:{}'.format(tfx.__version__),\n },\n }],\n }\n if use_gpu:\n # See https://cloud.google.com/vertex-ai/docs/reference/rest/v1/MachineSpec#acceleratortype\n # for available machine types.\n vertex_job_spec['worker_pool_specs'][0]['machine_spec'].update({\n 'accelerator_type': 'NVIDIA_TESLA_K80',\n 'accelerator_count': 1\n })\n\n # Trains a model using Vertex AI Training.\n # NEW: We need to specify a Trainer for GCP with related configs.\n trainer = tfx.extensions.google_cloud_ai_platform.Trainer(\n module_file=module_file,\n examples=example_gen.outputs['examples'],\n train_args=tfx.proto.TrainArgs(num_steps=100),\n eval_args=tfx.proto.EvalArgs(num_steps=5),\n custom_config={\n tfx.extensions.google_cloud_ai_platform.ENABLE_VERTEX_KEY:\n True,\n tfx.extensions.google_cloud_ai_platform.VERTEX_REGION_KEY:\n region,\n 
tfx.extensions.google_cloud_ai_platform.TRAINING_ARGS_KEY:\n vertex_job_spec,\n 'use_gpu':\n use_gpu,\n })\n\n # NEW: Configuration for pusher.\n vertex_serving_spec = {\n 'project_id': project_id,\n 'endpoint_name': endpoint_name,\n # Remaining argument is passed to aiplatform.Model.deploy()\n # See https://cloud.google.com/vertex-ai/docs/predictions/deploy-model-api#deploy_the_model\n # for the detail.\n #\n # Machine type is the compute resource to serve prediction requests.\n # See https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types\n # for available machine types and acccerators.\n 'machine_type': 'n1-standard-4',\n }\n\n # Vertex AI provides pre-built containers with various configurations for\n # serving.\n # See https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers\n # for available container images.\n serving_image = 'us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-6:latest'\n if use_gpu:\n vertex_serving_spec.update({\n 'accelerator_type': 'NVIDIA_TESLA_K80',\n 'accelerator_count': 1\n })\n serving_image = 'us-docker.pkg.dev/vertex-ai/prediction/tf2-gpu.2-6:latest'\n\n # NEW: Pushes the model to Vertex AI.\n pusher = tfx.extensions.google_cloud_ai_platform.Pusher(\n model=trainer.outputs['model'],\n custom_config={\n tfx.extensions.google_cloud_ai_platform.ENABLE_VERTEX_KEY:\n True,\n tfx.extensions.google_cloud_ai_platform.VERTEX_REGION_KEY:\n region,\n tfx.extensions.google_cloud_ai_platform.VERTEX_CONTAINER_IMAGE_URI_KEY:\n serving_image,\n tfx.extensions.google_cloud_ai_platform.SERVING_ARGS_KEY:\n vertex_serving_spec,\n })\n\n components = [\n example_gen,\n trainer,\n pusher,\n ]\n\n return tfx.dsl.Pipeline(\n pipeline_name=pipeline_name,\n pipeline_root=pipeline_root,\n components=components)", "Run the pipeline on Vertex Pipelines.\nWe will use Vertex Pipelines to run the pipeline as we did in\nSimple TFX Pipeline for Vertex Pipelines Tutorial.", "# docs_infra: no_execute\nimport os\n\nPIPELINE_DEFINITION_FILE = PIPELINE_NAME + '_pipeline.json'\n\nrunner = tfx.orchestration.experimental.KubeflowV2DagRunner(\n config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),\n output_filename=PIPELINE_DEFINITION_FILE)\n_ = runner.run(\n _create_pipeline(\n pipeline_name=PIPELINE_NAME,\n pipeline_root=PIPELINE_ROOT,\n data_root=DATA_ROOT,\n module_file=os.path.join(MODULE_ROOT, _trainer_module_file),\n endpoint_name=ENDPOINT_NAME,\n project_id=GOOGLE_CLOUD_PROJECT,\n region=GOOGLE_CLOUD_REGION,\n # We will use CPUs only for now.\n use_gpu=False))", "The generated definition file can be submitted using Google Cloud aiplatform\nclient in google-cloud-aiplatform package.", "# docs_infra: no_execute\nfrom google.cloud import aiplatform\nfrom google.cloud.aiplatform import pipeline_jobs\nimport logging\nlogging.getLogger().setLevel(logging.INFO)\n\naiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=GOOGLE_CLOUD_REGION)\n\njob = pipeline_jobs.PipelineJob(template_path=PIPELINE_DEFINITION_FILE,\n display_name=PIPELINE_NAME)\njob.submit()", "Now you can visit the link in the output above or visit 'Vertex AI > Pipelines'\nin Google Cloud Console to see the\nprogress.\nTest with a prediction request\nOnce the pipeline completes, you will find a deployed model at the one of the\nendpoints in 'Vertex AI > Endpoints'. We need to know the id of the endpoint to\nsend a prediction request to the new endpoint. This is different from the\nendpoint name we entered above. 
You can find the id at the Endpoints page in\nGoogle Cloud Console, it looks like a very long number.\nSet ENDPOINT_ID below before running it.", "ENDPOINT_ID='' # <--- ENTER THIS\nif not ENDPOINT_ID:\n from absl import logging\n logging.error('Please set the endpoint id.')", "We use the same aiplatform client to send a request to the endpoint. We will\nsend a prediction request for Penguin species classification. The input is the four features that we used, and the model will return three values, because our\nmodel outputs one value for each species.\nFor example, the following specific example has the largest value at index '2'\nand will print '2'.", "# docs_infra: no_execute\nimport numpy as np\n\n# The AI Platform services require regional API endpoints.\nclient_options = {\n 'api_endpoint': GOOGLE_CLOUD_REGION + '-aiplatform.googleapis.com'\n }\n# Initialize client that will be used to create and send requests.\nclient = aiplatform.gapic.PredictionServiceClient(client_options=client_options)\n\n# Set data values for the prediction request.\n# Our model expects 4 feature inputs and produces 3 output values for each\n# species. Note that the output is logit value rather than probabilities.\n# See the model code to understand input / output structure.\ninstances = [{\n 'culmen_length_mm':[0.71],\n 'culmen_depth_mm':[0.38],\n 'flipper_length_mm':[0.98],\n 'body_mass_g': [0.78],\n}]\n\nendpoint = client.endpoint_path(\n project=GOOGLE_CLOUD_PROJECT,\n location=GOOGLE_CLOUD_REGION,\n endpoint=ENDPOINT_ID,\n)\n# Send a prediction request and get response.\nresponse = client.predict(endpoint=endpoint, instances=instances)\n\n# Uses argmax to find the index of the maximum value.\nprint('species:', np.argmax(response.predictions[0]))", "For detailed information about online prediction, please visit the\nEndpoints page in\nGoogle Cloud Console. you can find a guide on sending sample requests and\nlinks to more resources.\nRun the pipeline using a GPU\nVertex AI supports training using various machine types including support for\nGPUs. See\nMachine spec reference\nfor available options.\nWe already defined our pipeline to support GPU training. All we need to do is\nsetting use_gpu flag to True. Then a pipeline will be created with a machine\nspec including one NVIDIA_TESLA_K80 and our model training code will use\ntf.distribute.MirroredStrategy.\nNote that use_gpu flag is not a part of the Vertex or TFX API. It is just\nused to control the training code in this tutorial.", "# docs_infra: no_execute\nrunner.run(\n _create_pipeline(\n pipeline_name=PIPELINE_NAME,\n pipeline_root=PIPELINE_ROOT,\n data_root=DATA_ROOT,\n module_file=os.path.join(MODULE_ROOT, _trainer_module_file),\n endpoint_name=ENDPOINT_NAME,\n project_id=GOOGLE_CLOUD_PROJECT,\n region=GOOGLE_CLOUD_REGION,\n # Updated: Use GPUs. We will use a NVIDIA_TESLA_K80 and \n # the model code will use tf.distribute.MirroredStrategy.\n use_gpu=True))\n\njob = pipeline_jobs.PipelineJob(template_path=PIPELINE_DEFINITION_FILE,\n display_name=PIPELINE_NAME)\njob.submit()", "Now you can visit the link in the output above or visit 'Vertex AI > Pipelines'\nin Google Cloud Console to see the\nprogress.\nCleaning up\nYou have created a Vertex AI Model and Endpoint in this tutorial.\nPlease delete these resources to avoid any unwanted charges by going\nto Endpoints and\nundeploying the model from the endpoint first. Then you can delete the\nendpoint and the model separately." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
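A supplementary note on the Vertex AI notebook above: its closing "Cleaning up" cell asks you to undeploy and delete the Endpoint and Model by hand in the Cloud Console. If you prefer to clean up from code, a minimal sketch using the same `google-cloud-aiplatform` client could look like the following; the display-name filters are assumptions (the endpoint name comes from `ENDPOINT_NAME` above, while the model display name is generated by the Pusher and should be checked in the Console first).

```python
# Hedged cleanup sketch for the resources created by the pipeline above.
# Assumes aiplatform.init(project=..., location=...) has already been called.
from google.cloud import aiplatform

def cleanup_vertex_resources(endpoint_display_name, model_display_name):
    # Undeploy and delete matching endpoints.
    for endpoint in aiplatform.Endpoint.list(
            filter='display_name="{}"'.format(endpoint_display_name)):
        endpoint.undeploy_all()   # remove every deployed model from the endpoint
        endpoint.delete()
    # Delete the uploaded models themselves.
    for model in aiplatform.Model.list(
            filter='display_name="{}"'.format(model_display_name)):
        model.delete()

# Example call (the model display name below is a placeholder, not the real one):
# cleanup_vertex_resources('prediction-penguin-vertex-training', 'MODEL_DISPLAY_NAME')
```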
googledatalab/notebooks
tutorials/BigQuery/Using External Tables from BigQuery.ipynb
apache-2.0
[ "Using External Tables from BigQuery\nGoogle BigQuery has the ability to query data directly from Google Cloud Storage (a feature called \"External Data Sources\"). This feature can be useful when querying small amounts of data that you may not want to load into a BigQuery table. It is not recommended for large queries, however, because BigQuery billing is based on the amount of data read to process a query. BigQuery can very efficiently query subsets of tables in its own store since these are stored in columnar format, so the unused columns are not read and don't add to the cost. But since data stored in Cloud Storage is typically in the form of a compressed CSV file, typically, the entire file must be read. Hence, while querying data in Cloud Storage can he helpful, it should be used judiciously. \nIn this notebook we will show you how to download data from a source on the Internet, put it in Cloud Storage, and then query it directly.\nGetting the Data and Loading into GCS\nFor this sample we want to use external data in a CSV, load it into Cloud Storage, and query it. We will use the Seattle bike station data from the Pronto 2015 Data Challenge dataset.", "from google.datalab import Context\nimport google.datalab.bigquery as bq\nimport google.datalab.storage as gs\n\ntry:\n from urllib2 import urlopen\nexcept ImportError:\n from urllib.request import urlopen\ndata_source = \"https://storage.googleapis.com/cloud-datalab-samples/udfsample/2015_station_data.csv\"\n\nf = urlopen(data_source)\ndata = f.read()\nf.close()\n\nprint('Read %d bytes' % len(data))\n\n# Get a bucket in the current project\nproject = Context.default().project_id\nsample_bucket_name = project + '-station_data'\n\n# Create and write to the GCS item\nsample_bucket = gs.Bucket(sample_bucket_name)\nsample_bucket.create()\nsample_object = sample_bucket.object('station_data.csv')\nsample_object.write_stream(data, 'text/plain')", "Creating an External Data Source Object\nNow we need to create a special ExternalDataSource object that refers to the data, which can, in turn, be used as a table in our BigQuery queries. We need to provide a schema for BigQuery to use the data. The CSV file has a header row that we want to skip; we will use a CSVOptions object to do this.", "options = bq.CSVOptions(skip_leading_rows=1) # Skip the header row\n\nschema = bq.Schema([\n {'name': 'id', 'type': 'INTEGER'}, # row ID\n {'name': 'name', 'type': 'STRING'}, # friendly name\n {'name': 'terminal', 'type': 'STRING'}, # terminal ID\n {'name': 'lat', 'type': 'FLOAT'}, # latitude\n {'name': 'long', 'type': 'FLOAT'}, # longitude\n {'name': 'dockcount', 'type': 'INTEGER'}, # bike capacity\n {'name': 'online', 'type': 'STRING'} # date station opened\n])\n\ndrivedata = bq.ExternalDataSource(source=sample_object.uri, # The gs:// URL of the file \n csv_options=options,\n schema=schema,\n max_bad_records=10)\n\ndrivedata", "Querying the Table\nNow let's verify that we can access the data. We will run a simple query to show the first five rows. Note that we specify the federated table by using a name in the query, and then pass the table in using a data_sources dictionary parameter.", "bq.Query('SELECT * FROM drivedatasource LIMIT 5', data_sources={'drivedatasource': drivedata}).execute().result()", "Finally, let's clean up.", "sample_object.delete()\nsample_bucket.delete()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
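A side note on the BigQuery external-table notebook above: the `google.datalab` API it relies on is the legacy Datalab client. If only the standard `google-cloud-bigquery` package is available, a roughly equivalent query over the same CSV object can be sketched as follows; the bucket name is an assumption built from your project id, mirroring the naming used above.

```python
# Minimal sketch with google-cloud-bigquery instead of google.datalab.
from google.cloud import bigquery

client = bigquery.Client()

external_config = bigquery.ExternalConfig("CSV")
# Assumed to match the bucket/object created in the notebook above.
external_config.source_uris = ["gs://YOUR_PROJECT-station_data/station_data.csv"]
external_config.schema = [
    bigquery.SchemaField("id", "INTEGER"),
    bigquery.SchemaField("name", "STRING"),
    bigquery.SchemaField("terminal", "STRING"),
    bigquery.SchemaField("lat", "FLOAT"),
    bigquery.SchemaField("long", "FLOAT"),
    bigquery.SchemaField("dockcount", "INTEGER"),
    bigquery.SchemaField("online", "STRING"),
]
external_config.options.skip_leading_rows = 1  # skip the CSV header row

job_config = bigquery.QueryJobConfig(table_definitions={"stations": external_config})
for row in client.query("SELECT * FROM stations LIMIT 5", job_config=job_config).result():
    print(dict(row))
```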
sdpython/pyensae
_doc/notebooks/pyensae_flat2db3.ipynb
mit
[ "Import a flat file into a SQLite database\nImporting a flat file can be done with pandas. pyensae proposes a function which does so by guessing the schema over the first lines.", "import pyensae\nfrom jyquickhelper import add_notebook_menu\nadd_notebook_menu()", "Mix SQLite and DataFrame\nWhen a dataset is huge (~3Gb), it takes some time to load it into a DataFrame. It is difficult to look at it in any tool (Python, Excel, ...). One option I often use is to load it into a SQL server if you have one. If you do not, then SQLite is the best option. Let's see how it works with a custom dataset.", "import pyensae\nimport pyensae.datasource\npyensae.datasource.download_data(\"velib_vanves.zip\", website = \"xd\")", "As this file is small (just an example), let's see what it looks like as a DataFrame.", "import pandas\ndf = pandas.read_csv(\"velib_vanves.txt\",sep=\"\\t\")\ndf.head(n=2)", "Then we import it into a SQLite3 database. The following function automatically guesses the table schema.", "from pyensae.sql import import_flatfile_into_database\nimport_flatfile_into_database(\"velib_vanves.db3\", \"velib_vanves.txt\", add_key=\"key\")", "We check that the database exists:", "import os\nos.listdir(\".\")", "On Windows, you can use SQLiteSpy to visualize the created table. We use pymyinstall to download it.", "try:\n from pymyinstall.installcustom import install_sqlitespy\n exe = install_sqlitespy()\nexcept:\n # we skip an exception\n # the website can be down...\n exe = None\nexe", "We just need to run it (see run_cmd).", "if exe:\n from pyquickhelper import run_cmd\n run_cmd(\"SQLiteSpy.exe velib_vanves.db3\")", "You should be able to see something like (on Windows):", "from pyquickhelper.helpgen import NbImage\nNbImage('img_nb_sqlitespy.png')", "It is easier to use that tool to extract a sample of the data. Once it is ready, you can execute the SQL query in Python and convert the results into a DataFrame. The following code extracts a random sample from the original set.", "sql = \"\"\"SELECT * FROM velib_vanves WHERE key IN ({0})\"\"\"\n\nimport random\nfrom pyquickhelper.loghelper import noLOG\nfrom pyensae.sql import Database\ndb = Database(\"velib_vanves.db3\", LOG = noLOG)\ndb.connect()\nmx = db.execute_view(\"SELECT MAX(key) FROM velib_vanves\")[0][0]\nrnd_ids = [ random.randint(1,mx) for i in range(0,100) ] # list of 100 random ids\nstrids = \",\".join( str(_) for _ in rnd_ids )\nres = db.execute_view(sql.format (strids))\ndf = db.to_df(sql.format (strids))\ndb.close()\ndf.head()[[\"key\",\"last_update\",\"available_bike_stands\",\"available_bikes\"]]", "<h3 id=\"mem\">Memory Dump</h3>\n\nOnce you have a big dataset available in text format, it takes some time to load into memory and you need to do that every time you need it again after you close your Python instance.", "with open(\"temp_big_file.txt\",\"w\") as f :\n f.write(\"c1\\tc2\\tc3\\n\")\n for i in range(0,10000000):\n x = [ i, random.random(), random.random() ]\n s = [ str(_) for _ in x ]\n f.write( \"\\t\".join(s) + \"\\n\" )\nos.stat(\"temp_big_file.txt\").st_size \n\nimport pandas,time\nt = time.perf_counter()\ndf = pandas.read_csv(\"temp_big_file.txt\",sep=\"\\t\")\nprint(\"duration (s)\",time.perf_counter()-t)", "It is slow considering that many datasets contain many more features. 
But we can speed it up by doing a kind of memory dump with to_pickle.", "t = time.perf_counter()\ndf.to_pickle(\"temp_big_file.bin\")\nprint(\"duration (s)\",time.perf_counter()-t)", "And we reload it with read_pickle:", "t = time.perf_counter()\ndf = pandas.read_pickle(\"temp_big_file.bin\")\nprint(\"duration (s)\",time.perf_counter()-t)", "It is 10 times faster and usually smaller on the disk." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
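For readers of the pyensae notebook above who prefer to avoid the extra dependency, the same flat-file-to-SQLite round trip can be sketched with pandas and the standard library alone; file, table and column names follow the example above.

```python
# Alternative sketch: import the tab-separated file into SQLite with pandas only.
import sqlite3
import pandas

con = sqlite3.connect("velib_vanves.db3")
df = pandas.read_csv("velib_vanves.txt", sep="\t")
# Write the DataFrame as a table; the index becomes an integer "key" column.
df.to_sql("velib_vanves", con, if_exists="replace", index_label="key")

# Pull a random sample back into a DataFrame.
sample = pandas.read_sql_query(
    "SELECT key, last_update, available_bike_stands, available_bikes "
    "FROM velib_vanves ORDER BY RANDOM() LIMIT 100", con)
con.close()
print(sample.head())
```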
acmiyaguchi/data-pipeline
reports/socorro_import/ImportCrashData.ipynb
mpl-2.0
[ "Import Socorro crash data into the Data Platform\nWe want to be able to store Socorro crash data in Parquet form so that it can be made accessible from re:dash.\nSee Bug 1273657 for more details", "!conda install boto3 --yes\n\nimport logging\nlogging.basicConfig(level=logging.INFO)\nlog = logging.getLogger(__name__)", "We create the pyspark datatype for representing the crash data in spark. This is a slightly modified version of peterbe/crash-report-struct-code.", "from pyspark.sql.types import *\n\ndef create_struct(schema):\n \"\"\" Take a JSON schema and return a pyspark StructType of equivalent structure. \"\"\"\n \n replace_definitions(schema, schema['definitions'])\n assert '$ref' not in str(schema), 're-write didnt work'\n \n struct = StructType()\n for row in get_rows(schema):\n struct.add(row)\n\n return struct\n\ndef replace_definitions(schema, definitions):\n \"\"\" Replace references in the JSON schema with their definitions.\"\"\"\n\n if 'properties' in schema:\n for prop, meta in schema['properties'].items():\n replace_definitions(meta, definitions)\n elif 'items' in schema:\n if '$ref' in schema['items']:\n ref = schema['items']['$ref'].split('/')[-1]\n schema['items'] = definitions[ref]\n replace_definitions(schema['items'], definitions)\n else:\n replace_definitions(schema['items'], definitions)\n elif '$ref' in str(schema):\n err_msg = \"Reference not found for schema: {}\".format(str(schema))\n log.error(err_msg)\n raise ValueError(err_msg)\n\ndef get_rows(schema):\n \"\"\" Map the fields in a JSON schema to corresponding data structures in pyspark.\"\"\"\n \n if 'properties' not in schema:\n err_msg = \"Invalid JSON schema: properties field is missing.\"\n log.error(err_msg)\n raise ValueError(err_msg)\n \n for prop in sorted(schema['properties']):\n meta = schema['properties'][prop]\n if 'string' in meta['type']:\n logging.debug(\"{!r} allows the type to be String AND Integer\".format(prop))\n yield StructField(prop, StringType(), 'null' in meta['type'])\n elif 'integer' in meta['type']:\n yield StructField(prop, IntegerType(), 'null' in meta['type'])\n elif 'boolean' in meta['type']:\n yield StructField(prop, BooleanType(), 'null' in meta['type'])\n elif meta['type'] == 'array' and 'items' not in meta:\n # Assuming strings in the array\n yield StructField(prop, ArrayType(StringType(), False), True)\n elif meta['type'] == 'array' and 'items' in meta:\n struct = StructType()\n for row in get_rows(meta['items']):\n struct.add(row)\n yield StructField(prop, ArrayType(struct), True)\n elif meta['type'] == 'object':\n struct = StructType()\n for row in get_rows(meta):\n struct.add(row)\n yield StructField(prop, struct, True)\n else:\n err_msg = \"Invalid JSON schema: {}\".format(str(meta)[:100])\n log.error(err_msg)\n raise ValueError(err_msg)", "First fetch from the primary source in s3 as per bug 1312006. We fall back to the github location if this is not available.", "import boto3\nimport botocore\nimport json\nimport tempfile\nimport urllib2\n\ndef fetch_schema():\n \"\"\" Fetch the crash data schema from an s3 location or github location. This\n returns the corresponding JSON schema in a python dictionary. 
\"\"\"\n \n region = \"us-west-2\"\n bucket = \"org-mozilla-telemetry-crashes\"\n key = \"crash_report.json\"\n fallback_url = \"https://raw.githubusercontent.com/mozilla/socorro/master/socorro/schemas/crash_report.json\"\n\n try:\n log.info(\"Fetching latest crash data schema from s3://{}/{}\".format(bucket, key))\n s3 = boto3.client('s3', region_name=region)\n # download schema to memory via a file like object\n resp = tempfile.TemporaryFile()\n s3.download_fileobj(bucket, key, resp)\n resp.seek(0)\n except botocore.exceptions.ClientError as e:\n log.warning((\"Could not fetch schema from s3://{}/{}: {}\\n\"\n \"Fetching crash data schema from {}\")\n .format(bucket, key, e, fallback_url))\n resp = urllib2.urlopen(fallback_url)\n\n return json.load(resp)", "Read crash data as json, convert it to parquet", "from datetime import datetime as dt, timedelta, date\nfrom pyspark.sql import SQLContext\n\n\ndef daterange(start_date, end_date):\n for n in range(int((end_date - start_date).days) + 1):\n yield (end_date - timedelta(n)).strftime(\"%Y%m%d\")\n\ndef import_day(d, schema, version):\n \"\"\"Convert JSON data stored in an S3 bucket into parquet, indexed by crash_date.\"\"\"\n source_s3path = \"s3://org-mozilla-telemetry-crashes/v1/crash_report\"\n dest_s3path = \"s3://telemetry-parquet/socorro_crash/\"\n num_partitions = 10\n \n log.info(\"Processing {}, started at {}\".format(d, dt.utcnow()))\n cur_source_s3path = \"{}/{}\".format(source_s3path, d)\n cur_dest_s3path = \"{}/v{}/crash_date={}\".format(dest_s3path, version, d)\n \n df = sqlContext.read.json(cur_source_s3path, schema=schema)\n df.repartition(num_partitions).write.parquet(cur_dest_s3path, mode=\"overwrite\")\n\ndef backfill(start_date_yyyymmdd, schema, version):\n \"\"\" Import data from a start date to yesterday's date.\n Example:\n backfill(\"20160902\", crash_schema, version)\n \"\"\"\n start_date = dt.strptime(start_date_yyyymmdd, \"%Y%m%d\")\n end_date = dt.utcnow() - timedelta(1) # yesterday\n for d in daterange(start_date, end_date):\n try:\n import_day(d)\n except Exception as e:\n log.error(e)\n\nfrom os import environ\n\n# get the relevant date\nyesterday = dt.strftime(dt.utcnow() - timedelta(1), \"%Y%m%d\")\ntarget_date = environ.get('date', yesterday)\n\n# fetch and generate the schema\nschema_data = fetch_schema()\ncrash_schema = create_struct(schema_data)\nversion = schema_data.get('$target_version', 0) # default to v0\n\n# process the data\nimport_day(target_date, crash_schema, version)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
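To make the schema conversion in the Socorro notebook above easier to follow, here is a small usage sketch: it runs a toy JSON schema (invented purely for illustration, much simpler than the real crash_report.json) through the create_struct helper defined there and prints the resulting Spark type.

```python
# Toy schema for illustration only; assumes create_struct / replace_definitions /
# get_rows from the notebook above are already defined in the session.
toy_schema = {
    "definitions": {
        "frame": {
            "type": "object",
            "properties": {
                "module": {"type": ["string", "null"]},
                "offset": {"type": ["integer", "null"]},
            },
        }
    },
    "properties": {
        "uuid": {"type": "string"},
        "crash_time": {"type": ["integer", "null"]},
        "frames": {"type": "array", "items": {"$ref": "#/definitions/frame"}},
    },
}

struct = create_struct(toy_schema)
# Prints something like:
# struct<crash_time:int,frames:array<struct<module:string,offset:int>>,uuid:string>
print(struct.simpleString())
```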
hich28/mytesttxx
tests/python/ltsmin-dve.ipynb
gpl-3.0
[ "import spot\nimport spot.ltsmin\n# The following line causes the notebook to exit with 77 if divine is not \n# installed, therefore skipping this test in the test suite.\nspot.ltsmin.require('divine')\n# This notebook also tests the limitation of the number of states in the GraphViz output\nspot.setup(max_states=10)", "There are two ways to load a DiVinE model: from a file or from a cell. \nLoading from a file\nWe will first start with the file version; however, because this notebook should also be a self-contained test case, we start by writing a model into a file.", "!rm -f test1.dve", "%%file test1.dve\nint a = 0, b = 0;\n\nprocess P {\n state x;\n init x;\n\n trans\n x -> x { guard a < 3 && b < 3; effect a = a + 1; },\n x -> x { guard a < 3 && b < 3; effect b = b + 1; };\n}\n\nprocess Q {\n state wait, work;\n init wait;\n trans\n wait -> work { guard b > 1; },\n work -> wait { guard a > 1; };\n}\n\nsystem async;", "The spot.ltsmin.load function compiles the model using the ltsmin interface and loads it. This should work with DiVinE models if divine --LTSmin works, and with Promela models if spins is installed.", "m = spot.ltsmin.load('test1.dve')", "Compiling the model creates several kinds of files. The test1.dve file is converted into a C++ source file test1.dve.cpp which is then compiled into a shared library test1.dve2c. Because spot.ltsmin.load() has already loaded this shared library, all those files can be erased. If you do not erase the files, spot.ltsmin.load() will use the timestamps to decide whether the library should be recompiled or not every time you load the library.\nFor editing and loading a DVE file from a notebook, it is better to use the %%dve magic, as shown next.", "!rm -f test1.dve test1.dve.cpp test1.dve2C", "Loading from a notebook cell\nThe %%dve cell magic implements all of the above steps (saving the model into a temporary file, compiling it, loading it, erasing the temporary files). The variable name that should receive the model (here m) should be indicated on the first line, after %dve.", "%%dve m\nint a = 0, b = 0;\n\nprocess P {\n state x;\n init x;\n\n trans\n x -> x { guard a < 3 && b < 3; effect a = a + 1; },\n x -> x { guard a < 3 && b < 3; effect b = b + 1; };\n}\n\nprocess Q {\n state wait, work;\n init wait;\n trans\n wait -> work { guard b > 1; },\n work -> wait { guard a > 1; };\n}\n\nsystem async;", "Working with an ltsmin model\nPrinting an ltsmin model shows some information about the variables it contains and their types; however, the info() method provides the data in a map that is easier to work with.", "m\n\nsorted(m.info().items())", "To obtain a Kripke structure, call kripke and supply a list of atomic propositions to observe in the model.", "k = m.kripke([\"a<1\", \"b>2\"])\nk\n\nk.show('.<15')\n\nk.show('.<0') # unlimited output\n\na = spot.translate('\"a<1\" U \"b>2\"'); a\n\nspot.otf_product(k, a)", "If we want to create a model_check function that takes a model and a formula, we need to get the list of atomic propositions used in the formula using atomic_prop_collect(). This returns an atomic_prop_set:", "a = spot.atomic_prop_collect(spot.formula('\"a < 2\" W \"b == 1\"')); a\n\ndef model_check(f, m):\n f = spot.formula(f)\n ss = m.kripke(spot.atomic_prop_collect(f))\n nf = spot.formula_Not(f).translate()\n return spot.otf_product(ss, nf).is_empty() \n\nmodel_check('\"a<1\" R \"b > 1\"', m)", "Instead of otf_product(x, y).is_empty() we prefer to call !x.intersects(y). 
There is also x.intersecting_run(y) that can be used to return a counterexample.", "def model_debug(f, m):\n f = spot.formula(f)\n ss = m.kripke(spot.atomic_prop_collect(f))\n nf = spot.formula_Not(f).translate()\n return ss.intersecting_run(nf)\n\nrun = model_debug('\"a<1\" R \"b > 1\"', m); run", "This accepting run can be represented as an automaton (the True argument requires the state names to be preserved). This can be more readable.", "run.as_twa(True)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
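A small convenience sketch for the spot/LTSmin notebook above: it just combines the model_check and model_debug helpers defined there so that a violated property immediately shows its counterexample run (both helpers and the loaded model m come from that notebook).

```python
# Assumes model_check, model_debug and the loaded model `m` from the notebook above.
def check_and_report(formula, model):
    if model_check(formula, model):
        print('property holds:', formula)
        return None
    print('property violated:', formula)
    return model_debug(formula, model)  # the accepting run is the counterexample

check_and_report('"a<1" R "b > 1"', m)
```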
keras-team/keras-io
examples/vision/ipynb/3D_image_classification.ipynb
apache-2.0
[ "3D image classification from CT scans\nAuthor: Hasib Zunair<br>\nDate created: 2020/09/23<br>\nLast modified: 2020/09/23<br>\nDescription: Train a 3D convolutional neural network to predict the presence of pneumonia.\nIntroduction\nThis example will show the steps needed to build a 3D convolutional neural network (CNN)\nto predict the presence of viral pneumonia in computed tomography (CT) scans. 2D CNNs are\ncommonly used to process RGB images (3 channels). A 3D CNN is simply the 3D\nequivalent: it takes as input a 3D volume or a sequence of 2D frames (e.g. slices in a CT scan).\n3D CNNs are a powerful model for learning representations for volumetric data.\nReferences\n\nA survey on Deep Learning Advances on Different 3D Data Representations\nVoxNet: A 3D Convolutional Neural Network for Real-Time Object Recognition\nFusionNet: 3D Object Classification Using Multiple Data Representations\nUniformizing Techniques to Process CT scans with 3D CNNs for Tuberculosis Prediction\n\nSetup", "import os\nimport zipfile\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow import keras\nfrom tensorflow.keras import layers", "Downloading the MosMedData: Chest CT Scans with COVID-19 Related Findings\nIn this example, we use a subset of the\nMosMedData: Chest CT Scans with COVID-19 Related Findings.\nThis dataset consists of lung CT scans with COVID-19 related findings, as well as without such findings.\nWe will be using the associated radiological findings of the CT scans as labels to build\na classifier to predict the presence of viral pneumonia.\nHence, the task is a binary classification problem.", "# Download url of normal CT scans.\nurl = \"https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-0.zip\"\nfilename = os.path.join(os.getcwd(), \"CT-0.zip\")\nkeras.utils.get_file(filename, url)\n\n# Download url of abnormal CT scans.\nurl = \"https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-23.zip\"\nfilename = os.path.join(os.getcwd(), \"CT-23.zip\")\nkeras.utils.get_file(filename, url)\n\n# Make a directory to store the data.\nos.makedirs(\"MosMedData\")\n\n# Unzip data in the newly created directory.\nwith zipfile.ZipFile(\"CT-0.zip\", \"r\") as z_fp:\n z_fp.extractall(\"./MosMedData/\")\n\nwith zipfile.ZipFile(\"CT-23.zip\", \"r\") as z_fp:\n z_fp.extractall(\"./MosMedData/\")", "Loading data and preprocessing\nThe files are provided in Nifti format with the extension .nii. To read the\nscans, we use the nibabel package.\nYou can install the package via pip install nibabel. CT scans store raw voxel\nintensity in Hounsfield units (HU). They range from -1024 to above 2000 in this dataset.\nValues above 400 are bones with different radiointensity, so this is used as an upper bound. A threshold\nbetween -1000 and 400 is commonly used to normalize CT scans.\nTo process the data, we do the following:\n\nWe first rotate the volumes by 90 degrees, so the orientation is fixed.\nWe scale the HU values to be between 0 and 1.\nWe resize width, height and depth.\n\nHere we define several helper functions to process the data. 
These functions\nwill be used when building training and validation datasets.", "\nimport nibabel as nib\n\nfrom scipy import ndimage\n\n\ndef read_nifti_file(filepath):\n \"\"\"Read and load volume\"\"\"\n # Read file\n scan = nib.load(filepath)\n # Get raw data\n scan = scan.get_fdata()\n return scan\n\n\ndef normalize(volume):\n \"\"\"Normalize the volume\"\"\"\n min = -1000\n max = 400\n volume[volume < min] = min\n volume[volume > max] = max\n volume = (volume - min) / (max - min)\n volume = volume.astype(\"float32\")\n return volume\n\n\ndef resize_volume(img):\n \"\"\"Resize across z-axis\"\"\"\n # Set the desired depth\n desired_depth = 64\n desired_width = 128\n desired_height = 128\n # Get current depth\n current_depth = img.shape[-1]\n current_width = img.shape[0]\n current_height = img.shape[1]\n # Compute depth factor\n depth = current_depth / desired_depth\n width = current_width / desired_width\n height = current_height / desired_height\n depth_factor = 1 / depth\n width_factor = 1 / width\n height_factor = 1 / height\n # Rotate\n img = ndimage.rotate(img, 90, reshape=False)\n # Resize across z-axis\n img = ndimage.zoom(img, (width_factor, height_factor, depth_factor), order=1)\n return img\n\n\ndef process_scan(path):\n \"\"\"Read and resize volume\"\"\"\n # Read scan\n volume = read_nifti_file(path)\n # Normalize\n volume = normalize(volume)\n # Resize width, height and depth\n volume = resize_volume(volume)\n return volume\n", "Let's read the paths of the CT scans from the class directories.", "# Folder \"CT-0\" consist of CT scans having normal lung tissue,\n# no CT-signs of viral pneumonia.\nnormal_scan_paths = [\n os.path.join(os.getcwd(), \"MosMedData/CT-0\", x)\n for x in os.listdir(\"MosMedData/CT-0\")\n]\n# Folder \"CT-23\" consist of CT scans having several ground-glass opacifications,\n# involvement of lung parenchyma.\nabnormal_scan_paths = [\n os.path.join(os.getcwd(), \"MosMedData/CT-23\", x)\n for x in os.listdir(\"MosMedData/CT-23\")\n]\n\nprint(\"CT scans with normal lung tissue: \" + str(len(normal_scan_paths)))\nprint(\"CT scans with abnormal lung tissue: \" + str(len(abnormal_scan_paths)))\n", "Build train and validation datasets\nRead the scans from the class directories and assign labels. Downsample the scans to have\nshape of 128x128x64. Rescale the raw HU values to the range 0 to 1.\nLastly, split the dataset into train and validation subsets.", "# Read and process the scans.\n# Each scan is resized across height, width, and depth and rescaled.\nabnormal_scans = np.array([process_scan(path) for path in abnormal_scan_paths])\nnormal_scans = np.array([process_scan(path) for path in normal_scan_paths])\n\n# For the CT scans having presence of viral pneumonia\n# assign 1, for the normal ones assign 0.\nabnormal_labels = np.array([1 for _ in range(len(abnormal_scans))])\nnormal_labels = np.array([0 for _ in range(len(normal_scans))])\n\n# Split data in the ratio 70-30 for training and validation.\nx_train = np.concatenate((abnormal_scans[:70], normal_scans[:70]), axis=0)\ny_train = np.concatenate((abnormal_labels[:70], normal_labels[:70]), axis=0)\nx_val = np.concatenate((abnormal_scans[70:], normal_scans[70:]), axis=0)\ny_val = np.concatenate((abnormal_labels[70:], normal_labels[70:]), axis=0)\nprint(\n \"Number of samples in train and validation are %d and %d.\"\n % (x_train.shape[0], x_val.shape[0])\n)", "Data augmentation\nThe CT scans also augmented by rotating at random angles during training. 
Since\nthe data is stored in rank-3 tensors of shape (samples, height, width, depth),\nwe add a dimension of size 1 at axis 4 to be able to perform 3D convolutions on\nthe data. The new shape is thus (samples, height, width, depth, 1). There are\ndifferent kinds of preprocessing and augmentation techniques out there;\nthis example shows a few simple ones to get started.", "import random\n\nfrom scipy import ndimage\n\n\n@tf.function\ndef rotate(volume):\n \"\"\"Rotate the volume by a few degrees\"\"\"\n\n def scipy_rotate(volume):\n # define some rotation angles\n angles = [-20, -10, -5, 5, 10, 20]\n # pick angles at random\n angle = random.choice(angles)\n # rotate volume\n volume = ndimage.rotate(volume, angle, reshape=False)\n volume[volume < 0] = 0\n volume[volume > 1] = 1\n return volume\n\n augmented_volume = tf.numpy_function(scipy_rotate, [volume], tf.float32)\n return augmented_volume\n\n\ndef train_preprocessing(volume, label):\n \"\"\"Process training data by rotating and adding a channel.\"\"\"\n # Rotate volume\n volume = rotate(volume)\n volume = tf.expand_dims(volume, axis=3)\n return volume, label\n\n\ndef validation_preprocessing(volume, label):\n \"\"\"Process validation data by only adding a channel.\"\"\"\n volume = tf.expand_dims(volume, axis=3)\n return volume, label\n", "While defining the train and validation data loaders, the training data is passed through\nan augmentation function which randomly rotates the volume at different angles. Note that both\ntraining and validation data are already rescaled to have values between 0 and 1.", "# Define data loaders.\ntrain_loader = tf.data.Dataset.from_tensor_slices((x_train, y_train))\nvalidation_loader = tf.data.Dataset.from_tensor_slices((x_val, y_val))\n\nbatch_size = 2\n# Augment on the fly during training.\ntrain_dataset = (\n train_loader.shuffle(len(x_train))\n .map(train_preprocessing)\n .batch(batch_size)\n .prefetch(2)\n)\n# Only rescale.\nvalidation_dataset = (\n validation_loader.shuffle(len(x_val))\n .map(validation_preprocessing)\n .batch(batch_size)\n .prefetch(2)\n)", "Visualize an augmented CT scan.", "import matplotlib.pyplot as plt\n\ndata = train_dataset.take(1)\nimages, labels = list(data)[0]\nimages = images.numpy()\nimage = images[0]\nprint(\"Dimension of the CT scan is:\", image.shape)\nplt.imshow(np.squeeze(image[:, :, 30]), cmap=\"gray\")\n", "Since a CT scan has many slices, let's visualize a montage of the slices.", "\ndef plot_slices(num_rows, num_columns, width, height, data):\n \"\"\"Plot a montage of CT slices\"\"\"\n data = np.rot90(np.array(data))\n data = np.transpose(data)\n data = np.reshape(data, (num_rows, num_columns, width, height))\n rows_data, columns_data = data.shape[0], data.shape[1]\n heights = [slc[0].shape[0] for slc in data]\n widths = [slc.shape[1] for slc in data[0]]\n fig_width = 12.0\n fig_height = fig_width * sum(heights) / sum(widths)\n f, axarr = plt.subplots(\n rows_data,\n columns_data,\n figsize=(fig_width, fig_height),\n gridspec_kw={\"height_ratios\": heights},\n )\n for i in range(rows_data):\n for j in range(columns_data):\n axarr[i, j].imshow(data[i][j], cmap=\"gray\")\n axarr[i, j].axis(\"off\")\n plt.subplots_adjust(wspace=0, hspace=0, left=0, right=1, bottom=0, top=1)\n plt.show()\n\n\n# Visualize montage of slices.\n# 4 rows and 10 columns for 40 slices of the CT scan.\nplot_slices(4, 10, 128, 128, image[:, :, :40])", "Define a 3D convolutional neural network\nTo make the model easier to understand, we structure it into blocks.\nThe architecture of 
the 3D CNN used in this example\nis based on this paper.", "\ndef get_model(width=128, height=128, depth=64):\n \"\"\"Build a 3D convolutional neural network model.\"\"\"\n\n inputs = keras.Input((width, height, depth, 1))\n\n x = layers.Conv3D(filters=64, kernel_size=3, activation=\"relu\")(inputs)\n x = layers.MaxPool3D(pool_size=2)(x)\n x = layers.BatchNormalization()(x)\n\n x = layers.Conv3D(filters=64, kernel_size=3, activation=\"relu\")(x)\n x = layers.MaxPool3D(pool_size=2)(x)\n x = layers.BatchNormalization()(x)\n\n x = layers.Conv3D(filters=128, kernel_size=3, activation=\"relu\")(x)\n x = layers.MaxPool3D(pool_size=2)(x)\n x = layers.BatchNormalization()(x)\n\n x = layers.Conv3D(filters=256, kernel_size=3, activation=\"relu\")(x)\n x = layers.MaxPool3D(pool_size=2)(x)\n x = layers.BatchNormalization()(x)\n\n x = layers.GlobalAveragePooling3D()(x)\n x = layers.Dense(units=512, activation=\"relu\")(x)\n x = layers.Dropout(0.3)(x)\n\n outputs = layers.Dense(units=1, activation=\"sigmoid\")(x)\n\n # Define the model.\n model = keras.Model(inputs, outputs, name=\"3dcnn\")\n return model\n\n\n# Build model.\nmodel = get_model(width=128, height=128, depth=64)\nmodel.summary()", "Train model", "# Compile model.\ninitial_learning_rate = 0.0001\nlr_schedule = keras.optimizers.schedules.ExponentialDecay(\n initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True\n)\nmodel.compile(\n loss=\"binary_crossentropy\",\n optimizer=keras.optimizers.Adam(learning_rate=lr_schedule),\n metrics=[\"acc\"],\n)\n\n# Define callbacks.\ncheckpoint_cb = keras.callbacks.ModelCheckpoint(\n \"3d_image_classification.h5\", save_best_only=True\n)\nearly_stopping_cb = keras.callbacks.EarlyStopping(monitor=\"val_acc\", patience=15)\n\n# Train the model, doing validation at the end of each epoch\nepochs = 100\nmodel.fit(\n train_dataset,\n validation_data=validation_dataset,\n epochs=epochs,\n shuffle=True,\n verbose=2,\n callbacks=[checkpoint_cb, early_stopping_cb],\n)", "It is important to note that the number of samples is very small (only 200) and we don't\nspecify a random seed. As such, you can expect significant variance in the results. The full dataset\nwhich consists of over 1000 CT scans can be found here. Using the full\ndataset, an accuracy of 83% was achieved. A variability of 6-7% in the classification\nperformance is observed in both cases.\nVisualizing model performance\nHere the model accuracy and loss for the training and the validation sets are plotted.\nSince the validation set is class-balanced, accuracy provides an unbiased representation\nof the model's performance.", "fig, ax = plt.subplots(1, 2, figsize=(20, 3))\nax = ax.ravel()\n\nfor i, metric in enumerate([\"acc\", \"loss\"]):\n ax[i].plot(model.history.history[metric])\n ax[i].plot(model.history.history[\"val_\" + metric])\n ax[i].set_title(\"Model {}\".format(metric))\n ax[i].set_xlabel(\"epochs\")\n ax[i].set_ylabel(metric)\n ax[i].legend([\"train\", \"val\"])", "Make predictions on a single CT scan", "# Load best weights.\nmodel.load_weights(\"3d_image_classification.h5\")\nprediction = model.predict(np.expand_dims(x_val[0], axis=0))[0]\nscores = [1 - prediction[0], prediction[0]]\n\nclass_names = [\"normal\", \"abnormal\"]\nfor score, name in zip(scores, class_names):\n print(\n \"This model is %.2f percent confident that CT scan is %s\"\n % ((100 * score), name)\n )" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
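As a follow-up to the 3D CNN notebook above, a short sketch that wraps the single-scan prediction into a reusable helper. It reuses process_scan and the trained model from that notebook; the 0.5 decision threshold is an assumption, not part of the original example.

```python
import numpy as np

# Assumes `model` (with the best weights loaded) and `process_scan`
# from the notebook above are available in the session.
def predict_ct_scan(path, threshold=0.5):
    volume = process_scan(path)                   # read, normalize, resize
    volume = volume[np.newaxis, ..., np.newaxis]  # add batch and channel dims
    prob_abnormal = float(model.predict(volume)[0][0])
    label = "abnormal" if prob_abnormal >= threshold else "normal"
    return label, prob_abnormal

# Example (path taken from the lists built earlier in the notebook):
# label, p = predict_ct_scan(abnormal_scan_paths[0])
# print(label, p)
```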
uqyge/combustionML
FPV_ANN/notebooks/.ipynb_checkpoints/fgm_nn_inhouse-checkpoint.ipynb
mit
[ "import fgm tables", "!pip install gdown\n!mkdir ./data\nimport gdown\n\ndef data_import():\n ids = {\n \"tables_of_fgm.h5\":\"1XHPF7hUqT-zp__qkGwHg8noRazRnPqb0\"\n }\n\n url = 'https://drive.google.com/uc?id='\n\n for title, g_id in ids.items(): \n try:\n output_file = open(\"/content/data/\" + title, 'wb')\n gdown.download(url + g_id, output_file, quiet=False)\n except IOError as e:\n print(e)\n finally:\n output_file.close()\n \ndata_import()", "Function libraries\nResBlock\nres_block is the backbone of the resnet structure. The res_block has multiple branches, a bottleneck layer and a skip connection built in. This modularized design makes creating deep neural networks easy.", "import tensorflow as tf\nimport keras\nfrom keras.layers import Dense, Activation, Input, BatchNormalization, Dropout, concatenate\nfrom keras import layers\n\ndef res_branch(bi, conv_name_base, bn_name_base, scale, input_tensor, n_neuron, stage, block,dp1, bn=False):\n x_1 = Dense(scale * n_neuron, name=conv_name_base + '2a_'+str(bi))(input_tensor)\n if bn:\n x_1 = BatchNormalization(axis=-1, name=bn_name_base + '2a_'+str(bi))(x_1)\n x_1 = Activation('relu')(x_1)\n if dp1>0:\n x_1 = Dropout(dp1)(x_1)\n return x_1\n\ndef res_block(input_tensor,scale, n_neuron, stage, block, bn=False,branches=0):\n conv_name_base = 'res' + str(stage) + block + '_branch'\n bn_name_base = 'bn' + str(stage) + block + '_branch'\n\n# scale = 2\n x = Dense(scale * n_neuron, name=conv_name_base + '2a')(input_tensor)\n if bn:\n x = BatchNormalization(axis=-1, name=bn_name_base + '2a')(x)\n x = Activation('relu')(x)\n dp1=0.\n if dp1 >0:\n x = Dropout(dp1)(x)\n \n branch_list=[x]\n for i in range(branches-1):\n branch_list.append(res_branch(i,conv_name_base, bn_name_base, scale,input_tensor,n_neuron,stage,block,dp1,bn))\n if branches-1 > 0:\n x = Dense(n_neuron, name=conv_name_base + '2b')(concatenate(branch_list,axis=-1))\n# x = Dense(n_neuron, name=conv_name_base + '2b')(layers.add(branch_list))\n else:\n x = Dense(n_neuron, name=conv_name_base + '2b')(x)\n \n if bn:\n x = BatchNormalization(axis=-1, name=bn_name_base + '2b')(x)\n x = layers.add([x, input_tensor])\n x = Activation('relu')(x)\n if dp1 >0:\n x = Dropout(dp1)(x)\n\n return x", "data_reader\nThe read_h5_data function reads the table from the hdf5 file. \nIn the FGM case we chose not to scale the input features, since they all fall between 0 and 1. There is a great variety in the output features. In the reaction region close to stoichiometry the gradients in the output properties are large. A good example is the source term for the progress variable, which rises from 0 to 1e5. So the output features are first transformed to a logarithmic scale and then rearranged between 0 and 1. The outputs are normalised by their variance. This way the output values will be large where the gradients are large, so more focus is put on those regions during training. The same 'focus design' has been put on the loss function selection as well. 
mse is selected over mae for that the squared error put more weights on the data samples that shows great changes.", "import numpy as np\nimport pandas as pd\nfrom sklearn.preprocessing import MinMaxScaler, StandardScaler\n\n\nclass data_scaler(object):\n def __init__(self):\n self.norm = None\n self.norm_1 = None\n self.std = None\n self.case = None\n self.scale = 1\n self.bias = 1e-20\n# self.bias = 1\n\n\n self.switcher = {\n 'min_std': 'min_std',\n 'std2': 'std2',\n 'std_min':'std_min',\n 'min': 'min',\n 'no':'no',\n 'log': 'log',\n 'log_min':'log_min',\n 'log2': 'log2',\n 'tan': 'tan'\n }\n\n def fit_transform(self, input_data, case):\n self.case = case\n if self.switcher.get(self.case) == 'min_std':\n self.norm = MinMaxScaler()\n self.std = StandardScaler()\n out = self.norm.fit_transform(input_data)\n out = self.std.fit_transform(out)\n\n if self.switcher.get(self.case) == 'std2':\n self.std = StandardScaler()\n out = self.std.fit_transform(input_data)\n\n if self.switcher.get(self.case) == 'std_min':\n self.norm = MinMaxScaler()\n self.std = StandardScaler()\n out = self.std.fit_transform(input_data)\n out = self.norm.fit_transform(out)\n\n if self.switcher.get(self.case) == 'min':\n self.norm = MinMaxScaler()\n out = self.norm.fit_transform(input_data)\n\n if self.switcher.get(self.case) == 'no':\n self.norm = MinMaxScaler()\n self.std = StandardScaler()\n out = input_data\n\n if self.switcher.get(self.case) == 'log':\n out = - np.log(np.asarray(input_data / self.scale) + self.bias)\n self.std = StandardScaler()\n out = self.std.fit_transform(out)\n\n if self.switcher.get(self.case) == 'log_min':\n out = - np.log(np.asarray(input_data / self.scale) + self.bias)\n self.norm = MinMaxScaler()\n out = self.norm.fit_transform(out)\n\n if self.switcher.get(self.case) == 'log2':\n self.norm = MinMaxScaler()\n self.norm_1 = MinMaxScaler()\n out = self.norm.fit_transform(input_data)\n out = np.log(np.asarray(out) + self.bias)\n out = self.norm_1.fit_transform(out)\n\n if self.switcher.get(self.case) == 'tan':\n self.norm = MaxAbsScaler()\n self.std = StandardScaler()\n out = self.std.fit_transform(input_data)\n out = self.norm.fit_transform(out)\n out = np.tan(out / (2 * np.pi + self.bias))\n\n return out\n\n def transform(self, input_data):\n if self.switcher.get(self.case) == 'min_std':\n out = self.norm.transform(input_data)\n out = self.std.transform(out)\n\n if self.switcher.get(self.case) == 'std2':\n out = self.std.transform(input_data)\n\n if self.switcher.get(self.case) == 'std_min':\n out = self.std.transform(input_data)\n out = self.norm.transform(out)\n\n if self.switcher.get(self.case) == 'min':\n out = self.norm.transform(input_data)\n\n if self.switcher.get(self.case) == 'no':\n out = input_data\n\n if self.switcher.get(self.case) == 'log':\n out = - np.log(np.asarray(input_data / self.scale) + self.bias)\n out = self.std.transform(out)\n\n if self.switcher.get(self.case) == 'log_min':\n out = - np.log(np.asarray(input_data / self.scale) + self.bias)\n out = self.norm.transform(out)\n\n if self.switcher.get(self.case) == 'log2':\n out = self.norm.transform(input_data)\n out = np.log(np.asarray(out) + self.bias)\n out = self.norm_1.transform(out)\n\n if self.switcher.get(self.case) == 'tan':\n out = self.std.transform(input_data)\n out = self.norm.transform(out)\n out = np.tan(out / (2 * np.pi + self.bias))\n\n return out\n\n def inverse_transform(self, input_data):\n\n if self.switcher.get(self.case) == 'min_std':\n out = self.std.inverse_transform(input_data)\n out = 
self.norm.inverse_transform(out)\n\n if self.switcher.get(self.case) == 'std2':\n out = self.std.inverse_transform(input_data)\n\n if self.switcher.get(self.case) == 'std_min':\n out = self.norm.inverse_transform(input_data)\n out = self.std.inverse_transform(out)\n\n if self.switcher.get(self.case) == 'min':\n out = self.norm.inverse_transform(input_data)\n\n if self.switcher.get(self.case) == 'no':\n out = input_data\n\n if self.switcher.get(self.case) == 'log':\n out = self.std.inverse_transform(input_data)\n out = (np.exp(-out) - self.bias) * self.scale\n\n if self.switcher.get(self.case) == 'log_min':\n out = self.norm.inverse_transform(input_data)\n out = (np.exp(-out) - self.bias) * self.scale\n\n if self.switcher.get(self.case) == 'log2':\n out = self.norm_1.inverse_transform(input_data)\n out = np.exp(out) - self.bias\n out = self.norm.inverse_transform(out)\n\n if self.switcher.get(self.case) == 'tan':\n out = (2 * np.pi + self.bias) * np.arctan(input_data)\n out = self.norm.inverse_transform(out)\n out = self.std.inverse_transform(out)\n\n return out\n \n\ndef read_h5_data(fileName, input_features, labels):\n df = pd.read_hdf(fileName)\n df = df[df['f']<0.45]\n \n input_df=df[input_features]\n in_scaler = data_scaler()\n input_np = in_scaler.fit_transform(input_df.values,'no')\n\n label_df=df[labels].clip(0)\n# if 'PVs' in labels:\n# label_df['PVs']=np.log(label_df['PVs']+1)\n out_scaler = data_scaler()\n label_np = out_scaler.fit_transform(label_df.values,'std2')\n\n return input_np, label_np, df, in_scaler, out_scaler", "model\nload data", "%matplotlib inline\nimport os\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n\n# define the labels\ncol_labels=['C2H3', 'C2H6', 'CH2', 'H2CN', 'C2H4', 'H2O2', 'C2H', 'CN',\n 'heatRelease', 'NCO', 'NNH', 'N2', 'AR', 'psi', 'CO', 'CH4', 'HNCO',\n 'CH2OH', 'HCCO', 'CH2CO', 'CH', 'mu', 'C2H2', 'C2H5', 'H2', 'T', 'PVs',\n 'O', 'O2', 'N2O', 'C', 'C3H7', 'CH2(S)', 'NH3', 'HO2', 'NO', 'HCO',\n 'NO2', 'OH', 'HCNO', 'CH3CHO', 'CH3', 'NH', 'alpha', 'CH3O', 'CO2',\n 'CH3OH', 'CH2CHO', 'CH2O', 'C3H8', 'HNO', 'NH2', 'HCN', 'H', 'N', 'H2O',\n 'HCCOH', 'HCNN']\n\n# Taking 0 out\ncol_labels.remove('AR')\ncol_labels.remove('heatRelease')\n\n# labels = ['CH4','O2','H2O','CO','CO2','T','PVs','psi','mu','alpha']\nlabels = ['T','PVs']\n# labels = ['T','CH4','O2','CO2','CO','H2O','H2','OH','psi']\n# labels = ['CH2OH','HNCO','CH3OH', 'CH2CHO', 'CH2O', 'C3H8', 'HNO', 'NH2', 'HCN']\n\n# labels = np.random.choice(col_labels,20,replace=False).tolist()\n# labels.append('PVs')\n\n# labels = col_labels\n\nprint(labels)\n\ninput_features=['f','pv','zeta']\n\n# read in the data\nx_input, y_label, df, in_scaler, out_scaler = read_h5_data('./data/tables_of_fgm.h5',input_features=input_features, labels = labels)", "build neural network model", "from sklearn.model_selection import train_test_split\n\nimport tensorflow as tf\nfrom keras.models import Model\nfrom keras.layers import Dense, Input\nfrom keras.callbacks import ModelCheckpoint\n\n# split into train and test data\nx_train, x_test, y_train, y_test = train_test_split(x_input,y_label, test_size=0.01)\n\nn_neuron = 10\nscale=3\nbranches=3\n# %%\nprint('set up ANN')\n# ANN parameters\ndim_input = x_train.shape[1]\ndim_label = y_train.shape[1]\n\nbatch_norm = False\n\n# This returns a tensor\ninputs = Input(shape=(dim_input,),name='input_1')\n\n# a layer instance is callable on a tensor, and returns a tensor\nx = Dense(n_neuron, activation='relu')(inputs)\n\n# less then 2 res_block, there will 
be variance\nx = res_block(x, scale, n_neuron, stage=1, block='a', bn=batch_norm,branches=branches)\nx = res_block(x, scale, n_neuron, stage=1, block='b', bn=batch_norm,branches=branches)\n# x = res_block(x, scale, n_neuron, stage=1, block='c', bn=batch_norm,branches=branches)\n\n\nx = Dense(100, activation='relu')(x)\nx = Dropout(0.1)(x)\npredictions = Dense(dim_label, activation='linear', name='output_1')(x)\n\nmodel = Model(inputs=inputs, outputs=predictions)\nmodel.summary()", "model training\ngpu training", "import keras.backend as K\nfrom keras.callbacks import LearningRateScheduler\nimport math\n\ndef cubic_loss(y_true, y_pred):\n return K.mean(K.square(y_true - y_pred)*K.abs(y_true - y_pred), axis=-1)\n\ndef coeff_r2(y_true, y_pred):\n from keras import backend as K\n SS_res = K.sum(K.square( y_true-y_pred ))\n SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )\n return ( 1 - SS_res/(SS_tot + K.epsilon()) )\n\n \ndef step_decay(epoch):\n initial_lrate = 0.001\n drop = 0.5\n epochs_drop = 200.0\n lrate = initial_lrate * math.pow(drop,math.floor((1+epoch)/epochs_drop))\n return lrate\n \nlrate = LearningRateScheduler(step_decay)\n\nfrom keras import optimizers\n\nbatch_size = 1024*32\nepochs = 60\nvsplit = 0.1\n\nloss_type='mse'\n\nadam_op = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999,epsilon=1e-8, decay=0.0, amsgrad=True)\n\nmodel.compile(loss=loss_type, optimizer=adam_op, metrics=[coeff_r2])\n# model.compile(loss=cubic_loss, optimizer=adam_op, metrics=['accuracy'])\n\n# checkpoint (save the best model based validate loss)\n!mkdir ./tmp\nfilepath = \"./tmp/weights.best.cntk.hdf5\"\n\ncheckpoint = ModelCheckpoint(filepath,\n monitor='val_loss',\n verbose=1,\n save_best_only=True,\n mode='min',\n period=20)\n\n# callbacks_list = [checkpoint]\ncallbacks_list = [lrate]\n\n\n# fit the model\nhistory = model.fit(\n x_train, y_train,\n epochs=epochs,\n batch_size=batch_size,\n validation_split=vsplit,\n verbose=2,\n# callbacks=callbacks_list,\n shuffle=True)\n\nmodel.save('trained_fgm_nn.h5')", "Training loss plot", "fig = plt.figure()\nplt.semilogy(history.history['loss'])\nif vsplit:\n plt.semilogy(history.history['val_loss'])\nplt.title(loss_type)\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper right')\nplt.show()", "Inference test\nprepare frontend for plotting", "#@title import plotly\nimport plotly.plotly as py\nimport numpy as np\nfrom plotly.offline import init_notebook_mode, iplot\n# from plotly.graph_objs import Contours, Histogram2dContour, Marker, Scatter\nimport plotly.graph_objs as go\n\ndef configure_plotly_browser_state():\n import IPython\n display(IPython.core.display.HTML('''\n <script src=\"/static/components/requirejs/require.js\"></script>\n <script>\n requirejs.config({\n paths: {\n base: '/static/base',\n plotly: 'https://cdn.plot.ly/plotly-1.5.1.min.js?noext',\n },\n });\n </script>\n '''))", "prepare data for plotting\nGPU data prepare", "from sklearn.metrics import r2_score\n# model.load_weights(\"./tmp/weights.best.cntk.hdf5\")\n\nx_test_df = pd.DataFrame(in_scaler.inverse_transform(x_test),columns=input_features)\ny_test_df = pd.DataFrame(out_scaler.inverse_transform(y_test),columns=labels)\n\npredict_val = model.predict(x_test,batch_size=1024*8)\npredict_df = pd.DataFrame(out_scaler.inverse_transform(predict_val), columns=labels)\n\ntest_data=pd.concat([x_test_df,y_test_df],axis=1)\npred_data=pd.concat([x_test_df,predict_df],axis=1)\n\n!rm 
sim_check.h5\ntest_data.to_hdf('sim_check.h5',key='test')\npred_data.to_hdf('sim_check.h5',key='pred')\n\ndf_test=pd.read_hdf('sim_check.h5',key='test')\ndf_pred=pd.read_hdf('sim_check.h5',key='pred')\n\nzeta_level=list(set(df_test['zeta']))\nzeta_level.sort()\n\n\nres_sum=pd.DataFrame()\nr2s=[]\nr2s_i=[]\n\nnames=[]\nmaxs_0=[]\nmaxs_9=[]\n\nfor r2,name in zip(r2_score(df_test,df_pred,multioutput='raw_values'),df_test.columns):\n names.append(name)\n r2s.append(r2)\n maxs_0.append(df_test[df_test['zeta']==zeta_level[0]][name].max())\n maxs_9.append(df_test[df_test['zeta']==zeta_level[8]][name].max())\n for i in zeta_level:\n r2s_i.append(r2_score(df_pred[df_pred['zeta']==i][name],\n df_test[df_test['zeta']==i][name]))\n\nres_sum['name']=names\n# res_sum['max_0']=maxs_0\n# res_sum['max_9']=maxs_9\nres_sum['z_scale']=[m_9/(m_0+1e-20) for m_9,m_0 in zip(maxs_9,maxs_0)]\n# res_sum['r2']=r2s\n\n\ntmp=np.asarray(r2s_i).reshape(-1,10)\nfor idx,z in enumerate(zeta_level):\n res_sum['r2s_'+str(z)]=tmp[:,idx]\n\nres_sum[3:]\n\nno_drop=res_sum[3:]\nno_drop", "interactive plot", "#@title Default title text\n# species = np.random.choice(labels)\nspecies = 'T' #@param {type:\"string\"}\nz_level = 0 #@param {type:\"integer\"}\n\n# configure_plotly_browser_state()\n# init_notebook_mode(connected=False)\n\nfrom sklearn.metrics import r2_score\n\n\ndf_t=df_test[df_test['zeta']==zeta_level[z_level]].sample(frac=1)\n# df_p=df_pred.loc[df_pred['zeta']==zeta_level[1]].sample(frac=0.1)\ndf_p=df_pred.loc[df_t.index]\nerror=df_p[species]-df_t[species]\nr2=round(r2_score(df_p[species],df_t[species]),4)\n\nprint(species,'r2:',r2,'max:',df_t[species].max())\n\nfig_db = {\n 'data': [ \n {'name':'test data from table',\n 'x': df_t['f'],\n 'y': df_t['pv'],\n 'z': df_t[species],\n 'type':'scatter3d', \n 'mode': 'markers',\n 'marker':{\n 'size':1\n }\n },\n {'name':'prediction from neural networks',\n 'x': df_p['f'],\n 'y': df_p['pv'],\n 'z': df_p[species],\n 'type':'scatter3d', \n 'mode': 'markers',\n 'marker':{\n 'size':1\n },\n },\n {'name':'error in difference',\n 'x': df_p['f'],\n 'y': df_p['pv'],\n 'z': error,\n 'type':'scatter3d', \n 'mode': 'markers',\n 'marker':{\n 'size':1\n },\n } \n ],\n 'layout': {\n 'scene':{\n 'xaxis': {'title':'mixture fraction'},\n 'yaxis': {'title':'progress variable'},\n 'zaxis': {'title': species+'_r2:'+str(r2)}\n }\n }\n}\n# iplot(fig_db, filename='multiple-scatter')\niplot(fig_db)\n\n\nmodel.save('trained_fgm_nn.h5')\n\nmodel.save('trained_fgm_nn.h5')\n%run -i k2tf.py --input_model='trained_fgm_nn.h5' --output_model='exported/fgm.pb'", "Stutdent networ\nThe student network is trained on the synsetic data generated from the full teacher network. 
It is mean to simplify the final model used in production.", "from keras.models import Model\nfrom keras.layers import Dense, Input\nfrom keras.callbacks import ModelCheckpoint\n\n\nn_neuron = 50\n# %%\nprint('set up student network')\n# ANN parameters\ndim_input = x_train.shape[1]\ndim_label = y_train.shape[1]\n\nbatch_norm = False\n\n# This returns a tensor\ninputs = Input(shape=(dim_input,),name='input_1')\n\n# a layer instance is callable on a tensor, and returns a tensor\nx = Dense(n_neuron, activation='relu',name='l1')(inputs)\nx = Dense(n_neuron, activation='relu',name='l2')(x)\nx = Dropout(0.1)(x)\npredictions = Dense(dim_label, activation='linear', name='output_1')(x)\n\nstudent_model = Model(inputs=inputs, outputs=predictions)\nstudent_model.summary()\n\nbatch_size = 1024*32\nepochs = 60\nvsplit = 0.1\n\nloss_type='mse'\n\nadam_op = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999,epsilon=1e-8, decay=0.0, amsgrad=True)\n\nstudent_model.compile(loss=loss_type, optimizer=adam_op, metrics=[coeff_r2])\n# model.compile(loss=cubic_loss, optimizer=adam_op, metrics=['accuracy'])\n\n# checkpoint (save the best model based validate loss)\n!mkdir ./tmp\nfilepath = \"./tmp/student_weights.best.cntk.hdf5\"\n\ncheckpoint = ModelCheckpoint(filepath,\n monitor='val_loss',\n verbose=1,\n save_best_only=True,\n mode='min',\n period=20)\n\n# callbacks_list = [checkpoint]\ncallbacks_list = [lrate]\n\nx_train_teacher = x_train\ny_train_teacher = model.predict(x_train, batch_size=1024*8)\n# fit the model\nhistory = student_model.fit(\n x_train_teacher, y_train_teacher,\n epochs=epochs,\n batch_size=batch_size,\n validation_split=vsplit,\n verbose=2,\n# callbacks=callbacks_list,\n shuffle=True)\n\nfrom sklearn.metrics import r2_score\n\nx_test_df = pd.DataFrame(in_scaler.inverse_transform(x_test),columns=input_features)\n\npredict_val = student_model.predict(x_test,batch_size=1024*8)\npredict_df = pd.DataFrame(out_scaler.inverse_transform(predict_val), columns=labels)\n\npred_data=pd.concat([x_test_df,predict_df],axis=1)\n\n!rm sim_check.h5\npred_data.to_hdf('sim_check.h5',key='pred')\ndf_pred=pd.read_hdf('sim_check.h5',key='pred')\n\nzeta_level=list(set(df_test['zeta']))\nzeta_level.sort()\n\n\nres_sum=pd.DataFrame()\nr2s=[]\nr2s_i=[]\n\nnames=[]\nmaxs_0=[]\nmaxs_9=[]\n\nfor r2,name in zip(r2_score(df_test,df_pred,multioutput='raw_values'),df_test.columns):\n names.append(name)\n r2s.append(r2)\n maxs_0.append(df_test[df_test['zeta']==zeta_level[0]][name].max())\n maxs_9.append(df_test[df_test['zeta']==zeta_level[8]][name].max())\n for i in zeta_level:\n r2s_i.append(r2_score(df_pred[df_pred['zeta']==i][name],\n df_test[df_test['zeta']==i][name]))\n\nres_sum['name']=names\n# res_sum['max_0']=maxs_0\n# res_sum['max_9']=maxs_9\nres_sum['z_scale']=[m_9/(m_0+1e-20) for m_9,m_0 in zip(maxs_9,maxs_0)]\n# res_sum['r2']=r2s\n\n\ntmp=np.asarray(r2s_i).reshape(-1,10)\nfor idx,z in enumerate(zeta_level):\n res_sum['r2s_'+str(z)]=tmp[:,idx]\n\nres_sum[3:]", "save student network weights", "import h5py\n!rm student_model_weights.h5\nstudent_model.save('student_model_weights.h5')\nf = 
h5py.File('student_model_weights.h5','r')\ndset=f['model_weights']\nlist(dset)\n\nl1_w=dset['dense_5']['dense_5']['kernel:0'][:]\nl1_b=dset['dense_5']['dense_5']['bias:0'][:]\nl1_c=np.vstack([l1_w,l1_b])\nl1_c=pd.Series(list(l1_c)).to_json()\n\nl2_w=dset['dense_6']['dense_6']['kernel:0'][:]\nl2_b=dset['dense_6']['dense_6']['bias:0'][:]\nl2_c=np.vstack([l2_w,l2_b])\nl2_c=pd.Series(list(l2_c)).to_json()\n\nl3_w=dset['output_1']['output_1_2']['kernel:0'][:]\nl3_b=dset['output_1']['output_1_2']['bias:0'][:]\nl3_c=np.vstack([l3_w,l3_b])\nl3_c=pd.Series(list(l3_c)).to_json()\n\n\n\n!rm data.json\nprint(\"{\",file=open('data.json','w'))\nprint('\"l1\":',l1_c,file=open('data.json','a'))\nprint(',\"l2\":',l2_c,file=open('data.json','a'))\nprint(',\"output\":',l3_c,file=open('data.json','a'))\nprint(\"}\",file=open('data.json','a'))\n\ntest_id=888\nprint(x_test[test_id])\nprint(student_model.predict(x_test[test_id].reshape(-1,3)))\nprint(y_test[test_id])\n\nl1_b\n\nnp.vstack([l1_w,l1_b])\n\nstudent_model.predict(np.asarray([0.5,0.1,0.1]).reshape(-1,3))\n\nstudent_model.save_weights('student_weights.h5')" ]
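The cells above serialize each student layer as `np.vstack([kernel, bias])` and write the three layers to `data.json` under the keys `"l1"`, `"l2"`, and `"output"`. As a minimal sketch (not part of the original notebook) of how that file could be consumed for production inference without Keras, assuming exactly that layout; the helper names `load_layer` and `student_forward` are illustrative only:

```python
import json
import numpy as np

def load_layer(layer_dict):
    # Rows were written keyed by their string index; sort numerically,
    # then split off the last row (the bias) from the kernel.
    rows = [layer_dict[k] for k in sorted(layer_dict, key=int)]
    mat = np.asarray(rows, dtype=float)
    return mat[:-1, :], mat[-1, :]

def relu(x):
    return np.maximum(x, 0.0)

with open('data.json') as fh:
    layers = json.load(fh)

w1, b1 = load_layer(layers['l1'])
w2, b2 = load_layer(layers['l2'])
w3, b3 = load_layer(layers['output'])

def student_forward(x):
    # Two ReLU hidden layers plus a linear output, mirroring the student
    # model above (Dropout is an identity at inference time, so it is omitted).
    h = relu(x @ w1 + b1)
    h = relu(h @ w2 + b2)
    return h @ w3 + b3

# Same scaled query probed with student_model.predict above.
print(student_forward(np.asarray([[0.5, 0.1, 0.1]])))
```

If the JSON round trip preserves the weights, this should match `student_model.predict(np.asarray([0.5,0.1,0.1]).reshape(-1,3))` up to floating-point noise.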
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tarunchhabra26/fss16dst
code/5/WS1/tchhabr.ipynb
apache-2.0
[ "Genetic Algorithm Workshop\nIn this workshop we will code up a genetic algorithm for a simple mathematical optimization problem.\nGenetic Algorithm is a\n* Meta-heuristic\n* Inspired by Natural Selection\n* Traditionally works on binary data. Can be adopted for other data types as well.\nYou can find an example illustrating GA below", "%matplotlib inline\n# All the imports\nfrom __future__ import print_function, division\nfrom math import *\nimport random\nimport sys\nimport matplotlib.pyplot as plt\n\n# TODO 1: Enter your unity ID here \n__author__ = \"tchhabr\"\n\nclass O:\n \"\"\"\n Basic Class which\n - Helps dynamic updates\n - Pretty Prints\n \"\"\"\n def __init__(self, **kwargs):\n self.has().update(**kwargs)\n def has(self):\n return self.__dict__\n def update(self, **kwargs):\n self.has().update(kwargs)\n return self\n def __repr__(self):\n show = [':%s %s' % (k, self.has()[k]) \n for k in sorted(self.has().keys()) \n if k[0] is not \"_\"]\n txt = ' '.join(show)\n if len(txt) > 60:\n show = map(lambda x: '\\t' + x + '\\n', show)\n return '{' + ' '.join(show) + '}'\n \nprint(\"Unity ID: \", __author__)", "The optimization problem\nThe problem we are considering is a mathematical one \n<img src=\"cone.png\" width=500px/>\nDecisions: r in [0, 10] cm; h in [0, 20] cm\nObjectives: minimize S, T\nConstraints: V > 200cm<sup>3</sup>", "# Few Utility functions\ndef say(*lst):\n \"\"\"\n Print whithout going to new line\n \"\"\"\n print(*lst, end=\"\")\n sys.stdout.flush()\n\ndef random_value(low, high, decimals=2):\n \"\"\"\n Generate a random number between low and high. \n decimals incidicate number of decimal places\n \"\"\"\n return round(random.uniform(low, high),decimals)\n\ndef gt(a, b): return a > b\n\ndef lt(a, b): return a < b\n\ndef shuffle(lst):\n \"\"\"\n Shuffle a list\n \"\"\"\n random.shuffle(lst)\n return lst\n\nclass Decision(O):\n \"\"\"\n Class indicating Decision of a problem\n \"\"\"\n def __init__(self, name, low, high):\n \"\"\"\n @param name: Name of the decision\n @param low: minimum value\n @param high: maximum value\n \"\"\"\n O.__init__(self, name=name, low=low, high=high)\n \nclass Objective(O):\n \"\"\"\n Class indicating Objective of a problem\n \"\"\"\n def __init__(self, name, do_minimize=True):\n \"\"\"\n @param name: Name of the objective\n @param do_minimize: Flag indicating if objective has to be minimized or maximized\n \"\"\"\n O.__init__(self, name=name, do_minimize=do_minimize)\n\nclass Point(O):\n \"\"\"\n Represents a member of the population\n \"\"\"\n def __init__(self, decisions):\n O.__init__(self)\n self.decisions = decisions\n self.objectives = None\n \n def __hash__(self):\n return hash(tuple(self.decisions))\n \n def __eq__(self, other):\n return self.decisions == other.decisions\n \n def clone(self):\n new = Point(self.decisions)\n new.objectives = self.objectives\n return new\n\nclass Problem(O):\n \"\"\"\n Class representing the cone problem.\n \"\"\"\n def __init__(self):\n O.__init__(self)\n # TODO 2: Code up decisions and objectives below for the problem\n # using the auxilary classes provided above.\n self.decisions = None\n self.objectives = None\n \n radius = Decision('radius', 0, 10)\n height = Decision('height', 0, 20)\n \n self.decisions = [radius, height]\n \n s = Objective('surface')\n t = Objective('total area')\n \n self.objectives = [s,t]\n \n @staticmethod\n def evaluate(point):\n [r, h] = point.decisions\n point.objectives = None\n # TODO 3: Evaluate the objectives S and T for the point.\n l = (r**2 + h**2)**0.5\n S 
= pi * r * l\n T = S + pi * r**2\n point.objectives = [S, T]\n \n return point.objectives\n \n @staticmethod\n def is_valid(point):\n [r, h] = point.decisions\n # TODO 4: Check if the point has valid decisions\n V = pi*(r**2)*h/3\n return V > 200\n \n def generate_one(self):\n # TODO 5: Generate a valid instance of Point.\n \n while True:\n point = Point([random_value(d.low, d.high) for d in self.decisions])\n if Problem.is_valid(point):\n return point\ncone = Problem()\npoint = cone.generate_one()\ncone.evaluate(point)\nprint(point)", "Great. Now that the class and its basic methods is defined, we move on to code up the GA.\nPopulation\nFirst up is to create an initial population.", "def populate(problem, size):\n population = []\n # TODO 6: Create a list of points of length 'size'\n return [problem.generate_one() for _ in xrange(size)]\n\nprint (populate(cone,5))\n ", "Crossover\nWe perform a single point crossover between two points", "def crossover(mom, dad):\n # TODO 7: Create a new point which contains decisions from \n # the first half of mom and second half of dad\n n = len(mom.decisions)\n return Point(mom.decisions[:n//2] + dad.decisions[n//2:])\n\npop = populate(cone,5)\ncrossover(pop[0], pop[1])", "Mutation\nRandomly change a decision such that", "def mutate(problem, point, mutation_rate=0.01):\n # TODO 8: Iterate through all the decisions in the problem\n # and if the probability is less than mutation rate\n # change the decision(randomly set it between its max and min).\n for i, d in enumerate(problem.decisions):\n if random.random() < mutation_rate:\n point.decisions[i] = random_value(d.low, d.high)\n return point\n\nprint (mutate(cone,point,0.1))\nobs = populate(cone,5)\nprint (obs)", "Fitness Evaluation\nTo evaluate fitness between points we use binary domination. Binary Domination is defined as follows:\n* Consider two points one and two.\n* For every decision o and t in one and two, o <= t\n* Atleast one decision o and t in one and two, o == t\nNote: Binary Domination is not the best method to evaluate fitness but due to its simplicity we choose to use it for this workshop.", "def bdom(problem, one, two):\n \"\"\"\n Return if one dominates two\n \"\"\"\n objs_one = problem.evaluate(one)\n objs_two = problem.evaluate(two)\n \n if (one == two):\n return False\n \n dominates = False\n # TODO 9: Return True/False based on the definition\n # of bdom above.\n first = True\n second = False\n for i,_ in enumerate(problem.objectives):\n if ((first is True) & gt(one.objectives[i], two.objectives[i])):\n first = False\n elif (not second & (one.objectives[i] is not two.objectives[i])):\n second = True\n \n dominates = first & second\n \n return dominates\n\nprint (bdom(cone,obs[4],obs[4]))", "Fitness and Elitism\nIn this workshop we will count the number of points of the population P dominated by a point A as the fitness of point A. This is a very naive measure of fitness since we are using binary domination. \nFew prominent alternate methods are\n1. Continuous Domination - Section 3.1\n2. Non-dominated Sort\n3. 
Non-dominated Sort + Niching\nElitism: Sort points with respect to the fitness and select the top points.", "def fitness(problem, population, point):\n dominates = 0\n # TODO 10: Evaluate fitness of a point.\n # For this workshop define fitness of a point \n # as the number of points dominated by it.\n # For example point dominates 5 members of population,\n # then fitness of point is 5.\n for pop in population:\n if bdom(problem, point, pop):\n dominates += 1\n return dominates\n\ndef elitism(problem, population, retain_size):\n # TODO 11: Sort the population with respect to the fitness\n # of the points and return the top 'retain_size' points of the population\n fit_pop = [fitness(cone,population,pop) for pop in population]\n population = [pop for _,pop in sorted(zip(fit_pop,population), reverse = True)]\n return population[:retain_size]", "Putting it all together and making the GA", "def ga(pop_size = 100, gens = 250):\n problem = Problem()\n population = populate(problem, pop_size)\n [problem.evaluate(point) for point in population]\n initial_population = [point.clone() for point in population]\n gen = 0 \n while gen < gens:\n say(\".\")\n children = []\n for _ in range(pop_size):\n mom = random.choice(population)\n dad = random.choice(population)\n while (mom == dad):\n dad = random.choice(population)\n child = mutate(problem, crossover(mom, dad))\n if problem.is_valid(child) and child not in population+children:\n children.append(child)\n population += children\n population = elitism(problem, population, pop_size)\n gen += 1\n print(\"\")\n return initial_population, population", "Visualize\nLets plot the initial population with respect to the final frontier.", "def plot_pareto(initial, final):\n initial_objs = [point.objectives for point in initial]\n final_objs = [point.objectives for point in final]\n initial_x = [i[0] for i in initial_objs]\n initial_y = [i[1] for i in initial_objs]\n final_x = [i[0] for i in final_objs]\n final_y = [i[1] for i in final_objs]\n plt.scatter(initial_x, initial_y, color='b', marker='+', label='initial')\n plt.scatter(final_x, final_y, color='r', marker='o', label='final')\n plt.title(\"Scatter Plot between initial and final population of GA\")\n plt.ylabel(\"Total Surface Area(T)\")\n plt.xlabel(\"Curved Surface Area(S)\")\n plt.legend(loc=9, bbox_to_anchor=(0.5, -0.175), ncol=2)\n plt.show()\n \n\ninitial, final = ga()\nplot_pareto(initial, final)", "Here is a sample output\n<img src=\"sample.png\" width=300/>" ]
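The fitness section above notes that binary domination is used only for its simplicity and lists continuous domination as a stronger alternative. Below is a short sketch of that alternative, reusing the workshop's `Problem`/`Objective`/`Point` classes; the names `cdom` and `cdom_loss` are illustrative, and for real use the objectives should first be normalized to comparable ranges:

```python
from math import exp

def cdom_loss(problem, objs_one, objs_two):
    # Exponentially weighted loss of objs_one measured against objs_two
    # (Zitzler-style continuous domination).
    n = len(problem.objectives)
    total = 0.0
    for obj, a, b in zip(problem.objectives, objs_one, objs_two):
        w = -1.0 if obj.do_minimize else 1.0
        total += -exp(w * (a - b) / n)
    return total / n

def cdom(problem, one, two):
    """Return True if point 'one' continuously dominates point 'two'."""
    objs_one = problem.evaluate(one)
    objs_two = problem.evaluate(two)
    return cdom_loss(problem, objs_one, objs_two) < cdom_loss(problem, objs_two, objs_one)
```

Swapping `bdom` for `cdom` inside `fitness` gives a graded comparison for every pair of points, which typically yields a smoother Pareto frontier than the 0/1 signal of binary domination.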
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/cmcc/cmip6/models/cmcc-cm2-vhr4/atmos.ipynb
gpl-3.0
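The cells that follow are an ES-DOC CMIP6 documentation template for the CMCC-CM2-VHR4 atmosphere model: each property cell calls `DOC.set_id(...)` and is completed with `DOC.set_value(...)` drawn from the listed valid choices. A purely hypothetical example of completed cells (the values are placeholders, not actual CMCC-CM2-VHR4 metadata):

```python
# Hypothetical illustration only -- replace with the model's real properties.
DOC.set_author("Jane Doe", "jane.doe@example.org")

DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
DOC.set_value("AGCM")

DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
DOC.set_value(72)
```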
[ "ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: CMCC\nSource ID: CMCC-CM2-VHR4\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:50\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-vhr4', 'atmos')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Overview\n2. Key Properties --&gt; Resolution\n3. Key Properties --&gt; Timestepping\n4. Key Properties --&gt; Orography\n5. Grid --&gt; Discretisation\n6. Grid --&gt; Discretisation --&gt; Horizontal\n7. Grid --&gt; Discretisation --&gt; Vertical\n8. Dynamical Core\n9. Dynamical Core --&gt; Top Boundary\n10. Dynamical Core --&gt; Lateral Boundary\n11. Dynamical Core --&gt; Diffusion Horizontal\n12. Dynamical Core --&gt; Advection Tracers\n13. Dynamical Core --&gt; Advection Momentum\n14. Radiation\n15. Radiation --&gt; Shortwave Radiation\n16. Radiation --&gt; Shortwave GHG\n17. Radiation --&gt; Shortwave Cloud Ice\n18. Radiation --&gt; Shortwave Cloud Liquid\n19. Radiation --&gt; Shortwave Cloud Inhomogeneity\n20. Radiation --&gt; Shortwave Aerosols\n21. Radiation --&gt; Shortwave Gases\n22. Radiation --&gt; Longwave Radiation\n23. Radiation --&gt; Longwave GHG\n24. Radiation --&gt; Longwave Cloud Ice\n25. Radiation --&gt; Longwave Cloud Liquid\n26. Radiation --&gt; Longwave Cloud Inhomogeneity\n27. Radiation --&gt; Longwave Aerosols\n28. Radiation --&gt; Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --&gt; Boundary Layer Turbulence\n31. Turbulence Convection --&gt; Deep Convection\n32. Turbulence Convection --&gt; Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --&gt; Large Scale Precipitation\n35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --&gt; Optical Cloud Properties\n38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\n39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --&gt; Isscp Attributes\n42. Observation Simulation --&gt; Cosp Attributes\n43. Observation Simulation --&gt; Radar Inputs\n44. Observation Simulation --&gt; Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --&gt; Orographic Gravity Waves\n47. Gravity Waves --&gt; Non Orographic Gravity Waves\n48. Solar\n49. Solar --&gt; Solar Pathways\n50. Solar --&gt; Solar Constant\n51. Solar --&gt; Orbital Parameters\n52. Solar --&gt; Insolation Ozone\n53. Volcanos\n54. Volcanos --&gt; Volcanoes Treatment \n1. Key Properties --&gt; Overview\nTop level key properties\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of atmospheric model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.4. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.5. 
High Top\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the orography.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n", "4.2. Changes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n", "5. Grid --&gt; Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Discretisation --&gt; Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n", "6.3. Scheme Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation function order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.4. Horizontal Pole\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal discretisation pole singularity treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7. Grid --&gt; Discretisation --&gt; Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType of vertical coordinate system", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere dynamical core", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the dynamical core of the model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Timestepping Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestepping framework type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of the model prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Dynamical Core --&gt; Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Top Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary heat treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Top Wind\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop boundary wind treatment", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Dynamical Core --&gt; Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nType of lateral boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Dynamical Core --&gt; Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal diffusion scheme name", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal diffusion scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Dynamical Core --&gt; Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTracer advection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.3. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTracer advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.4. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracer advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Dynamical Core --&gt; Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMomentum advection schemes name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Scheme Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Scheme Staggering Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Conserved Quantities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMomentum advection scheme conserved quantities", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Conservation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMomentum advection scheme conservation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Radiation --&gt; Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nShortwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Radiation --&gt; Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.3. 
Other Flourinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Radiation --&gt; Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18. Radiation --&gt; Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Radiation --&gt; Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Radiation --&gt; Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21. Radiation --&gt; Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral shortwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Radiation --&gt; Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Spectral Integration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme spectral integration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.4. Transport Calculation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLongwave radiation transport calculation methods", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.5. Spectral Intervals\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23. Radiation --&gt; Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. 
Greenhouse Gas Complexity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. ODS\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Other Fluorinated Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Radiation --&gt; Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25. Radiation --&gt; Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Radiation --&gt; Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27. Radiation --&gt; Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. 
General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Physical Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.3. Optical Methods\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28. Radiation --&gt; Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral longwave radiative interactions with gases", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of atmosphere convection and turbulence", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. Turbulence Convection --&gt; Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBoundary layer turbulence scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBoundary layer turbulence scheme type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.3. Closure Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoundary layer turbulence scheme closure order", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Counter Gradient\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "31. Turbulence Convection --&gt; Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDeep convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDeep convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.5. 
Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Turbulence Convection --&gt; Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nShallow convection scheme name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nshallow convection scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.3. Scheme Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nshallow convection scheme method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n", "32.4. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Microphysics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMicrophysics scheme for shallow convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Microphysics Precipitation --&gt; Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.2. Hydrometeors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nLarge scale cloud microphysics processes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the atmosphere cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.3. Atmos Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n", "36.4. Uses Separate Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the cloud scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.6. Prognostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.7. Diagnostic Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36.8. Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37. Cloud Scheme --&gt; Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "37.2. Cloud Inhomogeneity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "38.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale water distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "38.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale water distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n", "39.2. Function Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function name", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "39.3. Function Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSub-grid scale ice distribution function type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "39.4. Convection Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n", "40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of observation simulator characteristics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Observation Simulation --&gt; Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator ISSCP top height estimation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. Top Height Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator ISSCP top height direction", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42. Observation Simulation --&gt; Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP run configuration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "42.2. Number Of Grid Points\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of grid points", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.3. Number Of Sub Columns\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of sub-columns used to simulate sub-grid variability", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "42.4. Number Of Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator COSP number of levels", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43. Observation Simulation --&gt; Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar frequency (Hz)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "43.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "43.3. Gas Absorption\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses gas absorption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "43.4. Effective Radius\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator radar uses effective radius", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "44. Observation Simulation --&gt; Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nCloud simulator lidar ice type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "44.2. Overlap\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nCloud simulator lidar overlap", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "45.2. 
Sponge Layer\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.3. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground wave distribution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "45.4. Subgrid Scale Orography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSubgrid scale orography effects taken into account.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46. Gravity Waves --&gt; Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "46.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nOrographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave propagation scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "46.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47. Gravity Waves --&gt; Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "47.2. Source Mechanisms\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave source mechanisms", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.3. Calculation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNon-orographic gravity wave calculation method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n", "47.4. Propagation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave propagation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "47.5. Dissipation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of solar insolation of the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "49. Solar --&gt; Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "50. Solar --&gt; Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of the solar constant.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "50.2. Fixed Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "50.3. Transient Characteristics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nsolar constant transient characteristics (W m-2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51. Solar --&gt; Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime adaptation of orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n", "51.2. Fixed Reference Date\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "51.3. Transient Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of transient orbital parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "51.4. 
Computation Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used for computing orbital parameters.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "52. Solar --&gt; Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "54. Volcanos --&gt; Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mdda/fossasia-2016_deep-learning
notebooks/2-CNN/6-StyleTransfer/2-Art-Style-Transfer-googlenet_theano.ipynb
mit
[ "Art Style Transfer\nThis notebook is an implementation of the algorithm described in \"A Neural Algorithm of Artistic Style\" (http://arxiv.org/abs/1508.06576) by Gatys, Ecker and Bethge. Additional details of their method are available at http://arxiv.org/abs/1505.07376 and http://bethgelab.org/deepneuralart/.\nAn image is generated which combines the content of a photograph with the \"style\" of a painting. This is accomplished by jointly minimizing the squared difference between feature activation maps of the photo and generated image, and the squared difference of feature correlation between painting and generated image. A total variation penalty is also applied to reduce high frequency noise. \nThis notebook was originally sourced from Lasagne Recipes, but has been modified to use a GoogLeNet network (pre-trained and pre-loaded), and given some features to make it easier to experiment with.", "import theano\nimport theano.tensor as T\n\nimport lasagne\nfrom lasagne.utils import floatX\n\nimport numpy as np\nimport scipy\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport os # for directory listings\nimport pickle\nimport time\n\nAS_PATH='./images/art-style'\n\nfrom model import googlenet\n\nnet = googlenet.build_model()\nnet_input_var = net['input'].input_var\nnet_output_layer = net['prob']\n", "Load the pretrained weights into the network :", "params = pickle.load(open('./data/googlenet/blvc_googlenet.pkl', 'rb'), encoding='iso-8859-1')\nmodel_param_values = params['param values']\n#classes = params['synset words']\nlasagne.layers.set_all_param_values(net_output_layer, model_param_values)\n\nIMAGE_W=224\nprint(\"Loaded Model parameters\")", "Choose the Photo to be Enhanced", "photos = [ '%s/photos/%s' % (AS_PATH, f) for f in os.listdir('%s/photos/' % AS_PATH) if not f.startswith('.')]\nphoto_i=-1 # will be incremented in next cell (i.e. to start at [0])", "Executing the cell below will iterate through the images in the ./images/art-style/photos directory, so you can choose the one you want", "photo_i += 1\nphoto = plt.imread(photos[photo_i % len(photos)])\nphoto_rawim, photo = googlenet.prep_image(photo)\nplt.imshow(photo_rawim)", "Choose the photo with the required 'Style'", "styles = [ '%s/styles/%s' % (AS_PATH, f) for f in os.listdir('%s/styles/' % AS_PATH) if not f.startswith('.')]\nstyle_i=-1 # will be incremented in next cell (i.e. 
to start at [0])", "Executing the cell below will iterate through the images in the ./images/art-style/styles directory, so you can choose the one you want", "style_i += 1\nart = plt.imread(styles[style_i % len(styles)])\nart_rawim, art = googlenet.prep_image(art)\nplt.imshow(art_rawim)", "This defines various measures of difference that we'll use to compare the current output image with the original sources.", "def plot_layout(combined):\n def no_axes():\n plt.gca().xaxis.set_visible(False) \n plt.gca().yaxis.set_visible(False) \n \n plt.figure(figsize=(9,6))\n\n plt.subplot2grid( (2,3), (0,0) )\n no_axes()\n plt.imshow(photo_rawim)\n\n plt.subplot2grid( (2,3), (1,0) )\n no_axes()\n plt.imshow(art_rawim)\n\n plt.subplot2grid( (2,3), (0,1), colspan=2, rowspan=2 )\n no_axes()\n plt.imshow(combined, interpolation='nearest')\n\n plt.tight_layout()\n\ndef gram_matrix(x):\n x = x.flatten(ndim=3)\n g = T.tensordot(x, x, axes=([2], [2]))\n return g\n\ndef content_loss(P, X, layer):\n p = P[layer]\n x = X[layer]\n \n loss = 1./2 * ((x - p)**2).sum()\n return loss\n\ndef style_loss(A, X, layer):\n a = A[layer]\n x = X[layer]\n \n A = gram_matrix(a)\n G = gram_matrix(x)\n \n N = a.shape[1]\n M = a.shape[2] * a.shape[3]\n \n loss = 1./(4 * N**2 * M**2) * ((G - A)**2).sum()\n return loss\n\ndef total_variation_loss(x):\n return (((x[:,:,:-1,:-1] - x[:,:,1:,:-1])**2 + (x[:,:,:-1,:-1] - x[:,:,:-1,1:])**2)**1.25).sum()", "Here are the GoogLeNet layers that we're going to pay attention to :", "layers = [\n # used for 'content' in photo - a mid-tier convolutional layer \n 'inception_4b/output', \n \n # used for 'style' - conv layers throughout model (not same as content one)\n 'conv1/7x7_s2', 'conv2/3x3', 'inception_3b/output', 'inception_4d/output',\n]\n#layers = [\n# # used for 'content' in photo - a mid-tier convolutional layer \n# 'pool4/3x3_s2', \n# \n# # used for 'style' - conv layers throughout model (not same as content one)\n# 'conv1/7x7_s2', 'conv2/3x3', 'pool3/3x3_s2', 'inception_5b/output',\n#]\nlayers = {k: net[k] for k in layers}", "Precompute layer activations for photo and artwork\nThis takes ~ 20 seconds", "input_im_theano = T.tensor4()\noutputs = lasagne.layers.get_output(layers.values(), input_im_theano)\n\nphoto_features = {k: theano.shared(output.eval({input_im_theano: photo}))\n for k, output in zip(layers.keys(), outputs)}\nart_features = {k: theano.shared(output.eval({input_im_theano: art}))\n for k, output in zip(layers.keys(), outputs)}\n\n# Get expressions for layer activations for generated image\ngenerated_image = theano.shared(floatX(np.random.uniform(-128, 128, (1, 3, IMAGE_W, IMAGE_W))))\n\ngen_features = lasagne.layers.get_output(layers.values(), generated_image)\ngen_features = {k: v for k, v in zip(layers.keys(), gen_features)}", "Define the overall loss / badness function", "losses = []\n\n# content loss\ncl = 10 /1000.\nlosses.append(cl * content_loss(photo_features, gen_features, 'inception_4b/output'))\n\n# style loss\nsl = 20 *1000.\nlosses.append(sl * style_loss(art_features, gen_features, 'conv1/7x7_s2'))\nlosses.append(sl * style_loss(art_features, gen_features, 'conv2/3x3'))\nlosses.append(sl * style_loss(art_features, gen_features, 'inception_3b/output'))\nlosses.append(sl * style_loss(art_features, gen_features, 'inception_4d/output'))\n#losses.append(sl * style_loss(art_features, gen_features, 'inception_5b/output'))\n\n# total variation penalty\nvp = 0.01 /1000. 
/1000.\nlosses.append(vp * total_variation_loss(generated_image))\n\ntotal_loss = sum(losses)", "The Famous Symbolic Gradient operation", "grad = T.grad(total_loss, generated_image)", "Get Ready for Optimisation by SciPy", "# Theano functions to evaluate loss and gradient - takes around 1 minute (!)\nf_loss = theano.function([], total_loss)\nf_grad = theano.function([], grad)\n\n# Helper functions to interface with scipy.optimize\ndef eval_loss(x0):\n x0 = floatX(x0.reshape((1, 3, IMAGE_W, IMAGE_W)))\n generated_image.set_value(x0)\n return f_loss().astype('float64')\n\ndef eval_grad(x0):\n x0 = floatX(x0.reshape((1, 3, IMAGE_W, IMAGE_W)))\n generated_image.set_value(x0)\n return np.array(f_grad()).flatten().astype('float64')", "Initialize with the original photo, since going from noise (the code that's commented out) takes many more iterations.", "generated_image.set_value(photo)\n#generated_image.set_value(floatX(np.random.uniform(-128, 128, (1, 3, IMAGE_W, IMAGE_W))))\n\nx0 = generated_image.get_value().astype('float64')\niteration=0", "Optimize all those losses, and show the image\nTo refine the result, just keep hitting 'run' on this cell (each iteration is about 60 seconds) :", "t0 = time.time()\n\nscipy.optimize.fmin_l_bfgs_b(eval_loss, x0.flatten(), fprime=eval_grad, maxfun=40) \n\nx0 = generated_image.get_value().astype('float64')\niteration += 1\n\nif False:\n plt.figure(figsize=(8,8))\n plt.imshow(googlenet.deprocess(x0), interpolation='nearest')\n plt.axis('off')\n plt.text(270, 25, '# {} in {:.1f}sec'.format(iteration, (float(time.time() - t0))), fontsize=14)\nelse:\n plot_layout(googlenet.deprocess(x0))\n print('Iteration {}, ran in {:.1f}sec'.format(iteration, float(time.time() - t0)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
alberto-antonietti/nest-simulator
doc/model_details/aeif_models_implementation.ipynb
gpl-2.0
[ "NEST implementation of the aeif models\nHans Ekkehard Plesser and Tanguy Fardet, 2016-09-09\nThis notebook provides a reference solution for the Adaptive Exponential Integrate and Fire\n(AEIF) neuronal model and compares it with several numerical implementation using simpler solvers.\nIn particular this justifies the change of implementation in September 2016 to make the simulation\ncloser to the reference solution.\nPosition of the problem\nBasics\nThe equations governing the evolution of the AEIF model are\n$$\\left\\lbrace\\begin{array}{rcl}\n C_m\\dot{V} &=& -g_L(V-E_L) + g_L \\Delta_T e^{\\frac{V-V_T}{\\Delta_T}} + I_e + I_s(t) -w\\\n \\tau_s\\dot{w} &=& a(V-E_L) - w\n\\end{array}\\right.$$\nwhen $V < V_{peak}$ (threshold/spike detection).\nOnce a spike occurs, we apply the reset conditions:\n$$V = V_r \\quad \\text{and} \\quad w = w + b$$\nDivergence\nIn the AEIF model, the spike is generated by the exponential divergence. In practice, this means that just before threshold crossing (threshpassing), the argument of the exponential can become very large.\nThis can lead to numerical overflow or numerical instabilities in the solver, all the more if $V_{peak}$ is large, or if $\\Delta_T$ is small.\nTested solutions\nOld implementation (before September 2016)\nThe orginal solution that was adopted was to bind the exponential argument to be smaller that 10 (ad hoc value to be close to the original implementation in BRIAN).\nAs will be shown in the notebook, this solution does not converge to the reference LSODAR solution.\nNew implementation\nThe new implementation does not bind the argument of the exponential, but the potential itself, since according to the theoretical model, $V$ should never get larger than $V_{peak}$.\nWe will show that this solution is not only closer to the reference solution in general, but also converges towards it as the timestep gets smaller.\nReference solution\nThe reference solution is implemented using the LSODAR solver which is described and compared in the following references:\n\nhttp://www.radford.edu/~thompson/RP/eventlocation.pdf (papers citing this one)\nhttp://www.sciencedirect.com/science/article/pii/S0377042712000684\nhttp://www.radford.edu/~thompson/RP/rootfinding.pdf\nhttps://computation.llnl.gov/casc/nsde/pubs/u88007.pdf\nhttp://www.cs.ucsb.edu/~cse/Files/SCE000136.pdf\nhttp://www.sciencedirect.com/science/article/pii/0377042789903348\nhttp://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.455.2976&rep=rep1&type=pdf\nhttps://theses.lib.vt.edu/theses/available/etd-12092002-105032/unrestricted/etd.pdf\n\nTechnical details and requirements\nImplementation of the functions\n\nThe old and new implementations are reproduced using Scipy and are called by the scipy_aeif function\nThe NEST implementations are not shown here, but keep in mind that for a given time resolution, they are closer to the reference result than the scipy implementation since the GSL implementation uses a RK45 adaptive solver.\nThe reference solution using LSODAR, called reference_aeif, is implemented through the assimulo package.\n\nRequirements\nTo run this notebook, you need:\n\nnumpy and scipy\nassimulo\nmatplotlib", "import numpy as np\nfrom scipy.integrate import odeint\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (15, 6)", "Scipy functions mimicking the NEST code\nRight hand side functions", "def rhs_aeif_new(y, _, p):\n '''\n New implementation bounding V < V_peak\n \n Parameters\n ----------\n y : list\n Vector containing the 
state variables [V, w]\n _ : unused var\n p : Params instance\n Object containing the neuronal parameters.\n \n Returns\n -------\n dv : double\n Derivative of V\n dw : double\n Derivative of w\n '''\n v = min(y[0], p.Vpeak)\n w = y[1]\n Ispike = 0.\n \n if p.DeltaT != 0.:\n Ispike = p.gL * p.DeltaT * np.exp((v-p.vT)/p.DeltaT)\n \n dv = (-p.gL*(v-p.EL) + Ispike - w + p.Ie)/p.Cm\n dw = (p.a * (v-p.EL) - w) / p.tau_w\n \n return dv, dw\n\n\ndef rhs_aeif_old(y, _, p):\n '''\n Old implementation bounding the argument of the\n exponential function (e_arg < 10.).\n \n Parameters\n ----------\n y : list\n Vector containing the state variables [V, w]\n _ : unused var\n p : Params instance\n Object containing the neuronal parameters.\n \n Returns\n -------\n dv : double\n Derivative of V\n dw : double\n Derivative of w\n '''\n v = y[0]\n w = y[1]\n Ispike = 0.\n \n if p.DeltaT != 0.:\n e_arg = min((v-p.vT)/p.DeltaT, 10.)\n Ispike = p.gL * p.DeltaT * np.exp(e_arg)\n \n dv = (-p.gL*(v-p.EL) + Ispike - w + p.Ie)/p.Cm\n dw = (p.a * (v-p.EL) - w) / p.tau_w\n \n return dv, dw", "Complete model", "def scipy_aeif(p, f, simtime, dt):\n '''\n Complete aeif model using scipy `odeint` solver.\n \n Parameters\n ----------\n p : Params instance\n Object containing the neuronal parameters.\n f : function\n Right-hand side function (either `rhs_aeif_old`\n or `rhs_aeif_new`)\n simtime : double\n Duration of the simulation (will run between\n 0 and tmax)\n dt : double\n Time increment.\n \n Returns\n -------\n t : list\n Times at which the neuronal state was evaluated.\n y : list\n State values associated to the times in `t`\n s : list\n Spike times.\n vs : list\n Values of `V` just before the spike.\n ws : list\n Values of `w` just before the spike\n fos : list\n List of dictionaries containing additional output\n information from `odeint`\n '''\n t = np.arange(0, simtime, dt) # time axis\n n = len(t) \n y = np.zeros((n, 2)) # V, w\n y[0, 0] = p.EL # Initial: (V_0, w_0) = (E_L, 5.)\n y[0, 1] = 5. # Initial: (V_0, w_0) = (E_L, 5.)\n s = [] # spike times \n vs = [] # membrane potential at spike before reset\n ws = [] # w at spike before step\n fos = [] # full output dict from odeint()\n \n # imitate NEST: update time-step by time-step\n for k in range(1, n):\n \n # solve ODE from t_k-1 to t_k\n d, fo = odeint(f, y[k-1, :], t[k-1:k+1], (p, ), full_output=True)\n y[k, :] = d[1, :]\n fos.append(fo)\n \n # check for threshold crossing\n if y[k, 0] >= p.Vpeak:\n s.append(t[k])\n vs.append(y[k, 0])\n ws.append(y[k, 1])\n \n y[k, 0] = p.Vreset # reset\n y[k, 1] += p.b # step\n \n return t, y, s, vs, ws, fos", "LSODAR reference solution\nSetting assimulo class", "from assimulo.solvers import LSODAR\nfrom assimulo.problem import Explicit_Problem\n\nclass Extended_Problem(Explicit_Problem):\n\n # need variables here for access\n sw0 = [ False ]\n ts_spikes = []\n ws_spikes = []\n Vs_spikes = []\n \n def __init__(self, p):\n self.p = p\n self.y0 = [self.p.EL, 5.] 
# V, w\n # reset variables\n self.ts_spikes = []\n self.ws_spikes = []\n self.Vs_spikes = []\n\n #The right-hand-side function (rhs)\n\n def rhs(self, t, y, sw):\n \"\"\"\n This is the function we are trying to simulate (aeif model).\n \"\"\"\n V, w = y[0], y[1]\n Ispike = 0.\n \n if self.p.DeltaT != 0.:\n Ispike = self.p.gL * self.p.DeltaT * np.exp((V-self.p.vT)/self.p.DeltaT)\n dotV = ( -self.p.gL*(V-self.p.EL) + Ispike + self.p.Ie - w ) / self.p.Cm\n dotW = ( self.p.a*(V-self.p.EL) - w ) / self.p.tau_w\n return np.array([dotV, dotW])\n\n # Sets a name to our function\n name = 'AEIF_nosyn'\n\n # The event function\n def state_events(self, t, y, sw):\n \"\"\"\n This is our function that keeps track of our events. When the sign\n of any of the events has changed, we have an event.\n \"\"\"\n event_0 = -5 if y[0] >= self.p.Vpeak else 5 # spike\n if event_0 < 0:\n if not self.ts_spikes:\n self.ts_spikes.append(t)\n self.Vs_spikes.append(y[0])\n self.ws_spikes.append(y[1])\n elif self.ts_spikes and not np.isclose(t, self.ts_spikes[-1], 0.01):\n self.ts_spikes.append(t)\n self.Vs_spikes.append(y[0])\n self.ws_spikes.append(y[1])\n return np.array([event_0])\n\n #Responsible for handling the events.\n def handle_event(self, solver, event_info):\n \"\"\"\n Event handling. This functions is called when Assimulo finds an event as\n specified by the event functions.\n \"\"\"\n ev = event_info\n event_info = event_info[0] # only look at the state events information.\n if event_info[0] > 0:\n solver.sw[0] = True\n solver.y[0] = self.p.Vreset\n solver.y[1] += self.p.b\n else:\n solver.sw[0] = False\n\n def initialize(self, solver):\n solver.h_sol=[]\n solver.nq_sol=[]\n\n def handle_result(self, solver, t, y):\n Explicit_Problem.handle_result(self, solver, t, y)\n # Extra output for algorithm analysis\n if solver.report_continuously:\n h, nq = solver.get_algorithm_data()\n solver.h_sol.extend([h])\n solver.nq_sol.extend([nq])", "LSODAR reference model", "def reference_aeif(p, simtime):\n '''\n Reference aeif model using LSODAR.\n \n Parameters\n ----------\n p : Params instance\n Object containing the neuronal parameters.\n f : function\n Right-hand side function (either `rhs_aeif_old`\n or `rhs_aeif_new`)\n simtime : double\n Duration of the simulation (will run between\n 0 and tmax)\n dt : double\n Time increment.\n \n Returns\n -------\n t : list\n Times at which the neuronal state was evaluated.\n y : list\n State values associated to the times in `t`\n s : list\n Spike times.\n vs : list\n Values of `V` just before the spike.\n ws : list\n Values of `w` just before the spike\n h : list\n List of the minimal time increment at each step.\n '''\n #Create an instance of the problem\n exp_mod = Extended_Problem(p) #Create the problem\n exp_sim = LSODAR(exp_mod) #Create the solver\n\n exp_sim.atol=1.e-8\n exp_sim.report_continuously = True\n exp_sim.store_event_points = True\n\n exp_sim.verbosity = 30\n\n #Simulate\n t, y = exp_sim.simulate(simtime) #Simulate 10 seconds\n \n return t, y, exp_mod.ts_spikes, exp_mod.Vs_spikes, exp_mod.ws_spikes, exp_sim.h_sol", "Set the parameters and simulate the models\nParams (chose a dictionary)", "# Regular spiking\naeif_param = {\n 'V_reset': -58.,\n 'V_peak': 0.0,\n 'V_th': -50.,\n 'I_e': 420.,\n 'g_L': 11.,\n 'tau_w': 300.,\n 'E_L': -70.,\n 'Delta_T': 2.,\n 'a': 3.,\n 'b': 0.,\n 'C_m': 200.,\n 'V_m': -70., #! must be equal to E_L\n 'w': 5., #! 
must be equal to 5.\n 'tau_syn_ex': 0.2\n}\n\n# Bursting\naeif_param2 = {\n 'V_reset': -46.,\n 'V_peak': 0.0,\n 'V_th': -50.,\n 'I_e': 500.0,\n 'g_L': 10.,\n 'tau_w': 120.,\n 'E_L': -58.,\n 'Delta_T': 2.,\n 'a': 2.,\n 'b': 100.,\n 'C_m': 200.,\n 'V_m': -58., #! must be equal to E_L\n 'w': 5., #! must be equal to 5.\n}\n\n# Close to chaos (use resol < 0.005 and simtime = 200)\naeif_param3 = {\n 'V_reset': -48.,\n 'V_peak': 0.0,\n 'V_th': -50.,\n 'I_e': 160.,\n 'g_L': 12.,\n 'tau_w': 130.,\n 'E_L': -60.,\n 'Delta_T': 2.,\n 'a': -11.,\n 'b': 30.,\n 'C_m': 100.,\n 'V_m': -60., #! must be equal to E_L\n 'w': 5., #! must be equal to 5.\n}\n\nclass Params(object):\n '''\n Class giving access to the neuronal\n parameters.\n '''\n def __init__(self):\n self.params = aeif_param\n self.Vpeak = aeif_param[\"V_peak\"]\n self.Vreset = aeif_param[\"V_reset\"]\n self.gL = aeif_param[\"g_L\"]\n self.Cm = aeif_param[\"C_m\"]\n self.EL = aeif_param[\"E_L\"]\n self.DeltaT = aeif_param[\"Delta_T\"]\n self.tau_w = aeif_param[\"tau_w\"]\n self.a = aeif_param[\"a\"]\n self.b = aeif_param[\"b\"]\n self.vT = aeif_param[\"V_th\"]\n self.Ie = aeif_param[\"I_e\"]\n \np = Params()", "Simulate the 3 implementations", "# Parameters of the simulation\nsimtime = 100.\nresol = 0.01\n\nt_old, y_old, s_old, vs_old, ws_old, fo_old = scipy_aeif(p, rhs_aeif_old, simtime, resol)\nt_new, y_new, s_new, vs_new, ws_new, fo_new = scipy_aeif(p, rhs_aeif_new, simtime, resol)\nt_ref, y_ref, s_ref, vs_ref, ws_ref, h_ref = reference_aeif(p, simtime)", "Plot the results\nZoom out", "fig, ax = plt.subplots()\nax2 = ax.twinx()\n\n# Plot the potentials\nax.plot(t_ref, y_ref[:,0], linestyle=\"-\", label=\"V ref.\")\nax.plot(t_old, y_old[:,0], linestyle=\"-.\", label=\"V old\")\nax.plot(t_new, y_new[:,0], linestyle=\"--\", label=\"V new\")\n\n# Plot the adaptation variables\nax2.plot(t_ref, y_ref[:,1], linestyle=\"-\", c=\"k\", label=\"w ref.\")\nax2.plot(t_old, y_old[:,1], linestyle=\"-.\", c=\"m\", label=\"w old\")\nax2.plot(t_new, y_new[:,1], linestyle=\"--\", c=\"y\", label=\"w new\")\n\n# Show\nax.set_xlim([0., simtime])\nax.set_ylim([-65., 40.])\nax.set_xlabel(\"Time (ms)\")\nax.set_ylabel(\"V (mV)\")\nax2.set_ylim([-20., 20.])\nax2.set_ylabel(\"w (pA)\")\nax.legend(loc=6)\nax2.legend(loc=2)\nplt.show()", "Zoom in", "fig, ax = plt.subplots()\nax2 = ax.twinx()\n\n# Plot the potentials\nax.plot(t_ref, y_ref[:,0], linestyle=\"-\", label=\"V ref.\")\nax.plot(t_old, y_old[:,0], linestyle=\"-.\", label=\"V old\")\nax.plot(t_new, y_new[:,0], linestyle=\"--\", label=\"V new\")\n\n# Plot the adaptation variables\nax2.plot(t_ref, y_ref[:,1], linestyle=\"-\", c=\"k\", label=\"w ref.\")\nax2.plot(t_old, y_old[:,1], linestyle=\"-.\", c=\"y\", label=\"w old\")\nax2.plot(t_new, y_new[:,1], linestyle=\"--\", c=\"m\", label=\"w new\")\n\nax.set_xlim([90., 92.])\nax.set_ylim([-65., 40.])\nax.set_xlabel(\"Time (ms)\")\nax.set_ylabel(\"V (mV)\")\nax2.set_ylim([17.5, 18.5])\nax2.set_ylabel(\"w (pA)\")\nax.legend(loc=5)\nax2.legend(loc=2)\nplt.show()", "Compare properties at spike times", "print(\"spike times:\\n-----------\")\nprint(\"ref\", np.around(s_ref, 3)) # ref lsodar\nprint(\"old\", np.around(s_old, 3))\nprint(\"new\", np.around(s_new, 3))\n\nprint(\"\\nV at spike time:\\n---------------\")\nprint(\"ref\", np.around(vs_ref, 3)) # ref lsodar\nprint(\"old\", np.around(vs_old, 3))\nprint(\"new\", np.around(vs_new, 3))\n\nprint(\"\\nw at spike time:\\n---------------\")\nprint(\"ref\", np.around(ws_ref, 3)) # ref lsodar\nprint(\"old\", np.around(ws_old, 
3))\nprint(\"new\", np.around(ws_new, 3))", "Size of minimal integration timestep", "plt.semilogy(t_ref, h_ref, label='Reference')\nplt.semilogy(t_old[1:], [d['hu'] for d in fo_old], linewidth=2, label='Old')\nplt.semilogy(t_new[1:], [d['hu'] for d in fo_new], label='New')\n\nplt.legend(loc=6)\nplt.show();", "Convergence towards LSODAR reference with step size\nZoom out", "plt.plot(t_ref, y_ref[:,0], label=\"V ref.\")\nresolutions = (0.1, 0.01, 0.001)\ndi_res = {}\n\nfor resol in resolutions:\n t_old, y_old, _, _, _, _ = scipy_aeif(p, rhs_aeif_old, simtime, resol)\n t_new, y_new, _, _, _, _ = scipy_aeif(p, rhs_aeif_new, simtime, resol)\n di_res[resol] = (t_old, y_old, t_new, y_new)\n plt.plot(t_old, y_old[:,0], linestyle=\":\", label=\"V old, r={}\".format(resol))\n plt.plot(t_new, y_new[:,0], linestyle=\"--\", linewidth=1.5, label=\"V new, r={}\".format(resol))\nplt.xlim(0., simtime)\nplt.xlabel(\"Time (ms)\")\nplt.ylabel(\"V (mV)\")\nplt.legend(loc=2)\nplt.show();", "Zoom in", "plt.plot(t_ref, y_ref[:,0], label=\"V ref.\")\nfor resol in resolutions:\n t_old, y_old = di_res[resol][:2]\n t_new, y_new = di_res[resol][2:]\n plt.plot(t_old, y_old[:,0], linestyle=\"--\", label=\"V old, r={}\".format(resol))\n plt.plot(t_new, y_new[:,0], linestyle=\"-.\", linewidth=2., label=\"V new, r={}\".format(resol))\nplt.xlim(90., 92.)\nplt.ylim([-62., 2.])\nplt.xlabel(\"Time (ms)\")\nplt.ylabel(\"V (mV)\")\nplt.legend(loc=2)\nplt.show();" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rvuduc/cse6040-ipynbs
25--logreg.ipynb
bsd-3-clause
[ "CSE 6040, Fall 2015 [25]: Logistic regression\nBeyond regression, another important data analysis task is classification, in which you are given a set of labeled data points and you wish to learn a model of the labels. One technique you can apply is logistic regression, the topic of today's lab.\n\nAlthough it's called \"regression\" it is really a model for classification.\n\nWe will focus today on the case of binary classification, where there are $c=2$ possible labels. We will denote these labels by \"0\" or \"1.\" However, the ideas can be generalized to the multiclass ($c > 2$) case.\nSome of today's code snippets use Plotly, so you may need to refer back to how to make plots in Plotly. The most important one is how to log-in to the service, for which you'll need to look up your Plotly API key. Anyway, you may need to refer to the references below. \n\nOur Jupyter notebook where we did stuff using plotly: ipynb\nPlotly Python reference on line and scatter plots: https://plot.ly/python/line-and-scatter/\n\nAlso, this lab builds on the iterative numerical optimization idea from Lab 24, which is known as gradient ascent or gradient descent (also, steepest ascent/descent), depending on whether one is maximizing or minimizing some quantity.\nPreliminaries", "import pandas as pd\nimport seaborn as sns\nimport numpy as np\nfrom IPython.display import display\n\n%matplotlib inline\n\nimport plotly.plotly as py\nfrom plotly.graph_objs import *\n\n# @YOUSE: Fill in your credentials (user ID, API key) for Plotly here\npy.sign_in ('USERNAME', 'APIKEY')", "Note about slicing columns from a Numpy matrix\nIf you want to extract a column i from a Numpy matrix A and keep it as a column vector, you need to use the slicing notation, A[:, i:i+1]. Not doing so can lead to subtle bugs. To see why, compare the following slices.", "A = np.array ([[1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]\n ], dtype=float)\n\nprint \"A[:, :] ==\\n\", A\nprint \"\\nA[:, 0] ==\\n\", A[:, 0]\nprint \"\\nA[:, 2:3] == \\n\", A[:, 2:3]\n\nprint \"\\nAdd columns 0 and 2?\"\na0 = A[:, 0]\na1 = A[:, 2:3]\nprint a0 + a1", "Sample data: Rock lobsters!\nAs a concrete example of a classification task, consider the results of this experiment. Some marine biologists took a bunch of lobsters of varying sizes (size being a proxy for stage of development), and then tethered and exposed these lobsters to a variety of predators. The outcome that they measured is whether the lobsters survived or not.\nIn this case, the data consists of a set of points, one point per lobster, where there is a single predictor (size) and the response is whether the lobsters survived (label \"1\") or died (label \"0\").\n\nFor the original paper, see this link. 
I can only imagine that this image is what marine biologists look like when experimenting with lobsters.\n\nHere is a plot of the raw data.", "# http://www.stat.ufl.edu/~winner/data/lobster_survive.txt\ndf_lobsters = pd.read_table ('http://www.stat.ufl.edu/~winner/data/lobster_survive.dat',\n sep=r'\\s+', names=['CarapaceLen', 'Survived'])\ndisplay (df_lobsters.head ())\nprint \"...\"\ndisplay (df_lobsters.tail ())\n\nsns.violinplot (x=\"Survived\", y=\"CarapaceLen\",\n data=df_lobsters, inner=\"quart\")", "Although the classes are distinct in the aggregate, where the median carapace (outer shell) length is around 36 mm for the lobsters that died and 42 mm for those that survived, they are not cleanly separable.\nNotation\nTo develop some intuition and a method, let's now turn to a more general setting and work on synthetic data sets.\nLet the data consist of $m$ data points, where each point is $d$-dimensional. Each dimension corresponds to some continuously-valued predictor. In addition, each data point will have a binary label, whose value is either 0 or 1.\nDenote each point by an augumented vector, $x_i$, such that\n$$\n\\begin{array}{rcl}\n x_i\n & \\equiv &\n \\left(\\begin{array}{c}\n 1 \\\n x_{i,1} \\\n x_{i,2} \\\n \\vdots \\\n x_{i,d}\n \\end{array}\\right)\n .\n\\end{array}\n$$\nThat is, the point is the $d$ coordinates augmented by an initial dummy coordinate whose value is 1. This convention is similar to what we did in linear regression.\nWe can also stack these points as rows of a matrix, $X$, again, just as we did in regression:\n$$\n\\begin{array}{rcl}\n X \\equiv\n \\left(\\begin{array}{c}\n x_0^T \\\n x_1^T \\\n \\vdots \\\n x_{m-1}^T\n \\end{array}\\right)\n & = &\n \\left(\\begin{array}{ccccc}\n 1 & x_{0,1} & x_{0,2} & \\cdots & x_{0,d} \\\n 1 & x_{1,1} & x_{1,2} & \\cdots & x_{1,d} \\\n & & & \\vdots & \\\n 1 & x_{m-1,1} & x_{m-1,2} & \\cdots & x_{m-1,d} \\\n \\end{array}\\right).\n\\end{array}\n$$\nWe will take the labels to be a binary column vector, $l \\equiv \\left(l_0, l_1, \\ldots, l_{m-1}\\right)^T$.\nAn example\nWe've pre-generated a synethetic data set consisting of labeled data points. Let's download and inspect it, first as a table and then visually.", "df = pd.read_csv ('http://vuduc.org/cse6040/logreg_points_train.csv')\n\ndisplay (df.head ())\nprint \"...\"\ndisplay (df.tail ())", "Next, let's extract the coordinates as a Numpy matrix of points and the labels as a Numpy column vector labels. Mathematically, the points matrix corresponds to $X$ and the labels vector corresponds to $l$.", "points = np.insert (df.as_matrix (['x_1', 'x_2']), 0, 1.0, axis=1)\nlabels = df.as_matrix (['label'])\n\nprint \"First and last 5 points:\\n\", '='*23, '\\n', points[:5], '\\n...\\n', points[-5:], '\\n'\nprint \"First and last 5 labels:\\n\", '='*23, '\\n', labels[:5], '\\n...\\n', labels[-5:], '\\n'", "Next, let's plot the data as a scatter plot using Plotly. To do so, we need to create separate traces, one for each cluster. 
Below, we've provided you with a function, make_2d_scatter_traces(), which does exactly that, given a labeled data set as a (points, labels) pair.", "def assert_points_2d (points):\n \"\"\"Checks the dimensions of a given point set.\"\"\"\n assert type (points) is np.ndarray\n assert points.ndim == 2\n assert points.shape[1] == 3\n \ndef assert_labels (labels):\n \"\"\"Checks the type of a given set of labels (must be integral).\"\"\"\n assert labels is not None\n assert (type (labels) is np.ndarray) or (type (labels) is list)\n\ndef extract_clusters (points, labels):\n \"\"\"\n Given a list or array of labeled augmented points, this\n routine returns a pair of lists, (C[0:k], L[0:k]), where\n C[i] is an array of all points whose labels are L[i].\n \"\"\"\n assert_points_2d (points)\n assert_labels (labels)\n\n id_label_pairs = list (enumerate (set (labels.flatten ())))\n labels_map = dict ([(v, i) for (i, v) in id_label_pairs])\n \n # Count how many points belong to each cluster\n counts = [0] * len (labels_map)\n for l in labels.flatten ():\n counts[labels_map[l]] += 1\n \n # Allocate space for each cluster\n clusters = [np.zeros ((k, 3)) for k in counts]\n \n # Separate the points by cluster\n counts = [0] * len (labels_map)\n for (x, l) in zip (points, labels.flatten ()):\n l_id = labels_map[l]\n k = counts[l_id]\n clusters[l_id][k, :] = x\n counts[l_id] += 1\n \n # Generate cluster labels\n cluster_labels = [None] * len (labels_map)\n for (l, i) in labels_map.items ():\n cluster_labels[i] = l\n \n return (clusters, cluster_labels)\n\ndef make_2d_scatter_traces (points, labels=None):\n \"\"\"\n Given an augmented point set, possibly labeled,\n returns a list Plotly-compatible marker traces.\n \"\"\"\n assert_points_2d (points)\n \n traces = []\n if labels is None:\n traces.append (Scatter (x=points[:, 1:2], y=points[:, 2:3], mode='markers'))\n else:\n assert_labels (labels)\n (clusters, cluster_labels) = extract_clusters (points, labels)\n for (c, l) in zip (clusters, cluster_labels):\n traces.append (Scatter (x=c[:, 1:2], y=c[:, 2:3],\n mode='markers',\n name=\"%s\" % str (l)))\n return traces\n\nprint \"Number of points:\", len (points)\n\ntraces = make_2d_scatter_traces (points, labels)\npy.iplot (traces)", "Linear discriminants\nSuppose you think that the boundary between the two clusters may be represented by a line. For the synthetic data example above, I hope you'll agree that such a model is not a terrible one.\nThis line is referred to as a linear discriminant. Any point $x$ on this line may be described by $\\theta^T x$, where $\\theta$ is a vector of coefficients:\n$$\n\\begin{array}{rcl}\n \\theta\n & \\equiv &\n \\left(\\begin{array}{c} \\theta_0 \\ \\theta_1 \\ \\vdots \\ \\theta_d \\end{array}\\right)\n .\n \\\n\\end{array}\n$$\nFor example, consider the case of 2-D points ($d=2$): the condition that $\\theta^T x = 0$ means that\n$$\n\\begin{array}{rrcl}\n &\n \\theta^T x = 0\n & = & \\theta_0 + \\theta_1 x_1 + \\theta_2 x_2 \\\n \\implies\n & x_2\n & = & -\\frac{\\theta_0}{\\theta_2} - \\frac{\\theta_1}{\\theta_2} x_1.\n\\end{array}\n$$\nSo that describes points on the line. However, given any point $x$ in the $d$-dimensional space that is not on the line, $\\theta^T x$ still produces a value: that value will be positive on one side of the line ($\\theta^T x > 0$) or negative on the other ($\\theta^T x < 0$).\nConsequently, here is one simple way to use the linear discriminant function $\\theta^T x$ to generate a label: just reinterpret its sign! 
In more mathematical terms, the function that converts, say, a positive value to the label \"1\" and all other values to the label \"0\" is called the heaviside function:\n$$\n\\begin{array}{rcl}\n H(y) & \\equiv & \\left{\\begin{array}{ll}\n 1 & \\mathrm{if}\\ y > 0\n \\\n 0 & \\mathrm{if}\\ y \\leq 0\n \\end{array}\\right..\n\\end{array}\n$$\nExercise. This exercise has three parts.\n1) Given the a $m \\times (d+1)$ matrix of augmented points (i.e., the $X$ matrix) and the vector $\\theta$, implement a function to compute the value of the linear discriminant at each point. That is, the function should return a (column) vector $y$ where the $y_i = \\theta^T x_i$.\n2) Implement the heaviside function, $H(y)$. Your function should allow for an arbitrary matrix of input values, and should apply the heaviside function elementwise.\n\nHint: Consider what Numpy's sign() function produces, and transform the result accordingly.\n\n3) For the synthetic data you loaded above, determine a value of $\\theta$ for which $H(\\theta^T x)$ \"best\" separates the two clusters. To help you out, we've provided some Plotly code that draws the discriminant boundary and also applies $H(\\theta^T x)$ to each point, coloring the point by whether it is correctly classified. (The code also prints the number of correcty classified points.) So, you just need to try different values of $\\theta$ until you find something that is \"close.\"\n\nHint: We found a line that commits just 5 errors, out of 375 possible points.", "def lin_discr (X, theta):\n # @YOUSE: Part 1 -- Complete this function.\n pass\n\ndef heaviside (Y):\n # @YOUSE: Part 2 -- Complete this function\n pass", "The following is the code to generate the plot; look for the place to try different values of $\\theta$ a couple of code cells below.", "def heaviside_int (Y):\n \"\"\"Evaluates the heaviside function, but returns integer values.\"\"\"\n return heaviside (Y).astype (dtype=int)\n\ndef assert_discriminant (theta, d=2):\n \"\"\"\n Verifies that the given coefficients correspond to a\n d-dimensional linear discriminant ($\\theta$).\n \"\"\"\n assert len (theta) == (d+1)\n \ndef gen_lin_discr_labels (points, theta, fun=heaviside_int):\n \"\"\"\n Given a set of points and the coefficients of a linear\n discriminant, this function returns a set of labels for\n the points with respect to this discriminant.\n \"\"\"\n assert_points_2d (points)\n assert_discriminant (theta)\n \n score = lin_discr (points, theta)\n labels = fun (score)\n return labels\n\ndef gen_lin_discr_trace (points, theta, name='Discriminant'):\n \"\"\"\n Given a set of points and the coefficients of a linear\n discriminant, this function returns a set of Plotly\n traces that show how the points are classified as well\n as the location of the discriminant boundary.\n \"\"\"\n assert_points_2d (points)\n assert_discriminant (theta)\n \n x1 = [min (points[:, 1]), max (points[:, 1])]\n m = -theta[1] / theta[2]\n b = -theta[0] / theta[2]\n x2 = [(b + m*x) for x in x1]\n \n return Scatter (x=x1, y=x2, mode='lines', name=name)\n\ndef np_row_vec (init_list):\n \"\"\"Generates a Numpy-compatible row vector.\"\"\"\n return np.array (init_list, order='F', ndmin=2)\n\ndef np_col_vec (init_list):\n \"\"\"Generates a Numpy-compatible column vector.\"\"\"\n return np_row_vec (init_list).T\n\ndef gen_labels_part3 (points, labels, theta):\n your_labels = gen_lin_discr_labels (points, theta)\n return (labels == your_labels)\n\n# @YOUSE: Part 3 -- Select parameters for theta!\ntheta = np_col_vec ([0., -1., 
3.])\n\n# Generate 0/1 labels for your discriminant:\nis_correct = gen_labels_part3 (points, labels, theta)\n\nprint \"Number of misclassified points:\", (len (points) - sum (is_correct))[0]\nprint \"\\n(Run the code cell below to visualize the results.)\"\n\n# Visually inspect the above results\ntraces = make_2d_scatter_traces (points, is_correct)\ntraces.append (gen_lin_discr_trace (points, theta))\n\n# Plot it!\nlayout = Layout (xaxis=dict (range=[-1.25, 2.25]),\n yaxis=dict (range=[-3.25, 2.25]))\nfig = Figure (data=traces, layout=layout)\npy.iplot (fig)", "An alternative linear discriminant: the logistic or \"sigmoid\" function\nThe heaviside function, $H(\\theta^T x)$, enforces a sharp boundary between classes around the $\\theta^T x=0$ line. The following code produces a contour plot to show this effect.", "# Use Numpy's handy meshgrid() to create a regularly-spaced grid of values.\n# http://docs.scipy.org/doc/numpy/reference/generated/numpy.meshgrid.html\n\nx1 = np.linspace (-2., +2., 100)\nx2 = np.linspace (-2., +2., 100)\nx1_grid, x2_grid = np.meshgrid (x1, x2)\nh_grid = heaviside (theta[0] + theta[1]*x1_grid + theta[2]*x2_grid)\n\ntrace_grid = Contour (x=x1, y=x2, z=h_grid)\npy.iplot ([trace_grid])", "However, as the lobsters example suggests, real data are not likely to be cleanly separable, especially when the number of features we have at our disposal is relatively small.\nSince the labels are binary, a natural idea is to give the classification problem a probabilistic interpretation. The logistic function provides at least one way to do so:\n$$\n\\begin{array}{rcl}\n G(y) & \\equiv & \\frac{1}{1 + e^{-y}}\n\\end{array}\n$$\n\nThis function is also sometimes called the logit or sigmoid function.\n\nThe logistic function takes any value in the range $(-\\infty, +\\infty)$ and produces a value in the range $(0, 1)$. Thus, given a value $x$, we can interpret it as a conditional probability that the label is 1.\nExercise. Consider a set of 1-D points generated by a mixture of Gaussians. That is, suppose that there are two Gaussian distributions over the 1-dimensional variable, $x \\in (-\\infty, +\\infty)$, that have the same variance ($\\sigma^2$) but different means ($\\mu_0$ and $\\mu_1$). Show that the conditional probability of observing a point labeled \"1\" given $x$ may be written as,\n$$\\mathrm{Pr}\\left[l=1\\,|\\,x\\right]\n \\propto \\displaystyle \\frac{1}{1 + e^{-(\\theta_0 + \\theta_1 x)}},$$\nfor a suitable definition of $\\theta_0$ and $\\theta_1$. To carry out this computation, recall Bayes's rule (also: Bayes's theorem):\n$$\n\\begin{array}{rcl}\n \\mathrm{Pr}[l=1\\,|\\,x]\n & = &\n \\dfrac{\\mathrm{Pr}[x\\,|\\,l=1] \\, \\mathrm{Pr}[l=1]}\n {\\mathrm{Pr}[x\\,|\\,l=0] \\, \\mathrm{Pr}[l=0]\n + \\mathrm{Pr}[x\\,|\\,l=1] \\, \\mathrm{Pr}[l=1]\n }.\n\\end{array}\n$$\nYou may assume the prior probabilities of observing a 0 or 1 are given by $\\mathrm{Pr}[l=0] \\equiv p_0$ and $\\mathrm{Pr}[l=1] \\equiv p_1$.\n\nTime and interest permitting, we'll solve this exercise on the whiteboard.\n\nExercise. Implement the logistic function. Inspect the resulting plot of $G(y)$ in 1-D and then the contour plot of $G(\\theta^T{x})$. 
Your function should accept a Numpy matrix of values, Y, and apply the sigmoid elementwise.", "def logistic (Y):\n # @YOUSE: Implement the logistic function G(y) here\n pass", "Plot of your implementation in 1D:", "x_logit_1d = np.linspace (-6.0, +6.0, 101)\ny_logit_1d = logistic (x_logit_1d)\ntrace_logit_1d = Scatter (x=x_logit_1d, y=y_logit_1d)\npy.iplot ([trace_logit_1d])", "Contour plot of your function:", "g_grid = logistic (theta[0] + theta[1]*x1_grid + theta[2]*x2_grid)\n\ntrace_logit_grid = Contour (x=x1, y=x2, z=g_grid)\npy.iplot ([trace_logit_grid])", "Exercise. Verify the following properties of the logistic function, $G(y)$.\n$$\n\\begin{array}{rcll}\n G(y)\n & = & \\frac{e^y}{e^y + 1}\n & \\mathrm{(P1)} \\\n G(-y)\n & = & 1 - G(y)\n & \\mathrm{(P2)} \\\n \\dfrac{dG}{dy}\n & = & G(y) G(-y)\n & \\mathrm{(P3)} \\\n {\\dfrac{d}{dy}} {\\left[ \\ln G(y) \\right]}\n & = & G(-y)\n & \\mathrm{(P4)} \\\n {\\dfrac{d}{dy}} {\\ln \\left[ 1 - G(y) \\right]}\n & = & -G(y)\n & \\mathrm{(P5)}\n\\end{array}\n$$\nDetermining $\\theta$ via Maximum Likelihood Estimation\nPreviously, you determined $\\theta$ for our synthetic dataset experimentally. Can you compute a good $\\theta$ automatically? One of the standard techniques in statistics is to perform a maximum likelihood estimation (MLE) of a model's parameters, $\\theta$.\nIndeed, MLE is basis for the \"statistical\" way to derive the normal equations in the case of linear regression, though that is of course not how we encountered it in this class.\n\"Likelihood\" as an objective function\nMLE derives from the following idea. Consider the joint probability of observing all of the labels, given the points and the parameters, $\\theta$:\n$$\n \\mathrm{Pr}[l\\,|\\,X, \\theta].\n$$\nSuppose these observations are independent and identically distributed (i.i.d.). Then the joint probability can be factored as the product of individual probabilities,\n$$\n\\begin{array}{rcl}\n \\mathrm{Pr}[l\\,|\\,X,\\theta] = \\mathrm{Pr}[l_0, \\ldots, l_{m-1}\\,|\\,x_0, \\ldots, x_{m-1}, \\theta]\n & = & \\mathrm{Pr}[l_0\\,|\\,x_0, \\theta] \\cdots \\mathrm{Pr}[l_{m-1}\\,|\\,x_{m-1}, \\theta] \\\n & = & \\displaystyle \\prod_{i=0}^{m-1} \\mathrm{Pr}[l_i\\,|\\,x_i,\\theta].\n\\end{array}\n$$\nThe maximum likelihood principle says that you should try to choose a parameter $\\theta$ that maximizes the chances (\"likelihood\") of seeing these particular observations. Thus, we can simply reinterpret the preceding probability as an objective function to optimize. 
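As a brief, hedged numerical aside (the per-point probabilities below are made up purely for illustration): multiplying many probabilities together directly underflows to zero in floating point long before the data set gets large, which is one practical reason the very next step switches to logarithms.\n```python\nimport numpy as np\np = np.full(2000, 0.5)      # 2000 hypothetical per-point probabilities\nprint(np.prod(p))           # 0.0 -- the raw product underflows\nprint(np.sum(np.log(p)))    # about -1386.29 -- the log of the product is perfectly representable\n```\n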
Mathematically, it is equivalent and convenient to consider the logarithm of the likelihood, or log-likelihood, as the objective function, defining it by,\n$$\n\\begin{array}{rcl}\n \\mathcal{L}(\\theta; l, X)\n & \\equiv &\n \\log \\left{ \\displaystyle \\prod_{i=0}^{m-1} \\mathrm{Pr}[l_i\\,|\\,x_i,\\theta] \\right} \\\n & = &\n \\displaystyle \\sum_{i=0}^{m-1} \\log \\mathrm{Pr}[l_i\\,|\\,x_i,\\theta].\n\\end{array}\n$$\n\nWe are using the symbol $\\log$, which could be taken in any convenient base, such as the natural logarithm ($\\ln y$) or the information theoretic base-two logarithm ($\\log_2 y$).\n\nThe MLE procedure then consists of two steps:\n\nFor the problem at hand, determine a suitable choice for $\\mathrm{Pr}[l_i\\,|\\,x_i,\\theta]$.\nRun any optimization procedure to find the $\\theta$ that maximizes $\\mathcal{L}(\\theta; l, X)$.\n\nExample: Logistic regression\nLet's say you have decided that the logistic function, $G(\\theta^T x_i)$, is a good model of the probability of producing a label $l_i$ given the point $x_i$. Under the i.i.d. assumption, we can interpret the label $l_i$ as being the result of a Bernoulli trial (e.g., a biased coin flip), where the probability of success ($l_i=1$) is defined as $g_i = g_i(\\theta) \\equiv G(\\theta^T x_i)$. Thus,\n$$\n\\begin{array}{rcl}\n \\mathrm{Pr}[l_i \\, | \\, x_i, \\theta]\n & \\equiv & g_i^{l_i} \\cdot \\left(1 - g_i\\right)^{1 - l_i}.\n\\end{array}\n$$\nThe log-likelihood in turn becomes,\n$$\n\\begin{array}{rcl}\n \\mathcal{L}(\\theta; l, X)\n & = & \\displaystyle\n \\sum_{i=0}^{m-1} l_i \\log g_i + (1-l_i) \\log (1-g_i) \\\n & = &\n l^T \\log g + (1-l)^T \\log (1-g),\n\\end{array}\n$$\nwhere $g \\equiv (g_0, g_1, \\ldots, g_{m-1})^T$.\nOptimizing the log-likelihood via gradient (steepest) ascent\nTo optimize the log-likelihood with respect to the parameters, $\\theta$, you'd like to do the moral equivalent of taking its derivative, setting it to zero, and then solving for $\\theta$.\nFor example, recall that in the case of linear regression via least squares minimization, carrying out this process produced an analytic solution for the parameters, which was to solve the normal equations.\nUnfortunately, for logistic regression---or for most log-likelihoods you are likely to ever write down---you cannot usually derive an analytic solution. Therefore, you will need to resort to numerical optimization procedures.\nThe simplest such procedure is gradient ascent (or steepest ascent), in the case of maximizing some function; if instead you are minimizing the function, then the equivalent procedure is gradient (steepest) descent. The idea is to start with some guess, compute the derivative of the objective function at that guess, and then move in the direction of steepest descent. As it happens, the direction of steepest descent is given by the gradient. More formally, the procedure applied to the log-likelihood is:\n\nStart with some initial guess, $\\theta(0)$.\nAt each iteration $t \\geq 0$ of the procedure, let $\\theta(t)$ be the current guess.\nCompute the direction of steepest descent by evaluating the gradient, $\\Delta_t \\equiv \\nabla_{\\theta(t)} \\left{\\mathcal{L}(\\theta(t); l, X)\\right}$.\nTake a step in the direction of the gradient, $\\theta(t+1) \\leftarrow \\theta(t) + \\phi \\Delta_t$, where $\\phi$ is a suitably chosen fudge factor.\n\nThis procedure should smell eerily like the one in Lab 24! 
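To make the recipe concrete before applying it to the log-likelihood, here is a hedged toy sketch that maximizes the simple concave function $f(\\theta) = -(\\theta - 3)^2$, whose gradient is $-2(\\theta - 3)$; the step size is chosen arbitrarily and nothing here depends on the logistic-regression objective.\n```python\ntheta_toy = 0.0                              # step 1: initial guess\nPHI_TOY = 0.1                                # fudge factor (step size)\nfor t in range(100):\n    grad_toy = -2.0 * (theta_toy - 3.0)      # gradient of -(theta - 3)^2 at the current guess\n    theta_toy = theta_toy + PHI_TOY * grad_toy   # step in the direction of the gradient\nprint(theta_toy)                             # approaches the maximizer, 3.0\n```\nThe implementation exercise further below asks you to run the same kind of loop with the gradient of $\\mathcal{L}(\\theta; l, X)$ in place of the toy gradient. 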
And just as in Lab 24, the tricky bit is how to choose $\\phi$, the principled choice of which we will defer until another lab.\n\nOne additional and slight distinction between this procedure and the Lab 24 procedure is that here we are optimizing using the full dataset, rather than processing data points one at a time. (That is, the step iteration variable $t$ used above is not used in exactly the same way as the step iteration $k$ was used in Lab 24.)\nAnother question is, how do we know this procedure will converge to the global maximum, rather than, say, a local maximum? For that you need a deeper analysis of a specific $\\mathcal{L}(\\theta; l, X)$, to show, for instance, that it is convex in $\\theta$.\n\nExample: A gradient ascent algorithm for logistic regression\nLet's apply the gradient ascent procedure to the logistic regression problem, in order to determine a good $\\theta$.\nExercise. Show the following:\n$$\n\\begin{array}{rcl}\n \\nabla_\\theta \\left{\\mathcal{L}(\\theta; l, X)\\right}\n & = & X^T \\left[ l - G(X \\cdot \\theta)\\right].\n\\end{array}\n$$\nExercise. Implement the gradient ascent procedure to determine $\\theta$, and try it out on the sample data.\n\nIn your solution, we'd like you to store all guesses in the matrix thetas, so that you can later see how the $\\theta(t)$ values evolve. To extract a particular column t, use the notation, theta[:, t:t+1]. This notation is necessary to preserve the \"shape\" of the column as a column vector.", "MAX_STEP = 100\nPHI = 0.1\n\n# Get the data coordinate matrix, X, and labels vector, l\nX = points\nl = labels.astype (dtype=float)\n\n# Store *all* guesses, for subsequent analysis\nthetas = np.zeros ((3, MAX_STEP+1))\n\nfor t in range (MAX_STEP):\n # @YOUSE: Fill in this code\n pass\n \nprint \"Your (hand) solution:\", theta.T.flatten ()\nprint \"Computed solution:\", thetas[:, MAX_STEP]\n\ntheta_mle = thetas[:, MAX_STEP:]\n\n# Generate 0/1 labels for computed discriminant:\nis_correct_mle = gen_labels_part3 (points, labels, theta_mle)\n\nprint \"Number of misclassified points using MLE:\", (len (points) - sum (is_correct_mle))[0]\nprint \"\\n(Run the code cell below to visualize the results.)\"\n\n# Visually inspect the above results\ntraces_mle = make_2d_scatter_traces (points, is_correct_mle)\ntraces_mle.append (gen_lin_discr_trace (points, theta_mle))\n\n# Plot it!\nlayout_mle = Layout (xaxis=dict (range=[-1.25, 2.25]),\n yaxis=dict (range=[-3.25, 2.25]))\nfig_mle = Figure (data=traces_mle, layout=layout_mle)\npy.iplot (fig_mle)", "Exercise. Make a contour plot of the log-likelihood and draw the trajectory taken by the $\\theta(t)$ values laid on top of it.", "def log_likelihood (theta, l, X):\n # @YOUSE: Complete this function to evaluate the log-likelihood\n pass\n\nn1_ll = 100\nx1_ll = np.linspace (-20., 0., n1_ll)\nn2_ll = 100\nx2_ll = np.linspace (-20., 0., n2_ll)\nx1_ll_grid, x2_ll_grid = np.meshgrid (x1_ll, x2_ll)\n\nll_grid = np.zeros ((n1_ll, n2_ll))\n# @YOUSE: Write some code to compute ll_grid, which the following code cell visualizes\n\ntrace_ll_grid = Contour (x=x1_ll, y=x2_ll, z=ll_grid)\ntrace_thetas = Scatter (x=thetas[1, :], y=thetas[2, :], mode='markers+lines')\npy.iplot ([trace_ll_grid, trace_thetas])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
d-li14/CS231n-Assignments
assignment2/ConvolutionalNetworks.ipynb
gpl-3.0
[ "Convolutional Networks\nSo far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.\nFirst you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.", "# As usual, a bit of setup\nfrom __future__ import print_function\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.cnn import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient\nfrom cs231n.layers import *\nfrom cs231n.fast_layers import *\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.items():\n print('%s: ' % k, v.shape)", "Convolution: Naive forward pass\nThe core of a convolutional network is the convolution operation. In the file cs231n/layers.py, implement the forward pass for the convolution layer in the function conv_forward_naive. \nYou don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.\nYou can test your implementation by running the following:", "x_shape = (2, 3, 4, 4)\nw_shape = (3, 3, 4, 4)\nx = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)\nw = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)\nb = np.linspace(-0.1, 0.2, num=3)\n\nconv_param = {'stride': 2, 'pad': 1}\nout, _ = conv_forward_naive(x, w, b, conv_param)\ncorrect_out = np.array([[[[-0.08759809, -0.10987781],\n [-0.18387192, -0.2109216 ]],\n [[ 0.21027089, 0.21661097],\n [ 0.22847626, 0.23004637]],\n [[ 0.50813986, 0.54309974],\n [ 0.64082444, 0.67101435]]],\n [[[-0.98053589, -1.03143541],\n [-1.19128892, -1.24695841]],\n [[ 0.69108355, 0.66880383],\n [ 0.59480972, 0.56776003]],\n [[ 2.36270298, 2.36904306],\n [ 2.38090835, 2.38247847]]]])\n\n# Compare your output to ours; difference should be around 2e-8\nprint('Testing conv_forward_naive')\nprint('difference: ', rel_error(out, correct_out))", "Aside: Image processing via convolutions\nAs fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. 
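In addition to the visual check, once the next cell has produced x, w, b, and out, you can cross-check one image/filter pair numerically against SciPy; this is only a hedged sketch, valid only for this particular stride-1, pad-1, 3x3 setting, where scipy.signal.correlate2d with mode='same' and zero fill computes the same (unflipped) correlation as the convolution layer.\n```python\nfrom scipy.signal import correlate2d\n# reference output for image 0, filter 1 (the edge-detection filter), summed over input channels\nref = b[1] + sum(correlate2d(x[0, c], w[1, c], mode='same', boundary='fill', fillvalue=0)\n                 for c in range(3))\nprint(np.abs(out[0, 1] - ref).max())   # should be essentially zero, up to roundoff\n```\n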
We can then visualize the results as a sanity check.", "from scipy.misc import imread, imresize\n\nkitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')\n# kitten is wide, and puppy is already square\nd = kitten.shape[1] - kitten.shape[0]\nkitten_cropped = kitten[:, d//2:-d//2, :]\n\nimg_size = 200 # Make this smaller if it runs too slow\nx = np.zeros((2, 3, img_size, img_size))\nx[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))\nx[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))\n\n# Set up a convolutional weights holding 2 filters, each 3x3\nw = np.zeros((2, 3, 3, 3))\n\n# The first filter converts the image to grayscale.\n# Set up the red, green, and blue channels of the filter.\nw[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]\nw[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]\nw[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]\n\n# Second filter detects horizontal edges in the blue channel.\nw[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]\n\n# Vector of biases. We don't need any bias for the grayscale\n# filter, but for the edge detection filter we want to add 128\n# to each output so that nothing is negative.\nb = np.array([0, 128])\n\n# Compute the result of convolving each input in x with each filter in w,\n# offsetting by b, and storing the results in out.\nout, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})\n\ndef imshow_noax(img, normalize=True):\n \"\"\" Tiny helper to show images as uint8 and remove axis labels \"\"\"\n if normalize:\n img_max, img_min = np.max(img), np.min(img)\n img = 255.0 * (img - img_min) / (img_max - img_min)\n plt.imshow(img.astype('uint8'))\n plt.gca().axis('off')\n\n# Show the original images and the results of the conv operation\nplt.subplot(2, 3, 1)\nimshow_noax(puppy, normalize=False)\nplt.title('Original image')\nplt.subplot(2, 3, 2)\nimshow_noax(out[0, 0])\nplt.title('Grayscale')\nplt.subplot(2, 3, 3)\nimshow_noax(out[0, 1])\nplt.title('Edges')\nplt.subplot(2, 3, 4)\nimshow_noax(kitten_cropped, normalize=False)\nplt.subplot(2, 3, 5)\nimshow_noax(out[1, 0])\nplt.subplot(2, 3, 6)\nimshow_noax(out[1, 1])\nplt.show()", "Convolution: Naive backward pass\nImplement the backward pass for the convolution operation in the function conv_backward_naive in the file cs231n/layers.py. Again, you don't need to worry too much about computational efficiency.\nWhen you are done, run the following to check your backward pass with a numeric gradient check.", "np.random.seed(231)\nx = np.random.randn(4, 3, 5, 5)\nw = np.random.randn(2, 3, 3, 3)\nb = np.random.randn(2,)\ndout = np.random.randn(4, 2, 5, 5)\nconv_param = {'stride': 1, 'pad': 1}\n\ndx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)\n\nout, cache = conv_forward_naive(x, w, b, conv_param)\ndx, dw, db = conv_backward_naive(dout, cache)\n\n# Your errors should be around 1e-8'\nprint('Testing conv_backward_naive function')\nprint('dx error: ', rel_error(dx, dx_num))\nprint('dw error: ', rel_error(dw, dw_num))\nprint('db error: ', rel_error(db, db_num))", "Max pooling: Naive forward\nImplement the forward pass for the max-pooling operation in the function max_pool_forward_naive in the file cs231n/layers.py. 
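As a hedged reminder of the shape bookkeeping only (not an implementation), the pooled output size follows the usual no-padding formula, so the (2, 3, 4, 4) test input below should come out as (2, 3, 2, 2).\n```python\nN, C, H, W = 2, 3, 4, 4            # matches x_shape in the check below\npool_h, pool_w, stride = 2, 2, 2   # matches pool_param below\nH_out = 1 + (H - pool_h) // stride\nW_out = 1 + (W - pool_w) // stride\nprint((N, C, H_out, W_out))        # (2, 3, 2, 2)\n```\n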
Again, don't worry too much about computational efficiency.\nCheck your implementation by running the following:", "x_shape = (2, 3, 4, 4)\nx = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)\npool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}\n\nout, _ = max_pool_forward_naive(x, pool_param)\n\ncorrect_out = np.array([[[[-0.26315789, -0.24842105],\n [-0.20421053, -0.18947368]],\n [[-0.14526316, -0.13052632],\n [-0.08631579, -0.07157895]],\n [[-0.02736842, -0.01263158],\n [ 0.03157895, 0.04631579]]],\n [[[ 0.09052632, 0.10526316],\n [ 0.14947368, 0.16421053]],\n [[ 0.20842105, 0.22315789],\n [ 0.26736842, 0.28210526]],\n [[ 0.32631579, 0.34105263],\n [ 0.38526316, 0.4 ]]]])\n\n# Compare your output with ours. Difference should be around 1e-8.\nprint('Testing max_pool_forward_naive function:')\nprint('difference: ', rel_error(out, correct_out))", "Max pooling: Naive backward\nImplement the backward pass for the max-pooling operation in the function max_pool_backward_naive in the file cs231n/layers.py. You don't need to worry about computational efficiency.\nCheck your implementation with numeric gradient checking by running the following:", "np.random.seed(231)\nx = np.random.randn(3, 2, 8, 8)\ndout = np.random.randn(3, 2, 4, 4)\npool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}\n\ndx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)\n\nout, cache = max_pool_forward_naive(x, pool_param)\ndx = max_pool_backward_naive(dout, cache)\n\n# Your error should be around 1e-12\nprint('Testing max_pool_backward_naive function:')\nprint('dx error: ', rel_error(dx, dx_num))", "Fast layers\nMaking convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.\nThe fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:\nbash\npython setup.py build_ext --inplace\nThe API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass recieves upstream derivatives and the cache object and produces gradients with respect to the data and weights.\nNOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. 
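A hedged way to express that condition in code (this mirrors the usual reshape-based trick, though the actual fast implementation may differ in details):\n```python\ndef pooling_tiles_input(x_shape, pool_param):\n    # non-overlapping regions that exactly cover the input: the stride equals\n    # the pool size and the spatial dimensions divide evenly\n    N, C, H, W = x_shape\n    ph, pw = pool_param['pool_height'], pool_param['pool_width']\n    s = pool_param['stride']\n    return ph == s and pw == s and H % ph == 0 and W % pw == 0\n\nprint(pooling_tiles_input((100, 3, 32, 32), {'pool_height': 2, 'pool_width': 2, 'stride': 2}))  # True\n```\n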
If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.\nYou can compare the performance of the naive and fast versions of these layers by running the following:", "from cs231n.fast_layers import conv_forward_fast, conv_backward_fast\nfrom time import time\nnp.random.seed(231)\nx = np.random.randn(100, 3, 31, 31)\nw = np.random.randn(25, 3, 3, 3)\nb = np.random.randn(25,)\ndout = np.random.randn(100, 25, 16, 16)\nconv_param = {'stride': 2, 'pad': 1}\n\nt0 = time()\nout_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)\nt1 = time()\nout_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)\nt2 = time()\n\nprint('Testing conv_forward_fast:')\nprint('Naive: %fs' % (t1 - t0))\nprint('Fast: %fs' % (t2 - t1))\nprint('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))\nprint('Difference: ', rel_error(out_naive, out_fast))\n\nt0 = time()\ndx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)\nt1 = time()\ndx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)\nt2 = time()\n\nprint('\\nTesting conv_backward_fast:')\nprint('Naive: %fs' % (t1 - t0))\nprint('Fast: %fs' % (t2 - t1))\nprint('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))\nprint('dx difference: ', rel_error(dx_naive, dx_fast))\nprint('dw difference: ', rel_error(dw_naive, dw_fast))\nprint('db difference: ', rel_error(db_naive, db_fast))\n\nfrom cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast\nnp.random.seed(231)\nx = np.random.randn(100, 3, 32, 32)\ndout = np.random.randn(100, 3, 16, 16)\npool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}\n\nt0 = time()\nout_naive, cache_naive = max_pool_forward_naive(x, pool_param)\nt1 = time()\nout_fast, cache_fast = max_pool_forward_fast(x, pool_param)\nt2 = time()\n\nprint('Testing pool_forward_fast:')\nprint('Naive: %fs' % (t1 - t0))\nprint('fast: %fs' % (t2 - t1))\nprint('speedup: %fx' % ((t1 - t0) / (t2 - t1)))\nprint('difference: ', rel_error(out_naive, out_fast))\n\nt0 = time()\ndx_naive = max_pool_backward_naive(dout, cache_naive)\nt1 = time()\ndx_fast = max_pool_backward_fast(dout, cache_fast)\nt2 = time()\n\nprint('\\nTesting pool_backward_fast:')\nprint('Naive: %fs' % (t1 - t0))\nprint('speedup: %fx' % ((t1 - t0) / (t2 - t1)))\nprint('dx difference: ', rel_error(dx_naive, dx_fast))", "Convolutional \"sandwich\" layers\nPreviously we introduced the concept of \"sandwich\" layers that combine multiple operations into commonly used patterns. 
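Schematically, a sandwich layer is nothing more than function composition, with the intermediate caches kept around for the backward pass. Here is a hedged sketch of the conv-relu pattern; it assumes the conv_forward_fast/conv_backward_fast and relu_forward/relu_backward helpers already imported at the top of this notebook, and the real code in cs231n/layer_utils.py may differ in details.\n```python\ndef conv_relu_forward_sketch(x, w, b, conv_param):\n    # convolution followed by a ReLU; keep both caches for the backward pass\n    a, conv_cache = conv_forward_fast(x, w, b, conv_param)\n    out, relu_cache = relu_forward(a)\n    return out, (conv_cache, relu_cache)\n\ndef conv_relu_backward_sketch(dout, cache):\n    # unpack the caches and apply the backward passes in reverse order\n    conv_cache, relu_cache = cache\n    da = relu_backward(dout, relu_cache)\n    dx, dw, db = conv_backward_fast(da, conv_cache)\n    return dx, dw, db\n```\n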
In the file cs231n/layer_utils.py you will find sandwich layers that implement a few commonly used patterns for convolutional networks.", "from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward\nnp.random.seed(231)\nx = np.random.randn(2, 3, 16, 16)\nw = np.random.randn(3, 3, 3, 3)\nb = np.random.randn(3,)\ndout = np.random.randn(2, 3, 8, 8)\nconv_param = {'stride': 1, 'pad': 1}\npool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}\n\nout, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)\ndx, dw, db = conv_relu_pool_backward(dout, cache)\n\ndx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)\n\nprint('Testing conv_relu_pool')\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dw error: ', rel_error(dw_num, dw))\nprint('db error: ', rel_error(db_num, db))\n\nfrom cs231n.layer_utils import conv_relu_forward, conv_relu_backward\nnp.random.seed(231)\nx = np.random.randn(2, 3, 8, 8)\nw = np.random.randn(3, 3, 3, 3)\nb = np.random.randn(3,)\ndout = np.random.randn(2, 3, 8, 8)\nconv_param = {'stride': 1, 'pad': 1}\n\nout, cache = conv_relu_forward(x, w, b, conv_param)\ndx, dw, db = conv_relu_backward(dout, cache)\n\ndx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)\ndw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)\ndb_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)\n\nprint('Testing conv_relu:')\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dw error: ', rel_error(dw_num, dw))\nprint('db error: ', rel_error(db_num, db))", "Three-layer ConvNet\nNow that you have implemented all the necessary layers, we can put them together into a simple convolutional network.\nOpen the file cs231n/classifiers/cnn.py and complete the implementation of the ThreeLayerConvNet class. Run the following cells to help you debug:\nSanity check loss\nAfter you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about log(C) for C classes. When we add regularization this should go up.", "model = ThreeLayerConvNet()\n\nN = 50\nX = np.random.randn(N, 3, 32, 32)\ny = np.random.randint(10, size=N)\n\nloss, grads = model.loss(X, y)\nprint('Initial loss (no regularization): ', loss)\n\nmodel.reg = 0.5\nloss, grads = model.loss(X, y)\nprint('Initial loss (with regularization): ', loss)", "Gradient check\nAfter the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artifical data and a small number of neurons at each layer. 
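For reference, here is a hedged sketch of what a centered-difference numeric gradient does; the actual eval_numerical_gradient in cs231n/gradient_check.py is more general, but the idea is the same.\n```python\ndef numeric_gradient_sketch(f, x, h=1e-6):\n    # df/dx_i is approximated by (f(x + h*e_i) - f(x - h*e_i)) / (2*h), one coordinate at a time\n    grad = np.zeros_like(x)\n    it = np.nditer(x, flags=['multi_index'])\n    while not it.finished:\n        i = it.multi_index\n        old = x[i]\n        x[i] = old + h\n        fp = f(x)\n        x[i] = old - h\n        fm = f(x)\n        x[i] = old                     # restore the original value\n        grad[i] = (fp - fm) / (2 * h)\n        it.iternext()\n    return grad\n```\n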
Note: correct implementations may still have relative errors up to 1e-2.", "num_inputs = 2\ninput_dim = (3, 16, 16)\nreg = 0.0\nnum_classes = 10\nnp.random.seed(231)\nX = np.random.randn(num_inputs, *input_dim)\ny = np.random.randint(num_classes, size=num_inputs)\n\nmodel = ThreeLayerConvNet(num_filters=3, filter_size=3,\n input_dim=input_dim, hidden_dim=7,\n dtype=np.float64)\nloss, grads = model.loss(X, y)\nfor param_name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)\n e = rel_error(param_grad_num, grads[param_name])\n print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))", "Overfit small data\nA nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.", "np.random.seed(231)\n\nnum_train = 100\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nmodel = ThreeLayerConvNet(weight_scale=1e-2)\n\nsolver = Solver(model, small_data,\n num_epochs=15, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=1)\nsolver.train()", "Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:", "plt.subplot(2, 1, 1)\nplt.plot(solver.loss_history, 'o')\nplt.xlabel('iteration')\nplt.ylabel('loss')\n\nplt.subplot(2, 1, 2)\nplt.plot(solver.train_acc_history, '-o')\nplt.plot(solver.val_acc_history, '-o')\nplt.legend(['train', 'val'], loc='upper left')\nplt.xlabel('epoch')\nplt.ylabel('accuracy')\nplt.show()", "Train the net\nBy training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:", "model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)\n\nsolver = Solver(model, data,\n num_epochs=1, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=20)\nsolver.train()", "Visualize Filters\nYou can visualize the first-layer convolutional filters from the trained network by running the following:", "from cs231n.vis_utils import visualize_grid\n\ngrid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))\nplt.imshow(grid.astype('uint8'))\nplt.axis('off')\nplt.gcf().set_size_inches(5, 5)\nplt.show()", "Spatial Batch Normalization\nWe already saw that batch normalization is a very useful technique for training deep fully-connected networks. Batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called \"spatial batch normalization.\"\nNormally batch-normalization accepts inputs of shape (N, D) and produces outputs of shape (N, D), where we normalize across the minibatch dimension N. For data coming from convolutional layers, batch normalization needs to accept inputs of shape (N, C, H, W) and produce outputs of shape (N, C, H, W) where the N dimension gives the minibatch size and the (H, W) dimensions give the spatial size of the feature map.\nIf the feature map was produced using convolutions, then we expect the statistics of each feature channel to be relatively consistent both between different imagesand different locations within the same image. 
Therefore spatial batch normalization computes a mean and variance for each of the C feature channels by computing statistics over both the minibatch dimension N and the spatial dimensions H and W.\nSpatial batch normalization: forward\nIn the file cs231n/layers.py, implement the forward pass for spatial batch normalization in the function spatial_batchnorm_forward. Check your implementation by running the following:", "np.random.seed(231)\n# Check the training-time forward pass by checking means and variances\n# of features both before and after spatial batch normalization\n\nN, C, H, W = 2, 3, 4, 5\nx = 4 * np.random.randn(N, C, H, W) + 10\n\nprint('Before spatial batch normalization:')\nprint(' Shape: ', x.shape)\nprint(' Means: ', x.mean(axis=(0, 2, 3)))\nprint(' Stds: ', x.std(axis=(0, 2, 3)))\n\n# Means should be close to zero and stds close to one\ngamma, beta = np.ones(C), np.zeros(C)\nbn_param = {'mode': 'train'}\nout, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)\nprint('After spatial batch normalization:')\nprint(' Shape: ', out.shape)\nprint(' Means: ', out.mean(axis=(0, 2, 3)))\nprint(' Stds: ', out.std(axis=(0, 2, 3)))\n\n# Means should be close to beta and stds close to gamma\ngamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])\nout, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)\nprint('After spatial batch normalization (nontrivial gamma, beta):')\nprint(' Shape: ', out.shape)\nprint(' Means: ', out.mean(axis=(0, 2, 3)))\nprint(' Stds: ', out.std(axis=(0, 2, 3)))\n\nnp.random.seed(231)\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\nN, C, H, W = 10, 4, 11, 12\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(C)\nbeta = np.zeros(C)\nfor t in range(50):\n x = 2.3 * np.random.randn(N, C, H, W) + 13\n spatial_batchnorm_forward(x, gamma, beta, bn_param)\nbn_param['mode'] = 'test'\nx = 2.3 * np.random.randn(N, C, H, W) + 13\na_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After spatial batch normalization (test-time):')\nprint(' means: ', a_norm.mean(axis=(0, 2, 3)))\nprint(' stds: ', a_norm.std(axis=(0, 2, 3)))", "Spatial batch normalization: backward\nIn the file cs231n/layers.py, implement the backward pass for spatial batch normalization in the function spatial_batchnorm_backward. 
Run the following to check your implementation using a numeric gradient check:", "np.random.seed(231)\nN, C, H, W = 2, 3, 4, 5\nx = 5 * np.random.randn(N, C, H, W) + 12\ngamma = np.random.randn(C)\nbeta = np.random.randn(C)\ndout = np.random.randn(N, C, H, W)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]\nfb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma, dout)\ndb_num = eval_numerical_gradient_array(fb, beta, dout)\n\n_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))", "Extra Credit Description\nIf you implement any additional features for extra credit, clearly describe them here with pointers to any code in this or other files if applicable." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dpshelio/2015-EuroScipy-pandas-tutorial
04 - Groupby operations.ipynb
bsd-2-clause
[ "Groupby operations\nSome imports:", "%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\ntry:\n import seaborn\nexcept ImportError:\n pass\n\npd.options.display.max_rows = 10", "Some 'theory': the groupby operation (split-apply-combine)\nBy \"group by\" we are referring to a process involving one or more of the following steps\n\nSplitting the data into groups based on some criteria\nApplying a function to each group independently\nCombining the results into a data structure\n\n<img src=\"img/splitApplyCombine.png\">\nSimilar to SQL GROUP BY\nThe example of the image in pandas syntax:", "df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],\n 'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})\ndf\n\ndf.groupby('key').aggregate('sum') # np.sum\n\ndf.groupby('key').sum()", "And now applying this on some real data\nThese exercises are based on the PyCon tutorial of Brandon Rhodes (so all credit to him!) and the datasets he prepared for that. You can download these data from here: titles.csv and cast.csv and put them in the /data folder.", "cast = pd.read_csv('data/cast.csv')\ncast.head()\n\ntitles = pd.read_csv('data/titles.csv')\ntitles.head()", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Using groupby(), plot the number of films that have been released each decade in the history of cinema.\n</div>\n\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Use groupby() to plot the number of \"Hamlet\" films made each decade.\n</div>\n\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>: How many leading (n=1) roles were available to actors, and how many to actresses, in each year of the 1950s?\n</div>\n\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Use groupby() to determine how many roles are listed for each of The Pink Panther movies.\n</div>\n\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>: List, in order by year, each of the films in which Frank Oz has played more than 1 role.\n</div>\n\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>: List each of the characters that Frank Oz has portrayed at least twice.\n</div>\n\nTransforms\nSometimes you don't want to aggregate the groups, but transform the values in each group. 
This can be achieved with transform:", "df\n\ndef normalize(group):\n return (group - group.mean()) / group.std()\n\ndf.groupby('key').transform(normalize)", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Calculate the ratio of the number of roles of actors and actresses to the total number of roles per decade, and plot both ratios over time (tip: you need to do a groupby twice, in two steps: first calculate the numbers, then the ratios).\n</div>\n\nValue counts\nA useful shortcut to calculate the number of occurrences of certain values is value_counts (this is somewhat equivalent to df.groupby(key).size())\nFor example, what are the most frequently occurring movie titles?", "titles.title.value_counts().head()", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Which years saw the most films released?\n</div>\n\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Plot the number of released films over time\n</div>\n\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Plot the number of \"Hamlet\" films made each decade.\n</div>\n\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>: What are the 11 most common character names in movie history?\n</div>\n\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Which actors or actresses appeared in the most movies in the year 2010?\n</div>\n\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Plot how many roles Brad Pitt has played in each year of his career.\n</div>\n\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>: What are the 10 most common movie titles that start with the words \"The Life\"?\n</div>\n\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>: How many leading (n=1) roles were available to actors, and how many to actresses, in the 1950s? And in the 2000s?\n</div>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
arnicas/eyeo_nlp
python/Word2Vec_Yelp.ipynb
cc0-1.0
[ "Word2Vec Whirlwind Tour Using Yelp Review Data\nLynn Cherny (@arnicas, arnicas@gmail.com)\nNote: This repo requires a gzipped file of data to be unzipped, in order to use it from scratch. Otherwise, you can load the model that I saved.", "import json\nimport gensim\nimport numpy as np\nimport string\nimport tsne as ts # local copy in the repo\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt", "You can skip the following steps if you just want to load the word2vec model I provided... but this is the raw data approach.", "# We need to unzip the data file to use it:\n!gunzip ../data/yelp/yelp_academic_dataset_reviews.json.gz\n\n# Make sure it is there and unzipped:\n!ls -al ../data/yelp/\n\n## Make sure this dataset is here and unzipped.\ndata = []\nwith open(\"../data/yelp/yelp_academic_dataset_reviews.json\") as handle:\n for line in handle.readlines():\n yelp = json.loads(line)\n data.append(yelp)\n\nlen(data)\n\ndata[0]\n\nrevs = [d[u'text'] for d in data]\n\nrevs[0]", "What is Word2Vec?\n\"Generally, word2vec is trained using something called a skip-gram model. The skip-gram model, pictures above, attempts to use the vector representation that it learns to predict the words that appear around a given word in the corpus. Essentially, it uses the context of the word as it is used in a variety of books and other literature to derive a meaningful set of numbers. If the “context” of two words is similar, they will have similar vector representations.\" (Source)\n\"In word2vec, a distributed representation of a word is used. Take a vector with several hundred dimensions (say 1000). Each word is representated by a distribution of weights across those elements. So instead of a one-to-one mapping between an element in the vector and a word, the representation of a word is spread across all of the elements in the vector, and each element in the vector contributes to the definition of many words.\nIf I label the dimensions in a hypothetical word vector (there are no such pre-assigned labels in the algorithm of course), it might look a bit like this:\"\n<img src=\"img/word2vec-distributed-representation.png\">\nSource\nSo that means we can do associative logic, or analogies, with these models:\n<img src=\"img/word2vec-king-queen-vectors.png\">\nSpecifically, a large enough model of the right kind of language (like a lot of news, or lots of books) will allow you to get \"queen\" from putting in man, king, woman... and doing vector math on them. So, king-man+woman=queen. \nSource\n<img src=\"img/word2vec-king-queen-composition.png\">\nSource\nCreating a word2vec Model with Gensim\nThis takes a while. You don't need to do this, since I already did it. You can skip down to the place where we load the file!", "\"\"\" An alternate from gensim tutorials - just use all words in the model in a rewiew. No nltk used to split.\"\"\"\n\nimport re\n\nclass YelpReviews(object):\n \"\"\"Iterate over sentences of all plaintext files in a directory \"\"\"\n SPLIT_SENTENCES = re.compile(u\"[.!?:]\\s+\") # split sentences on these characters\n\n def __init__(self, objs, field):\n self.field = field\n self.objs = objs\n\n def __iter__(self):\n for obj in self.objs:\n text = obj[self.field]\n for sentence in self.SPLIT_SENTENCES.split(text):\n yield gensim.utils.simple_preprocess(sentence, deacc=True)\n\n## Don't do this is you already have the model file! 
Skip to the step after.\n## Otherwise, feel free to do it from scratch.\n## We pass in the full data objs and use the YelpReviews class to get the 'text' field for us.\n\n#model = gensim.models.Word2Vec(YelpReviews(data, 'text'), min_count=2, workers=2)\n\n#model.save('yelp_w2v_model.mod')\n\n#model.save_word2vec_format('yelp_w2vformat.mod')\n\n# If you already have a model file, load it here:\n\nmodel = gensim.models.Word2Vec.load_word2vec_format('../data/yelp/yelp_w2vformat.mod')\n\nmodel.most_similar(positive=[\"chicken\", \"waffles\"], topn=20)\n\nmodel.most_similar(\"waitress\")\n\nmodel.vocab.items()[0:5]\n\nmodel.most_similar(['good', 'pizza'])\n\nmodel.most_similar_cosmul(['good', 'pizza']) # less susceptible to extreme outliers\n\nmodel.most_similar(['dog'])\n\nmodel.most_similar(['salon'])\n\nmodel.most_similar(positive=['donuts', 'nypd'], negative=['fireman'])", "Now let's do some basic word sentiment stuff again... for the html side!", "import nltk\nnltk.data.path = ['../nltk_data']\nfrom nltk.corpus import stopwords\nenglish_stops = stopwords.words('english')\n\nrevs[0]\n\ntokens = [nltk.word_tokenize(rev) for rev in revs] # this takes a long time. don't run unless you're sure.\n\nmystops = english_stops + [u\"n't\", u'...', u\"'ve\"]\n\ndef clean_tokens(tokens, stoplist):\n \"\"\" Lowercases, takes out punct and stopwords and short strings \"\"\"\n return [token.lower() for token in tokens if (token not in string.punctuation) and \n (token.lower() not in stoplist) and len(token) > 2]\n\nclean = [clean_tokens(tok, mystops) for tok in tokens]\n\nfrom nltk import Text\n\nallclean = [y for x in clean for y in x] # flatten the list of lists\ncleantext = Text(allclean)\n\nmostcommon = cleantext.vocab().most_common()[0:1500]\nmostcommon_words = [word[0] for word in mostcommon]\n\nmostcommon_words[0:12]\n\n# thing required to get the vectors for tsne\n\ndef get_vectors(words, model):\n # requires model be in the binary format, not gensim's\n word_vectors = []\n word_labels = []\n for word in words:\n if word in model:\n word_vectors.append( model[word] )\n word_labels.append(word)\n return word_vectors, word_labels\n\nmymodel = gensim.models.Word2Vec.load_word2vec_format('../data/yelp/yelp_w2vformat.mod')\nvectors, labels = get_vectors(mostcommon_words, mymodel)\n\n# should be same as top words above\nlabels[:12]\n\nres = ts.tsne(np.asfarray(vectors, dtype='float'), 2, 50, 20)", "The \"AFINN-111.txt\" file is another sentiment file.", "from collections import defaultdict\nsentiment = defaultdict(int)\nwith open('../data/sentiment_wordlists/AFINN-111.txt') as handle:\n for line in handle.readlines():\n word = line.split('\\t')[0]\n polarity = line.split('\\t')[1]\n sentiment[word] = int(polarity)\n\nsentiment['pho']\n\nsentiment['good']\n\nsentiment['angry']\n\nsentiment['pizza']\n\ndef render_json( vectors, labels, filename ):\n output = []\n vectors = np.array(vectors)\n for i in range(len(vectors)):\n new_hash = {}\n new_hash[\"word\"] = str(labels[i])\n new_hash[\"x\"] = int(vectors[i][0])\n new_hash[\"y\"] = int(vectors[i][1])\n new_hash[\"sentiment\"] = sentiment[str(labels[i])]\n output.append(new_hash)\n with open(filename, 'w') as handle:\n json.dump(output, handle)\n\nrender_json(res, labels, \"../outputdata/yelp.json\")", "TSNE Plotting Stuff", "plt.figure(figsize=(15, 15))\nplt.scatter(res[:,0], res[:,1], s=10, color='gray', alpha=0.2)", "If you go to the file tsne_yelp.html, you can interact with this and see what the words are." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jacobdein/alpine-soundscapes
Compute distance to roads.ipynb
mit
[ "Compute distance to roads\nThis notebook computes the distance to each of the nearest road types in a 'roads' vector map from a vector map of 'points' (sample locations).\nThis notebook uses GRASS GIS (7.0.4), and must be run inside of a GRASS environment (start the jupyter notebook server from the GRASS command line).\nRequired packages\nnumpy <br />\npandas <br />\npyprind\nVariable declarations\npoints – vector map with points to measure distance from (sample locations) <br />\nroads – vector map with roads data <br />\nroad_type_field – field name containing the road classification type (i.e. residential, secondary, etc.) <br />\ndistance_table_filename – path to export the distances table as a csv file", "points = 'sample_points_field'\n\nroads = 'highway'\n\nroad_type_field = 'Type'\n\ndistance_table_filename = \"\"", "Import statements", "import pandas\nimport numpy as np\nimport pyprind", "GRASS import statements", "import grass.script as gscript\n\nfrom grass.pygrass.vector import VectorTopo\nfrom grass.pygrass.vector.table import DBlinks", "Function declarations\nconnect to an attribute table", "def connectToAttributeTable(map):\n vector = VectorTopo(map)\n vector.open(mode='r')\n dblinks = DBlinks(vector.c_mapinfo)\n link = dblinks[0]\n return link.table()", "finds the nearest element in a vector map (to) for elements in another vector map (from) <br />\ncalls the GRASS v.distance command", "def computeDistance(from_map, to_map):\n\n upload = 'dist'\n result = gscript.read_command('v.distance',\n from_=from_map,\n to=to_map,\n upload=upload,\n separator='comma',\n flags='p')\n return result.split('\\n')", "selects vector features from an existing vector map and creates a new vector map containing only the selected features <br />\ncalls the GRASS v.extract command", "def extractFeatures(input_, type_, output):\n\n where = \"{0} = '{1}'\".format(road_type_field, type_)\n gscript.read_command('v.extract',\n input_=input_,\n where=where,\n output=output,\n overwrite=True)", "Get unique 'roads' types", "roads_table = connectToAttributeTable(map=roads)\nroads_table.filters.select(road_type_field)\ncursor = roads_table.execute()\nresult = np.array(cursor.fetchall())\ncursor.close()\nroad_types = np.unique(result)\n\nprint(road_types)", "Get 'points' attribute table", "point_table = connectToAttributeTable(map=points)\npoint_table.filters.select()\ncolumns = point_table.columns.names()\ncursor = point_table.execute()\nresult = np.array(cursor.fetchall())\ncursor.close()\npoint_data = pandas.DataFrame(result, columns=columns).set_index('cat')", "Loop through 'roads' types and compute the distances from all 'points'", "distances = pandas.DataFrame(columns=road_types, index=point_data.index)\n\nprogress_bar = pyprind.ProgBar(road_types.size, bar_char='█', title='Progress', monitor=True, stream=1, width=50)\n\nfor type_ in road_types:\n \n # update progress bar\n progress_bar.update(item_id=type_)\n \n # extract road data based on type query\n extractFeatures(input_=roads, type_=type_, output='roads_tmp')\n \n # compute distance from points to road type\n results = computeDistance(points, 'roads_tmp')\n \n # save results to data frame\n distances[type_] = [ d.split(',')[1] for d in results[1:len(results)-1] ]\n\n# match index with SiteID\ndistances['SiteID'] = point_data['ID']\ndistances.set_index('SiteID', inplace=True)", "Export distances table to a csv file", "distances.to_csv(distance_table_filename, header=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
massimo-nocentini/PhD
notebooks/recurrences-unfolding.ipynb
apache-2.0
[ "<p>\n<img src=\"http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg\" \n alt=\"UniFI logo\" style=\"float: left; width: 20%; height: 20%;\">\n<div align=\"right\"> Massimo Nocentini<br>\n<small>\n<br>September {23, 26}, 2016: refactoring toward class-based code\n<br>September 22, 2016: Quicksort theory, average cases\n</small>\n</div>\n</p>\n<br>\n<p>\n<div align=\"center\">\n<b>Abstract</b><br>\nIn this notebook we study two recurrence relations arising from the analysis of the `Quicksort` algorithm: numbers of checks and swaps are taken into account, in the average case. Such relations involve subterms where subscripts dependends on *one* dimension. They are a simple, but interesting, starting point to approach the general method of <b>recurrence unfolding</b>, an algorithmic/symbolical idea stretched further in other notebooks.\n</div>\n</p>", "%run \"recurrences.py\"\n%run \"sums.py\"\n%run \"start_session.py\"\n\nfrom itertools import accumulate\n\ndef accumulating(acc, current): return Eq(acc.lhs + current.lhs, acc.rhs + current.rhs)", "A generalization using accumulation", "mapped = list(accumulate(mapped, accumulating))\nmapped\n\nclear_cache()\nm,v,r = to_matrix_notation(mapped, f, [n-k for k in range(-2, 19)])\nm,v,r\n\nm_sym = m.subs(inverted_fibs, simultaneous=True)\nm_sym[:,0] = m_sym[:,0].subs(f[2],f[1])\nm_sym[1,2] = m_sym[1,2].subs(f[2],f[1])\nm_sym\n\n# the following cell produces an error due to ordering, while `m * v` doesn't.\n#clear_cache()\n#m_sym * v\n\nto_matrix_notation(mapped, f, [n+k for k in range(-18, 3)])", "According to A162741, we can generalize the pattern above:", "i = symbols('i')\nd = IndexedBase('d')\nk_fn_gen = Eq((k+1)*f[n], Sum(d[k,2*k-i]*f[n-i], (i, 0, 2*k)))\nd_triangle= {d[0,0]:1, d[n,2*n]:1, d[n,k]:d[n-1, k-1]+d[n-1,k]}\nk_fn_gen, d_triangle\n\nmapped = list(accumulate(mapped, accumulating))\nmapped\n\n# skip this cell to maintain math coerent version\ndef adjust(term):\n a_wild, b_wild = Wild('a', exclude=[f]), Wild('b')\n matched = term.match(a_wild*f[n+2] + b_wild)\n return -(matched[a_wild]-1)*f[n+2]\n\nm = fix_combination(mapped,adjust, lambda v, side: Add(v, side))\nmapped = list(m)\nmapped\n\nto_matrix_notation(mapped, f, [n-k for k in range(-2, 19)])\n\nmapped = list(accumulate(mapped, accumulating))\nmapped\n\nto_matrix_notation(mapped, f, [n-k for k in range(-2, 19)])\n\nmapped = list(accumulate(mapped, accumulating))\nmapped\n\nto_matrix_notation(mapped, f, [n-k for k in range(-2, 19)])\n\nmapped = list(accumulate(mapped, accumulating))\nmapped\n\nto_matrix_notation(mapped, f, [n-k for k in range(-2, 19)])", "Unfolding a recurrence with generic coefficients", "s = IndexedBase('s')\na = IndexedBase('a')\nswaps_recurrence = Eq(n*s[n],(n+1)*s[n-1]+a[n])\nswaps_recurrence\n\nboundary_conditions = {s[0]:Integer(0)}\nswaps_recurrence_spec=dict(recurrence_eq=swaps_recurrence, indexed=s, \n index=n, terms_cache=boundary_conditions)\n\nunfolded = do_unfolding_steps(swaps_recurrence_spec, 4)\n\nrecurrence_eq = project_recurrence_spec(unfolded, recurrence_eq=True)\nrecurrence_eq\n\nfactored_recurrence_eq = project_recurrence_spec(factor_rhs_unfolded_rec(unfolded), recurrence_eq=True)\nfactored_recurrence_eq\n\nfactored_recurrence_eq.rhs.collect(s[n-5]).collect(a[n-4])\n\nfactored_recurrence_eq.subs(n,5)\n\nrecurrence_eq.subs(n, 5)\n\ndef additional_term(n): return (2*Integer(n)-3)/6\n\nas_dict = {a[n]:additional_term(n) for n in range(1,6)}\n\nrecurrence_eq.subs(n, 5).subs(as_dict)", "A curious relation about Fibonacci numbers, 
in matrix notation", "d = 10\nm = Matrix(d,d, lambda i,j: binomial(n-i,j)*binomial(n-j,i))\nm\n\nf = IndexedBase('f')\nfibs = [fibonacci(i) for i in range(50)]\nmp = (ones(1,d)*m*ones(d,1))[0,0]\nodd_fibs_eq = Eq(f[2*n+1], mp, evaluate=True)\nodd_fibs_eq\n\n(m*ones(d,1))", "<a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-sa/4.0/\"><img alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png\" /></a><br />This work by <a xmlns:cc=\"http://creativecommons.org/ns#\" href=\"https://github.com/massimo-nocentini/\" property=\"cc:attributionName\" rel=\"cc:attributionURL\">Massimo Nocentini</a> is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by-nc-sa/4.0/\">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mu4farooqi/deep-learning-projects
tv-script-generation/dlnd_tv_script_generation.ipynb
gpl-3.0
[ "TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)", "import numpy as np\nimport problem_unittests as tests\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n unique_words = set(text)\n vocab_to_int = {word: index for index, word in enumerate(unique_words)}\n int_to_vocab = {index: word for index, word in enumerate(unique_words)}\n return vocab_to_int, int_to_vocab\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)", "Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! 
)\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".", "def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n return {\n '.': \"||Period||\",\n ',': \"||Comma||\",\n '\"': \"||Quotation_Mark||\",\n ';': \"||Semicolon||\",\n '!': \"||Exclamation_Mark||\",\n '?': \"||Question_Mark||\",\n '(': \"||Left_Parentheses||\",\n ')': \"||Right_Parentheses||\",\n '--': \"||Dash||\",\n '\\n': \"||Return||\"\n }\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.\nBuild the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple (Input, Targets, LearningRate)", "def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n return tf.placeholder(dtype=tf.int32, shape=(None, None), name=\"input\"), tf.placeholder(dtype=tf.int32, shape=(None, None), name=\"target\"), tf.placeholder(dtype=tf.float32, name=\"learning_rate\")\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)", "Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The Rnn size should be set using rnn_size\n- Initalize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)", "\ndef get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)])\n return cell, tf.identity(cell.zero_state(batch_size, tf.float32), 'initial_state')\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)", "Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.", "def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n embeddings = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))\n return tf.nn.embedding_lookup(embeddings, input_data)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)", "Build RNN\nYou created a RNN Cell in the get_init_cell() function. 
Time to use the cell to create a RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final_state state in the following tuple (Outputs, FinalState)", "def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)\n return outputs, tf.identity(final_state, name=\"final_state\")\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)", "Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)", "def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :param embed_dim: Number of embedding dimensions\n :return: Tuple (Logits, FinalState)\n \"\"\"\n embeddings = get_embed(input_data, vocab_size, embed_dim)\n outputs, final_state = build_rnn(cell, embeddings) # max_time x batch_size x rnn_size\n logits = tf.contrib.layers.fully_connected(outputs, vocab_size, None)\n return logits, final_state\n\n\n# Test was failing because it was written for some previous version of Tensorflow\n# tests.test_build_nn(build_nn)", "Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2], [ 7 8], [13 14]]\n # Batch of targets\n [[ 2 3], [ 8 9], [14 15]]\n ]\n# Second Batch\n [\n # Batch of Input\n [[ 3 4], [ 9 10], [15 16]]\n # Batch of targets\n [[ 4 5], [10 11], [16 17]]\n ]\n# Third Batch\n [\n # Batch of Input\n [[ 5 6], [11 12], [17 18]]\n # Batch of targets\n [[ 6 7], [12 13], [18 1]]\n ]\n]\n```\nNotice that the last target value in the last batch is the first input value of the first batch. In this case, 1. 
This is a common technique used when creating sequence batches, although it is rather unintuitive.", "def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n total_batches = len(int_text) // (batch_size * seq_length)\n int_text = np.asarray(int_text[:(total_batches * batch_size * seq_length)])\n label_text = np.asarray([0] * int_text.shape[0])\n label_text[:-1], label_text[-1] = int_text[1:], int_text[0]\n int_text = np.reshape(int_text, (-1, batch_size, seq_length))\n label_text = np.reshape(label_text, (-1, batch_size, seq_length))\n # print(np.concatenate((int_text[:, None, :, :], label_text[:, None, :, :]), axis=1)[0])\n return np.concatenate((int_text[:, None, :, :], label_text[:, None, :, :]), axis=1)\n\n\n# Test was not generic enough i.e. it was passing only a specific arranegement of numbers.\n# tests.test_get_batches(get_batches)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet embed_dim to the size of the embedding.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.", "# Number of Epochs\nnum_epochs = 100\n# Batch Size\nbatch_size = 256\n# RNN Size\nrnn_size = 256\n# Embedding Dimension Size\nembed_dim = 300\n# Sequence Length\nseq_length = 10\n# Learning Rate\nlearning_rate = 0.01\n# Show stats for every n number of batches\nshow_every_n_batches = 20\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. 
If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')", "Save Parameters\nSave seq_length and save_dir for generating a new TV script.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()", "Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)", "def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n return (loaded_graph.get_tensor_by_name('input:0'), loaded_graph.get_tensor_by_name('initial_state:0'), \n loaded_graph.get_tensor_by_name('final_state:0'), loaded_graph.get_tensor_by_name('probs:0'))\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)", "Choose Word\nImplement the pick_word() function to select the next word using probabilities.", "def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n probabilities = np.reshape(probabilities, (len(int_to_vocab.keys()),))\n return int_to_vocab[np.argmax(probabilities)]\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)", "Generate TV Script\nThis will generate the TV script for you. 
Set gen_length to the length of TV script you want to generate.", "gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[0][dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)", "The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckly there's more data! As we mentioned in the begging of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
adityaka/misc_scripts
python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/04_02/Begin/Select.ipynb
bsd-3-clause
[ "Select, Add, Delete, Columns", "import pandas as pd\nimport numpy as np", "dictionary like operations\ndictionary selection with string index", "cookbook_df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]})\ncookbook_df['BBB']", "arithmetic vectorized operation using string indices\ncolumn deletion\nadd a new column using a Python list\ninsert function\ndocumentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.insert.html" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
0Rick0/Fontys-DS-GCD
MongoDB.ipynb
mit
[ "MongoDB in Python", "from pymongo import MongoClient, IndexModel, ASCENDING, DESCENDING\nfrom bson.son import SON\n\ncl = MongoClient()\n\nscratch_db = cl.scratch\n\n\nfirst = scratch_db.zips.find().limit(10)\nfor item in first:\n print(item)", "Index comparison\nSee how indexes affect queries\nFirst without then with", "\nscratch_db.zips.drop_indexes()\n\ncount = scratch_db.zips.find().count()\ncity_count = scratch_db.zips.find({\"city\": \"FLAGSTAFF\"}).count()\ncity_explain = scratch_db.zips.find({\"city\": \"FLAGSTAFF\"}).explain()['executionStats']\n\nprint(count)\nprint(city_count)\nprint(city_explain)\n\nscratch_db.zips.drop_indexes()\nscratch_db.zips.create_index([(\"city\", ASCENDING)])\n\n\ncount = scratch_db.zips.find().count()\ncity_count = scratch_db.zips.find({\"city\": \"FLAGSTAFF\"}).count()\ncity_explain = scratch_db.zips.find({\"city\": \"FLAGSTAFF\"}).explain()['executionStats']\n\nprint(count)\nprint(city_count)\nprint(city_explain)", "You can see with the index it's execution is a bit different.\nSeeing the executionTimeMillis parameter shows that the second one is executed much faster.\nThis is because the index allow you to search the index instead of all the documents.\nSome other information about the dataset", "print(\"Amount of cities per state:\")\npipeline = [\n {\"$unwind\": \"$state\"},\n {\"$group\": {\"_id\": \"$state\", \"count\": {\"$sum\": 1}}},\n {\"$sort\": SON([(\"count\", -1), (\"_id\", -1)])}\n ]\nresults = scratch_db.zips.aggregate(pipeline)\nfor result in results:\n print(\"State %s: %d\" % tuple(result.values()))\n\nprint(\"Amount of cities with fewer then 50 people\")\nlt = scratch_db.zips.find({\"pop\": {\"$lt\": 50}})\nprint(\"%d cities\" % lt.count())\nfor city in lt.limit(10):\n print(\"%s: %d\" % (city['city'], city['pop']))", "Geolocation\nMongodb also has build in support for geolocation indexes\nThis allows for searching for example nearby shops for a given location", "scratch_db.zips.create_index([(\"loc\", \"2dsphere\")])\nflagstaff = scratch_db.zips.find_one({\"city\": \"FLAGSTAFF\"})\nnearby = scratch_db.zips.find({\"loc\": {\n \"$near\": {\n \"$geometry\": {\n 'type': 'Point',\n 'coordinates': flagstaff['loc']\n },\n \"$maxDistance\": 50000\n }\n}})\n\nfor city in nearby:\n print(city['city'])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/production_ml/labs/sdk_metric_parameter_tracking_for_custom_jobs.ipynb
apache-2.0
[ "Tracking Parameters and Metrics for Vertex AI Custom Training Jobs\nLearning objectives\nIn this notebook, you learn how to:\n\nTrack training parameters and prediction metrics for a custom training job.\nExtract and perform analysis for all parameters and metrics within an experiment.\n\nOverview\nThis notebook demonstrates how to track metrics and parameters for Vertex AI custom training jobs, and how to perform detailed analysis using this data.\nDataset\nThis example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone \nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook\nInstall additional packages\nInstall additional package dependencies not installed in your notebook environment.", "import os\n\n# The Google Cloud Notebook product has specific requirements\nIS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists(\"/opt/deeplearning/metadata/env_version\")\n\n# Google Cloud Notebook requires dependencies to be installed with '--user'\nUSER_FLAG = \"\"\nif IS_GOOGLE_CLOUD_NOTEBOOK:\n USER_FLAG = \"--user\"\n\n# Install additional packages\n! pip3 install -U tensorflow $USER_FLAG\n! python3 -m pip install {USER_FLAG} google-cloud-aiplatform --upgrade\n! pip3 install scikit-learn {USER_FLAG}\n", "Please ignore the incompatibility errors.\nRestart the kernel\nAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages.", "# Automatically restart kernel after installs\nimport os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Set up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nEnable the Vertex AI API and Compute Engine API.\n\n\nIf you are running this notebook locally, you will need to install the Cloud SDK.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.\nSet your project ID\nIf you don't know your project ID, you may be able to get your project ID using gcloud.", "import os\n\nPROJECT_ID = \"qwiklabs-gcp-03-aaf99941e8b2\" # Replace your project ID here \n\n# Get your Google Cloud project ID from gcloud\nif not os.getenv(\"IS_TESTING\"):\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID: \", PROJECT_ID)", "Otherwise, set your project ID here.", "if PROJECT_ID == \"\" or PROJECT_ID is None:\n PROJECT_ID = \"qwiklabs-gcp-03-aaf99941e8b2\" # Replace your project ID here", "Set gcloud config to your project ID.", "!gcloud config set project $PROJECT_ID", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. 
To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.", "# Import necessary library and define Timestamp\nfrom datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you submit a training job using the Cloud SDK, you upload a Python package\ncontaining your training code to a Cloud Storage bucket. Vertex AI runs\nthe code from this package. In this tutorial, Vertex AI also saves the\ntrained model that results from your job in the same bucket. Using this model artifact, you can then\ncreate Vertex AI model and endpoint resources in order to serve\nonline predictions.\nSet the name of your Cloud Storage bucket below. It must be unique across all\nCloud Storage buckets.\nYou may also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Make sure to choose a region where Vertex AI services are\navailable. You may\nnot use a Multi-Regional Storage bucket for training with Vertex AI.", "BUCKET_URI = \"gs://qwiklabs-gcp-03-aaf99941e8b2\" # Replace your bucket name here\nREGION = \"us-central1\" # @param {type:\"string\"}\n\nif BUCKET_URI == \"\" or BUCKET_URI is None or BUCKET_URI == \"gs://qwiklabs-gcp-03-aaf99941e8b2\": # Replace your bucket name here\n BUCKET_URI = \"gs://\" + PROJECT_ID + \"-aip-\" + TIMESTAMP\n\nif REGION == \"[your-region]\":\n REGION = \"us-central1\"", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "# Create your bucket\n! gsutil mb -l $REGION $BUCKET_URI", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! gsutil ls -al $BUCKET_URI", "Import libraries and define constants\nImport required libraries.", "# Import required libraries\nimport pandas as pd\nfrom google.cloud import aiplatform\nfrom sklearn.metrics import mean_absolute_error, mean_squared_error\nfrom tensorflow.python.keras.utils import data_utils", "Initialize Vertex AI and set an experiment\nDefine experiment name.", "EXPERIMENT_NAME = \"new\" # Give your experiment a name of you choice", "If EXEPERIMENT_NAME is not set, set a default one below:", "if EXPERIMENT_NAME == \"\" or EXPERIMENT_NAME is None:\n EXPERIMENT_NAME = \"my-experiment-\" + TIMESTAMP", "Initialize the client for Vertex AI.", "aiplatform.init(\n project=PROJECT_ID,\n location=REGION,\n staging_bucket=BUCKET_URI,\n experiment=EXPERIMENT_NAME,\n)", "Tracking parameters and metrics in Vertex AI custom training jobs\nThis example uses the Abalone Dataset. 
For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone", "Download and copy the csv file in your bucket\n!wget https://storage.googleapis.com/download.tensorflow.org/data/abalone_train.csv\n!gsutil cp abalone_train.csv {BUCKET_URI}/data/\n\ngcs_csv_path = f\"{BUCKET_URI}/data/abalone_train.csv\"", "Create a managed tabular dataset from a CSV\nA Managed dataset can be used to create an AutoML model or a custom model.", "# Create a managed tabular dataset\nds = # TODO 1: Your code goes here(display_name=\"abalone\", gcs_source=[gcs_csv_path])\n\nds.resource_name", "Write the training script\nRun the following cell to create the training script that is used in the sample custom training job.", "%%writefile training_script.py\n\nimport pandas as pd\nimport argparse\nimport os\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--epochs', dest='epochs',\n default=10, type=int,\n help='Number of epochs.')\nparser.add_argument('--num_units', dest='num_units',\n default=64, type=int,\n help='Number of unit for first layer.')\nargs = parser.parse_args()\n# uncomment and bump up replica_count for distributed training\n# strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n# tf.distribute.experimental_set_strategy(strategy)\n\ncol_names = [\"Length\", \"Diameter\", \"Height\", \"Whole weight\", \"Shucked weight\", \"Viscera weight\", \"Shell weight\", \"Age\"]\ntarget = \"Age\"\n\ndef aip_data_to_dataframe(wild_card_path):\n return pd.concat([pd.read_csv(fp.numpy().decode(), names=col_names)\n for fp in tf.data.Dataset.list_files([wild_card_path])])\n\ndef get_features_and_labels(df):\n return df.drop(target, axis=1).values, df[target].values\n\ndef data_prep(wild_card_path):\n return get_features_and_labels(aip_data_to_dataframe(wild_card_path))\n\n\nmodel = tf.keras.Sequential([layers.Dense(args.num_units), layers.Dense(1)])\nmodel.compile(loss='mse', optimizer='adam')\n\nmodel.fit(*data_prep(os.environ[\"AIP_TRAINING_DATA_URI\"]),\n epochs=args.epochs ,\n validation_data=data_prep(os.environ[\"AIP_VALIDATION_DATA_URI\"]))\nprint(model.evaluate(*data_prep(os.environ[\"AIP_TEST_DATA_URI\"])))\n\n# save as Vertex AI Managed model\ntf.saved_model.save(model, os.environ[\"AIP_MODEL_DIR\"])", "Launch a custom training job and track its training parameters on Vertex AI ML Metadata", "# Define the training parameters\njob = aiplatform.CustomTrainingJob(\n display_name=\"train-abalone-dist-1-replica\",\n script_path=\"training_script.py\",\n container_uri=\"us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest\",\n requirements=[\"gcsfs==0.7.1\"],\n model_serving_container_image_uri=\"us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest\",\n)", "Start a new experiment run to track training parameters and start the training job. Note that this operation will take around 10 mins.", "aiplatform.start_run(\"custom-training-run-1\") # Change this to your desired run name\nparameters = {\"epochs\": 10, \"num_units\": 64}\naiplatform.log_params(parameters)\n\n# Launch the training job\nmodel = # TODO 2: Your code goes here(\n ds,\n replica_count=1,\n model_display_name=\"abalone-model\",\n args=[f\"--epochs={parameters['epochs']}\", f\"--num_units={parameters['num_units']}\"],\n)", "Deploy Model and calculate prediction metrics\nDeploy model to Google Cloud. 
This operation will take 10-20 mins.", "# Deploy the model\nendpoint = # TODO 3: Your code goes here(machine_type=\"n1-standard-4\")", "Once model is deployed, perform online prediction using the abalone_test dataset and calculate prediction metrics.\nPrepare the prediction dataset.", "def read_data(uri):\n dataset_path = data_utils.get_file(\"abalone_test.data\", uri)\n col_names = [\n \"Length\",\n \"Diameter\",\n \"Height\",\n \"Whole weight\",\n \"Shucked weight\",\n \"Viscera weight\",\n \"Shell weight\",\n \"Age\",\n ]\n dataset = pd.read_csv(\n dataset_path,\n names=col_names,\n na_values=\"?\",\n comment=\"\\t\",\n sep=\",\",\n skipinitialspace=True,\n )\n return dataset\n\n\ndef get_features_and_labels(df):\n target = \"Age\"\n return df.drop(target, axis=1).values, df[target].values\n\n\ntest_dataset, test_labels = get_features_and_labels(\n read_data(\n \"https://storage.googleapis.com/download.tensorflow.org/data/abalone_test.csv\"\n )\n)", "Perform online prediction.", "# Perform online prediction using endpoint\nprediction = # TODO 4: Your code goes here(test_dataset.tolist())\nprediction", "Calculate and track prediction evaluation metrics.", "mse = mean_squared_error(test_labels, prediction.predictions)\nmae = mean_absolute_error(test_labels, prediction.predictions)\n\naiplatform.log_metrics({\"mse\": mse, \"mae\": mae})", "Extract all parameters and metrics created during this experiment.", "# Extract all parameters and metrics of the experiment\n# TODO 5: Your code goes here", "View data in the Cloud Console\nParameters and metrics can also be viewed in the Cloud Console.", "print(\"Vertex AI Experiments:\")\nprint(\n f\"https://console.cloud.google.com/ai/platform/experiments/experiments?folder=&organizationId=&project={PROJECT_ID}\"\n)", "Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\nTraining Job\nModel\nCloud Storage Bucket\n\nVertex AI Dataset\nTraining Job\nModel\nEndpoint\nCloud Storage Bucket", "# Warning: Setting this to true will delete everything in your bucket\ndelete_bucket = False\n\n# Delete dataset\nds.delete()\n\n# Delete the training job\njob.delete()\n\n# Undeploy model from endpoint\nendpoint.undeploy_all()\n\n# Delete the endpoint\nendpoint.delete()\n\n# Delete the model\nmodel.delete()\n\n\nif delete_bucket or os.getenv(\"IS_TESTING\"):\n ! gsutil -m rm -r $BUCKET_URI" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/cloudml-samples
notebooks/xgboost/HyperparameterTuningWithXGBoostInCMLE.ipynb
apache-2.0
[ "#!/usr/bin/env python\n\n# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "XGBoost HP Tuning on AI Platform\nThis notebook trains a model on Ai Platform using Hyperparameter Tuning to predict a car's Miles Per Gallon. It uses Auto MPG Data Set from UCI Machine Learning Repository.\nCitation: Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.\nHow to train your model on AI Platform with HP tuning.\nUsing HP Tuning for training can be done in a few steps:\n1. Create your python model file\n 1. Add argument parsing for the hyperparameter values. (These values are chosen for you in this notebook)\n 1. Add code to download your data from Google Cloud Storage so that AI Platform can use it\n 1. Add code to track the performance of your hyperparameter values.\n 1. Add code to export and save the model to Google Cloud Storage once AI Platform finishes training the model\n1. Prepare a package\n1. Submit the training job\nPrerequisites\nBefore you jump in, let’s cover some of the different tools you’ll be using to get HP tuning up and running on AI Platform. \nGoogle Cloud Platform lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure.\nAI Platform is a managed service that enables you to easily build machine learning models that work on any type of data, of any size.\nGoogle Cloud Storage (GCS) is a unified object storage for developers and enterprises, from live data serving to data analytics/ML to data archiving.\nCloud SDK is a command line tool which allows you to interact with Google Cloud products. In order to run this notebook, make sure that Cloud SDK is installed in the same environment as your Jupyter kernel.\nOverview of Hyperparameter Tuning - Hyperparameter tuning takes advantage of the processing infrastructure of Google Cloud Platform to test different hyperparameter configurations when training your model.\nPart 0: Setup\n\nCreate a project on GCP\nCreate a Google Cloud Storage Bucket\nEnable AI Platform Training and Prediction and Compute Engine APIs\nInstall Cloud SDK\nInstall XGBoost [Optional: used if running locally]\nInstall pandas [Optional: used if running locally]\nInstall cloudml-hypertune [Optional: used if running locally]\n\nThese variables will be needed for the following steps.\n* TRAINER_PACKAGE_PATH &lt;./auto_mpg_hp_tuning&gt; - A packaged training application that will be staged in a Google Cloud Storage location. The model file created below is placed inside this package path.\n* MAIN_TRAINER_MODULE &lt;auto_mpg_hp_tuning.train&gt; - Tells AI Platform which file to execute. This is formatted as follows <folder_name.python_file_name>\n* JOB_DIR &lt;gs://$BUCKET_ID/xgboost_learn_job_dir&gt; - The path to a Google Cloud Storage location to use for job output.\n* RUNTIME_VERSION &lt;1.9&gt; - The version of AI Platform to use for the job. 
If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information.\n* PYTHON_VERSION &lt;3.5&gt; - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7.\n* HPTUNING_CONFIG &lt;hptuning_config.yaml&gt; - Path to the job configuration file.\n Replace: \n* PROJECT_ID &lt;YOUR_PROJECT_ID&gt; - with your project's id. Use the PROJECT_ID that matches your Google Cloud Platform project.\n* BUCKET_ID &lt;YOUR_BUCKET_ID&gt; - with the bucket id you created above.\n* JOB_DIR &lt;gs://YOUR_BUCKET_ID/xgboost_job_dir&gt; - with the bucket id you created above.\n* REGION &lt;REGION&gt; - select a region from here or use the default 'us-central1'. The region is where the model will be deployed.", "# Replace <PROJECT_ID> and <BUCKET_ID> with proper Project and Bucket ID's:\n%env PROJECT_ID <PROJECT_ID>\n%env BUCKET_ID <BUCKET_ID>\n%env JOB_DIR gs://<BUCKET_ID>/xgboost_job_dir\n%env REGION us-central1\n%env TRAINER_PACKAGE_PATH ./auto_mpg_hp_tuning\n%env MAIN_TRAINER_MODULE auto_mpg_hp_tuning.train\n%env RUNTIME_VERSION 1.9\n%env PYTHON_VERSION 3.5\n%env HPTUNING_CONFIG hptuning_config.yaml\n! mkdir auto_mpg_hp_tuning", "The data\nThe Auto MPG Data Set that this sample\nuses for training is provided by the UC Irvine Machine Learning\nRepository. We have hosted the data on a public GCS bucket gs://cloud-samples-data/ml-engine/auto_mpg/. The data has been pre-processed to remove rows with incomplete data so as not to create additional steps for this notebook.\n\nTraining file is auto-mpg.data\n\nNote: Your typical development process with your own data would require you to upload your data to GCS so that AI Platform can access that data. However, in this case, we have put the data on GCS to avoid the steps of having you download the data from UC Irvine and then upload the data to GCS.\nCitation: Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.\nDisclaimer\nThis dataset is provided by a third party. Google provides no representation,\nwarranty, or other guarantees about the validity or any other aspects of this dataset.\nPart 1: Create your python model file\nFirst, we'll create the python model file (provided below) that we'll upload to AI Platform. This is similar to your normal process for creating a XGBoost model. However, there are a few key differences:\n1. Downloading the data from GCS at the start of your file, so that AI Platform can access the data.\n1. Exporting/saving the model to GCS at the end of your file, so that you can use it for predictions.\n1. Define a command-line argument in your main training module for each tuned hyperparameter.\n1. Use the value passed in those arguments to set the corresponding hyperparameter in your application's XGBoost code.\n1. Use cloudml-hypertune to track your training jobs metrics.\nThe code in this file first handles the hyperparameters passed to the file from AI Platform. Then it loads the data into a pandas DataFrame that can be used by XGBoost. Then the model is fit against the training data and the metrics for that data are shared with AI Platform. 
Lastly, Python's built in pickle library is used to save the model to a file that can be uploaded to AI Platform's prediction service.\nNote: In normal practice you would want to test your model locally on a small dataset to ensure that it works, before using it with your larger dataset on AI Platform. This avoids wasted time and costs.\nSetup the imports and helper functions", "%%writefile ./auto_mpg_hp_tuning/train.py\n\nimport argparse\nimport datetime\nimport os\nimport pandas as pd\nimport subprocess\nimport pickle\n\nfrom google.cloud import storage\nimport hypertune\nimport xgboost as xgb\nfrom random import shuffle\n\ndef split_dataframe(dataframe, rate=0.8):\n indices = dataframe.index.values.tolist()\n length = len(dataframe)\n shuffle(indices)\n train_size = int(length * rate)\n train_indices = indices[:train_size]\n test_indices = indices[train_size:]\n return dataframe.iloc[train_indices], dataframe.iloc[test_indices]\n\n", "Load the hyperparameter values that are passed to the model during training.\nIn this tutorial, the Lasso regressor is used, because it has several parameters that can be used to help demonstrate how to choose HP tuning values. (The range of values are set below in the configuration file for the HP tuning values.)", "%%writefile -a ./auto_mpg_hp_tuning/train.py\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\n '--job-dir', # handled automatically by AI Platform\n help='GCS location to write checkpoints and export models',\n required=True\n)\nparser.add_argument(\n '--max_depth', # Specified in the config file\n help='Maximum depth of the XGBoost tree. default: 3',\n default=3,\n type=int\n)\nparser.add_argument(\n '--n_estimators', # Specified in the config file\n help='Number of estimators to be created. default: 100',\n default=100,\n type=int\n)\nparser.add_argument(\n '--booster', # Specified in the config file\n help='which booster to use: gbtree, gblinear or dart. default: gbtree',\n default='gbtree',\n type=str\n)\n\nargs = parser.parse_args()\n\n", "Add code to download the data from GCS\nIn this case, using the publicly hosted data,AI Platform will then be able to use the data when training your model.", "%%writefile -a ./auto_mpg_hp_tuning/train.py\n\n# Public bucket holding the auto mpg data\nbucket = storage.Client().bucket('cloud-samples-data')\n# Path to the data inside the public bucket\nblob = bucket.blob('ml-engine/auto_mpg/auto-mpg.data')\n# Download the data\nblob.download_to_filename('auto-mpg.data')\n\n\n# ---------------------------------------\n# This is where your model code would go. 
Below is an example model using the auto mpg dataset.\n# ---------------------------------------\n# Define the format of your input data including unused columns\n# (These are the columns from the auto-mpg data files)\n\nCOLUMNS = [\n 'mpg',\n 'cylinders',\n 'displacement',\n 'horsepower',\n 'weight',\n 'acceleration',\n 'model-year',\n 'origin',\n 'car-name'\n]\n\nFEATURES = [\n 'cylinders',\n 'displacement',\n 'horsepower',\n 'weight',\n 'acceleration',\n 'model-year',\n 'origin'\n]\n\nTARGET = 'mpg'\n\n# Load the training auto mpg dataset\nwith open('./auto-mpg.data', 'r') as train_data:\n raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS, delim_whitespace=True)\n raw_training_data = raw_training_data[FEATURES + [TARGET]]\n \ntrain_df, test_df = split_dataframe(raw_training_data, 0.8)\n\n", "Use the Hyperparameters\nUse the Hyperparameter values passed in those arguments to set the corresponding hyperparameters in your application's XGBoost code.", "%%writefile -a ./auto_mpg_hp_tuning/train.py\n\n# Create the regressor, here we will use a Lasso Regressor to demonstrate the use of HP Tuning.\n# Here is where we set the variables used during HP Tuning from\n# the parameters passed into the python script\nregressor = xgb.XGBRegressor(max_depth=args.max_depth,\n n_estimators=args.n_estimators,\n booster=args.booster\n )\n\n# Transform the features and fit them to the regressor\nregressor.fit(train_df[FEATURES], train_df[TARGET])\n\n", "Report the mean accuracy as hyperparameter tuning objective metric.", "%%writefile -a ./auto_mpg_hp_tuning/train.py\n\n# Calculate the mean accuracy on the given test data and labels.\nscore = regressor.score(test_df[FEATURES], test_df[TARGET])\n\n# The default name of the metric is training/hptuning/metric. \n# We recommend that you assign a custom name. The only functional difference is that \n# if you use a custom name, you must set the hyperparameterMetricTag value in the \n# HyperparameterSpec object in your job request to match your chosen name.\n# https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#HyperparameterSpec\nhpt = hypertune.HyperTune()\nhpt.report_hyperparameter_tuning_metric(\n hyperparameter_metric_tag='my_metric_tag',\n metric_value=score,\n global_step=1000)\n\n", "Export and save the model to GCS", "%%writefile -a ./auto_mpg_hp_tuning/train.py\n\n# Export the model to a file\nmodel_filename = 'model.pkl'\nwith open(model_filename, \"wb\") as f:\n pickle.dump(regressor, f)\n\n# Example: job_dir = 'gs://BUCKET_ID/xgboost_job_dir/1'\njob_dir = args.job_dir.replace('gs://', '') # Remove the 'gs://'\n# Get the Bucket Id\nbucket_id = job_dir.split('/')[0]\n# Get the path\nbucket_path = job_dir[len('{}/'.format(bucket_id)):] # Example: 'xgboost_job_dir/1'\n\n# Upload the model to GCS\nbucket = storage.Client().bucket(bucket_id)\nblob = bucket.blob('{}/{}'.format(\n bucket_path,\n model_filename))\n\nblob.upload_from_filename(model_filename)\n\n", "Part 2: Create Trainer Package with Hyperparameter Tuning\nNext we need to build the Trainer Package, which holds all your code and dependencies need to train your model on AI Platform. 
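Once the cells in this part have run, the trainer package on disk should look roughly like this sketch (the file names are taken from the cells in Parts 1 and 2):\nauto_mpg_hp_tuning/\n __init__.py\n train.py\nhptuning_config.yaml\nsetup.py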
\nFirst, we create an empty __init__.py file.", "%%writefile ./auto_mpg_hp_tuning/__init__.py\n\n#!/usr/bin/env python\n\n# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Note that __init__.py can be an empty file.\n", "Next, we need to set the hp tuning values used to train our model. Check HyperparameterSpec for more info. \nIn this config file several key things are set:\n* maxTrials - How many training trials should be attempted to optimize the specified hyperparameters.\n* maxParallelTrials: 5 - The number of training trials to run concurrently. \n* params - The set of parameters to tune.. These are the different parameters to pass into your model and the specified ranges you wish to try.\n * parameterName - The parameter name must be unique amongst all ParameterConfigs\n * type - The type of the parameter. [INTEGER, DOUBLE, ...]\n * minValue & maxValue - The range of values that this parameter could be. \n * scaleType - How the parameter should be scaled to the hypercube. Leave unset for categorical parameters. Some kind of scaling is strongly recommended for real or integral parameters (e.g., UNIT_LINEAR_SCALE).", "%%writefile ./hptuning_config.yaml\n\n#!/usr/bin/env python\n\n# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# hyperparam.yaml\ntrainingInput:\n hyperparameters:\n goal: MAXIMIZE\n maxTrials: 30\n maxParallelTrials: 5\n hyperparameterMetricTag: my_metric_tag\n enableTrialEarlyStopping: TRUE \n params:\n - parameterName: max_depth\n type: INTEGER\n minValue: 3\n maxValue: 8\n - parameterName: n_estimators\n type: INTEGER\n minValue: 50\n maxValue: 200\n - parameterName: booster\n type: CATEGORICAL\n categoricalValues: [\n \"gbtree\",\n \"gblinear\",\n \"dart\"\n ]\n\n", "Lastly, we need to install the dependencies used in our model. 
Check adding_standard_pypi_dependencies for more info.\nTo do this, AI Platform uses a setup.py file to install your dependencies.", "%%writefile ./setup.py\n\n#!/usr/bin/env python\n\n# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nREQUIRED_PACKAGES = ['cloudml-hypertune']\n\nsetup(\n name='auto_mpg_hp_tuning',\n version='0.1',\n install_requires=REQUIRED_PACKAGES,\n packages=find_packages(),\n include_package_data=True,\n description='Auto MPG XGBoost HP tuning training application'\n)\n", "Part 3: Submit Training Job\nNext we need to submit the job for training on AI Platform. We'll use gcloud to submit the job which has the following flags:\n\njob-name - A name to use for the job (mixed-case letters, numbers, and underscores only, starting with a letter). In this case: auto_mpg_hp_tuning_$(date +\"%Y%m%d_%H%M%S\")\njob-dir - The path to a Google Cloud Storage location to use for job output.\npackage-path - A packaged training application that is staged in a Google Cloud Storage location. If you are using the gcloud command-line tool, this step is largely automated.\nmodule-name - The name of the main module in your trainer package. The main module is the Python file you call to start the application. If you use the gcloud command to submit your job, specify the main module name in the --module-name argument. Refer to Python Packages to figure out the module name.\nregion - The Google Cloud Compute region where you want your job to run. You should run your training job in the same region as the Cloud Storage bucket that stores your training data. Select a region from here or use the default 'us-central1'.\nruntime-version - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information.\npython-version - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7.\nscale-tier - A scale tier specifying the type of processing cluster to run your job on. This can be the CUSTOM scale tier, in which case you also explicitly specify the number and type of machines to use.\nconfig - Path to the job configuration file. This file should be a YAML document (JSON also accepted) containing a Job resource as defined in the API\n\nNote: Check to make sure gcloud is set to the current PROJECT_ID", "! gcloud config set project $PROJECT_ID", "Submit the training job.", "! 
gcloud ml-engine jobs submit training auto_mpg_hp_tuning_$(date +\"%Y%m%d_%H%M%S\") \\\n --job-dir $JOB_DIR \\\n --package-path $TRAINER_PACKAGE_PATH \\\n --module-name $MAIN_TRAINER_MODULE \\\n --region $REGION \\\n --runtime-version=$RUNTIME_VERSION \\\n --python-version=$PYTHON_VERSION \\\n --scale-tier basic \\\n --config $HPTUNING_CONFIG", "[Optional] StackDriver Logging\nYou can view the logs for your training job:\n1. Go to https://console.cloud.google.com/\n1. Select \"Logging\" in left-hand pane\n1. In left-hand pane, go to \"AI Platform\" and select Jobs\n1. In filter by prefix, use the value of $JOB_NAME to view the logs\nOn the logging page of your model, you can view the different results for each HP tuning job. \nExample:\n{\n \"trialId\": \"15\",\n \"hyperparameters\": {\n \"booster\": \"dart\",\n \"max_depth\": \"7\",\n \"n_estimators\": \"102\"\n },\n \"finalMetric\": {\n \"trainingStep\": \"1000\",\n \"objectiveValue\": 0.9259230441279733\n }\n}\n[Optional] Verify Model File in GCS\nView the contents of the destination model folder to verify that all 30 model files have indeed been uploaded to GCS.\nNote: The model can take a few minutes to train and show up in GCS.", "! gsutil ls $JOB_DIR/*" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
BL-Labs/poetryhunt
Clustering running notepad.ipynb
mit
[ "Clustering experiments\nI hope that by interrogating various ways of looking at the newspaper text placement and the way it is aligned on a page, that some sort of grouping might surface. From the selection of poetry, it seems that a poem is likely to have an aligned left edge to the text, but a more wildly varying left edge.\n'clustering.py' can create a database of vectors for a given date range slice of the (readable) Burney newspaper archive. This vector can then be used to investigate various coorelations to see if, in fact, it is possible to cluster the text columns in such a way that poetry is very likely to be found near each other.\nFurther to this, one we have a means of creating interesting clusters of text, we can ask it about other data and find out which cluster it would put the new data. If we find a cluster that is by majority poetry, then if it puts new data into this cluster, we can have a level of confidence that the new data is also like these and a poem.\nPlan:\nIterate through the following steps:\n\nPull or derive a set of interesting types of numbers from the dataset. Each block of text will have a set of these numbers (a 'vector').\nCreate a suitable number of clusters using two (though hopefully more) of these types to test.\nCheck to see if these clusters are sensible and are not arbitrary in nature subjectively.\nGiven the set of found poems, see into which clusters the poems get assigned.\nIf a high % of the poems get assigned to a single cluster -> Success! Focus on this!\nOtherwise, try again from the top.", "%matplotlib inline\nimport mpld3\nmpld3.enable_notebook()\n\n# Get the dataset:\nfrom clustering import create_cluster_dataset, NewspaperArchive\nDBFILE = \"1749_1750_no_drift.db\"\nn = NewspaperArchive()\nds = create_cluster_dataset(n, daterange = [1749, 1750], dbfile = DBFILE)\n", "What do these 'vectors' look like? What do the columns refer to?", "data, transform, id_list = ds\nprint(data.toarray())\n\nprint(transform.get_feature_names())", "Going from a vector back to the metadata reference:\nBy keeping an 'id_list', we can look up the identifier for any vector in the list from the database we've made for this clustering attemp. This lets us look up what the reference for that is, and where we can find it:", "from clustering import ClusterDB\ndb = ClusterDB(DBFILE)\nprint(dict(db.vecidtoitem(id_list[-1])))\nprint(data.toarray()[-1])\n\nfrom burney_data import BurneyDB\nbdb = BurneyDB(\"burney.db\")\n\nbdb.get_title_row(titleAbbreviation=\"B0574REMEMBRA\")", "Initial data woes\nThere was a considerable discrepancy between the x1 average indent and the column \"box\" left edge. Looking at the data, the presence of a few outliers can really affect this value. Omitting the 2 smallest and largest x values might be enough to avoid this biasing the sample too badly.\nAlso, the initial 'drift correction' (adjustments made to correct warped or curved columns) seemed to add more issues than it solved, so the dataset was remade without it.", "from scipy import cluster\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\n# Where is the K-means 'elbow'?\n# Try between 1 and 10\n# use only the x1 and x2 variences\nvset = [cluster.vq.kmeans(data.toarray()[:, [3,6]], i) for i in range(1,10)]\nplt.plot([v for (c,v) in vset])\nplt.show()", "Seems the elbow is quite wide and not sharply defined, based on just the line variences. 
Let's see what it looks like in general.", "# Mask off leaving just the front and end variance columns\nnpdata = data.toarray()\nmask = np.ones((8), dtype=bool)\nmask[[0,1,2,4,5,7]] = False\n\nmarray = npdata[:,mask]", "x1 vs x2 varience?\nWhat is the rough shape of this data? The varience of x1 and x2 are equivalent to the left and right alignment of the text varies in a given block of text.", "plt.scatter(marray[:,0], marray[:,1])\nplt.show()", "Attempting K-Means\nWhat sort of clustering algorithm to employ is actually a good question. K-means can give fairly meaningless responses if the data is of a given sort. Generally, it can be useful but cannot be used blindly. \nGiven the data above, it might be a good start however.", "#trying a different KMeans\nfrom sklearn.cluster import KMeans\n\nestimators = {'k_means_3': KMeans(n_clusters=3),\n 'k_means_5': KMeans(n_clusters=5),\n 'k_means_8': KMeans(n_clusters=8),}\n\nfignum = 1\nfor name, est in estimators.items():\n fig = plt.figure(fignum, figsize=(8, 8))\n plt.clf()\n plt.cla()\n est.fit(marray)\n labels = est.labels_\n \n plt.scatter(marray[:,0], marray[:,1], c=labels.astype(np.float))\n fignum = fignum + 1\nplt.show()", "Interesting!\nThe lack of really well defined clusters bolstered the \"elbow\" test above. K-means is likely not put to good use here, with just these two variables.\nThe left edge of the scatterplot is a region that contains those blocks of text with lines aligned to the left edge of the paper's column, but have some considerable variation to the length of the line.\nFor example, I'd expect text looking like the following:\nQui quis at ex voluptatibus cupiditate quod quia. \nQuas fuga quasi sit mollitia quos atque. Saepe atque officia sed dolorem. \nNumquam quas aperiam eaque nam sunt itaque est. Sed expedita \nmaxime fugiat mollitia error necessitatibus quam soluta. Amet laborum eius\nsequi quae sit sit.\n\nThis is promising (as long as the data is realistic and there isn't a bug in generating that...)\nNow, I wonder if including the \"margin\" (x1ave-ledge: average x1 coordinate minus the leftmost edge) might help find or distinguish these further?", "mpld3.disable_notebook() # switch off the interactive graph functionality which doesn't work well with the 3D library\n\nfrom mpl_toolkits.mplot3d import Axes3D\n\nX = npdata[:, [3,5,6]]\n\nfignum = 1\nfor name, est in estimators.items():\n fig = plt.figure(fignum, figsize=(8, 8))\n plt.clf()\n ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=5, azim=30)\n plt.cla()\n est.fit(X)\n labels = est.labels_\n \n ax.scatter(X[:,0], X[:,2], X[:,1], c=labels.astype(np.float))\n \n ax.set_xlabel('x1 varience')\n ax.set_ylabel('x2 varience')\n ax.set_zlabel('Average indent')\n \n fignum = fignum + 1\nplt.show()", "How about the area density? In other words, what does it look like if the total area of the block is compared to the area taken up by just the words themselves?", "X = npdata[:, [3,0,6]]\n\nfignum = 1\nfor name, est in estimators.items():\n fig = plt.figure(fignum, figsize=(8, 8))\n plt.clf()\n ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=25, azim=40)\n plt.cla()\n est.fit(X)\n labels = est.labels_\n \n ax.scatter(X[:,0], X[:,2], X[:,1], c=labels.astype(np.float))\n \n ax.set_xlabel('x1 varience')\n ax.set_ylabel('x2 varience')\n ax.set_zlabel('Density')\n \n fignum = fignum + 1\nplt.show()", "More outliers skewing the results. 
This time it is blocks with nearly zero variance at either end but a huge amount of letter area attributed to them by the OCR, even though they sweep out a very small overall area. Perhaps mask out the columns which aren't actually columns but dividers mistaken for text, i.e. skip all blocks that are narrower than about 100px. Another way might be to ignore blocks which are under approximately 40 words (40 words * 5 characters).", "mask = npdata[:,1] > 40 * 5 # mask based on the ltcount value\nprint(mask)\nprint(\"Amount of vectors: {0}, Vectors with ltcount <= 40 * 5: {1}\".format(len(npdata), sum([1 for item in mask if item == False])))\n\nm_npdata = npdata[mask, :]\nX = m_npdata[:, [3,0,6]]\n\n# Let's just plot one graph to see:\n\nest = estimators['k_means_8']\nfig = plt.figure(fignum, figsize=(8, 8))\nplt.clf()\nax = Axes3D(fig, rect=[0, 0, .95, 1], elev=25, azim=40)\nplt.cla()\nest.fit(X)\nlabels = est.labels_\n \nax.scatter(X[:,0], X[:,2], X[:,1], c=labels.astype(np.float))\n \nax.set_xlabel('x1 variance')\nax.set_ylabel('x2 variance')\nax.set_zlabel('Density')\n \nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
M0nica/python-foundations-hw
07/billionaires.ipynb
mit
[ "import pandas as pd\n\ndf = pd.read_excel(\"richpeople.xlsx\")\nprint(df)", "What country are most billionaires from? For the top ones, how many billionaires per billion people?", "df['citizenship'].value_counts().head()\n\ndf.groupby('citizenship')['networthusbillion'].sum().sort_values(ascending=False)\n\nus_pop = 318.9 #billion (2014)\nus_bill = df[df['citizenship'] == 'United States']\nprint(\"There are\", us_pop/len(us_bill), \"billionaires per billion people in the United States.\")\n\n\ngerm_pop = 0.08062 #(2013)\ngerm_bill = df[df['citizenship'] == 'Germany']\nprint(\"There are\", germ_pop/len(germ_bill), \"billionaires per billion people in Germany.\")\n\n\nchina_pop = 1.357 #(2013)\nchina_bill = df[df['citizenship'] == 'China']\nprint(\"There are\", china_pop/len(china_bill), \"billionaires per billion people in China.\")\n\n\nrussia_pop = 0.1435 #(2013)\nrussia_bill = df[df['citizenship'] == 'Russia']\nprint(\"There are\", russia_pop/len(russia_bill), \"billionaires per billion people in Russia.\")\n\njapan_pop = 0.1273 # 2013 \njapan_bill = df[df['citizenship'] == 'Japan']\nprint(\"There are\", japan_pop/len(japan_bill), \"billionaires per billion people in Japan.\")\n\nprint(df.columns)", "Who are the top 10 richest billionaires?", "recent = df[df['year'] == 2014]\n# if it is not recent then there are duplicates for diff years\nrecent.sort_values('rank').head(10)\n\nrecent['networthusbillion'].describe()", "What's the average wealth of a billionaire? Male? Female?\nWhat's the average wealth of a billionaire? Male? Female", "print(\"The average wealth of a billionaire is\", recent['networthusbillion'].mean(), \"billion dollars\")\n\nmale = recent[(recent['gender'] == 'male')]\nfemale = recent[(recent['gender'] == 'female')]\n\n\nprint(\"The average wealth of a male billionaire is\", male['networthusbillion'].mean(), \"billion dollars\")\nprint(\"The average wealth of a female billionaire is\", female['networthusbillion'].mean(), \"billion dollars\")\n", "Who is the poorest billionaire?", "recent.sort_values('networthusbillion').head(1)\n\n# Who are the top 10 poorest billionaires?\n\n# Who are the top 10 poorest billionaires\n\nrecent.sort_values('networthusbillion').head(10)\n\n# 'What is relationship to company'? And what are the most common relationships?\n\n#top 10 most common relationships to company\ndf['relationshiptocompany'].value_counts().head(10)\n\n# Most common source of wealth? Male vs. female?\n\n# Most common source of wealth? Male vs. female\n\nprint(\"The most common source of wealth is\", df['sourceofwealth'].value_counts().head(1))\nprint(\"The most common source of wealth for males is\", male['sourceofwealth'].value_counts().head(1))\nprint(\"The most common source of wealth for females is\", female['sourceofwealth'].value_counts().head(1))\n\n#need to figure out how to extract just the number nd not the data type 'Name: sourceofwealth, dtype: int64'", "Given the richest person in a country, what % of the GDP is their wealth?", "richest = df[df['citizenship'] == 'United States'].sort_values('rank').head(1)['networthusbillion'].to_dict()\n# richest['networthusbillion']\nrichest[282]\n\n## I JUST WANT THE VALUE -- 18.5.\n\n## 16.77 TRILLION\nUS_GDP = 1.677 * (10^13)\nUS_GDP", "Add up the wealth of all of the billionaires in a given country (or a few countries) and then compare it to the GDP of the country, or other billionaires, so like pit the US vs India\nWhat are the most common industries for billionaires to come from? 
What's the total amount of billionaire money from each industry?", "recent['sector'].value_counts().head(10)\n\ndf.groupby('sector')['networthusbillion'].sum()", "How many self made billionaires vs. others?", "(recent['selfmade'] == 'self-made').value_counts()", "How old are billionaires?", "# recent['age'].value_counts().sort_values()\nprint(\"The average billionnaire is\", round(recent['age'].mean()), \"years old.\")", "How old are billionaires self made vs. non self made?", "df.groupby('selfmade')['age'].mean()\n\n # or different industries?\n\ndf.groupby('sector')['age'].mean()\n\n#youngest billionnaires \nrecent.sort_values('age').head(10)\n\n#oldest billionnaires \nrecent.sort_values('age', ascending =False).head(10)\n\n#Age distribution - maybe make a graph about it?\n\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n# This will scream we don't have matplotlib.\nhis = df['age'].hist(range=[0, 100])\n\n\nhis.set_title('Distribution of Age Amongst Billionaires')\nhis.set_xlabel('Age(years)')\nhis.set_ylabel('# of Billionnaires')\n\n# Maybe just made a graph about how wealthy they are in general?\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n# This will scream we don't have matplotlib.\n\n\nhis = df['networthusbillion'].hist(range=[0, 45])\n\n\nhis.set_title('Distribution of Wealth Amongst Billionaires')\nhis.set_xlabel('Wealth(Billions)')\nhis.set_ylabel('# of Billionnaires')", "Maybe plot their net worth vs age (scatterplot)\nMake a bar graph of the top 10 or 20 richest", "recent.plot(kind='scatter', x='networthusbillion', y='age')\n\nrecent.plot(kind='scatter', x='age', y='networthusbillion', alpha = 0.2)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Abjad/intensive
day-3/1-making-music.ipynb
mit
[ "%reload_ext abjadext.ipython\nimport abjad\nfrom abjadext import rmakers", "Designing a music-generating class\nThe rhythm-makers we studied yesterday help us think about rhythm in a formal way. Today we'll extend the rhythm-makers' pattern with pitches, articulations and dynamics. In this notebook we'll develop the code we need; in the next notebook we'll encapsulate our work in a class.\nMaking notes and rests\nWe start by re-implementing the basic note-making functionality of the talea rhythm-maker \"by hand.\" Beginning with a talea object that models a cycle of durations:", "pairs = [(4, 4), (3, 4), (7, 16), (6, 8)]\ntime_signatures = [abjad.TimeSignature(_) for _ in pairs]\ndurations = [_.duration for _ in time_signatures]\ntime_signature_total = sum(durations)\ncounts = [1, 2, -3, 4]\ndenominator = 16\ntalea = rmakers.Talea(counts, denominator)\ntalea_index = 0", "We can ask our talea for as many durations as we want. (Taleas output nonreduced fractions instead of durations. This is to allow talea output to model either durations or time signatures, depending on the application.) We include some negative values, which we will later interpret as rests. We can ask our talea for ten durations like this:", "talea[:10]", "Let's use our talea to make notes and rests, stopping when the duration of the accumulated notes and rests sums to that of the four time signatures defined above:", "events = []\naccumulated_duration = abjad.Duration(0)\nwhile accumulated_duration < time_signature_total:\n duration = talea[talea_index]\n if 0 < duration: \n pitch = abjad.NamedPitch(\"c'\")\n else:\n pitch = None\n duration = abs(duration)\n if time_signature_total < (duration + accumulated_duration):\n duration = time_signature_total - accumulated_duration\n events_ = abjad.LeafMaker()([pitch], [duration])\n events.extend(events_)\n accumulated_duration += duration\n talea_index += 1\nstaff = abjad.Staff(events)\nabjad.show(staff)", "To attach the four time signatures defined above, we must split our notes and rests at measure boundaries. Then we can attach a time signature to the first note or rest in each of the four selections that result:", "selections = abjad.mutate.split(staff[:], time_signatures, cyclic=True)\nfor time_signature, selection in zip(time_signatures, selections):\n first_leaf = abjad.get.leaf(selection, 0)\n abjad.attach(time_signature, first_leaf)\nabjad.show(staff)", "Then we group our notes and rests by measure, and metrically respell each group:", "measure_selections = abjad.select(staff).leaves().group_by_measure()\nfor time_signature, measure_selection in zip(time_signatures, measure_selections):\n abjad.Meter.rewrite_meter(measure_selection, time_signature)\nabjad.show(staff)", "Pitching notes\nWe can pitch our notes however we like. 
First we define a cycle of pitches:", "string = \"d' fs' a' d'' g' ef'\"\nstrings = string.split()\npitches = abjad.CyclicTuple(strings)", "Then we loop through pitched logical ties, pitching notes as we go:", "plts = abjad.select(staff).logical_ties(pitched=True)\nfor i, plt in enumerate(plts):\n pitch = pitches[i]\n for note in plt:\n note.written_pitch = pitch\nabjad.show(staff)", "Attaching articulations and dynamics\nAbjad's run selector selects notes and chords, separated by rests:", "for selection in abjad.select(staff).runs():\n print(selection)", "We can use Abjad's run selector to loop through the runs in our music, attaching articulations and dynamics along the way:", "for selection in abjad.select(staff).runs():\n articulation = abjad.Articulation(\"tenuto\")\n abjad.attach(articulation, selection[0])\n if 3 <= len(selection):\n abjad.hairpin(\"p < f\", selection)\n else:\n dynamic = abjad.Dynamic(\"ppp\")\n abjad.attach(dynamic, selection[0])\nabjad.override(staff).dynamic_line_spanner.staff_padding = 4\nabjad.show(staff)", "In the next notebook we'll encapsulate our work in functions.\n\nContributed: Treviño (2.21); revised: Bača (3.2)." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
changhoonhahn/centralMS
centralms/notebooks/notes_SFRmpajhu_uncertainty.ipynb
mit
[ "Measurement uncertainties from the total SFR from the MPA-JHU catalog\nWe're interested in quantifying the measurement uncertainty from repeat spectra in Brinchmann's SFR catalog", "import numpy as np \nimport scipy as sp \n\nimport env\nimport util as UT\nfrom ChangTools.fitstables import mrdfits\n\nfrom pydl.pydlutils.spheregroup import spherematch\n\nimport matplotlib as mpl \nimport matplotlib.pyplot as plt \nmpl.rcParams['text.usetex'] = True\nmpl.rcParams['font.family'] = 'serif'\nmpl.rcParams['axes.linewidth'] = 1.5\nmpl.rcParams['axes.xmargin'] = 1\nmpl.rcParams['xtick.labelsize'] = 'x-large'\nmpl.rcParams['xtick.major.size'] = 5\nmpl.rcParams['xtick.major.width'] = 1.5\nmpl.rcParams['ytick.labelsize'] = 'x-large'\nmpl.rcParams['ytick.major.size'] = 5\nmpl.rcParams['ytick.major.width'] = 1.5\nmpl.rcParams['legend.frameon'] = False\n%matplotlib inline", "Read in the total SFRs from https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/sfrs.html . These SFRs are derived from spectra but later aperture corrected using Salim et al.(2007)'s method.", "# data with the galaxy information\ndata_gals = mrdfits(UT.dat_dir()+'gal_info_dr7_v5_2.fit.gz')\n# data with the SFR information \ndata_sfrs = mrdfits(UT.dat_dir()+'gal_totsfr_dr7_v5_2.fits.gz')\n\nif len(data_gals.ra) != len(data_sfrs.median):\n raise ValueError(\"the data should have the same number of galaxies\")", "spherematch using 3'' for 10,000 galaxies. Otherwise laptop explodes.", "#ngal = len(data_gals.ra)\nngal = 10000\n\nmatches = spherematch(data_gals.ra[:10000], data_gals.dec[:10000], \n data_gals.ra[:10000], data_gals.dec[:10000], \n 0.000833333, maxmatch=0)\n\nm0, m1, d_m = matches\n\nn_matches = np.zeros(ngal)\nsfr_list = [[] for i in range(ngal)]\n\nfor i in range(ngal): \n ism = (i == m0)\n n_matches[i] = np.sum(ism)\n if n_matches[i] > 1: \n #print '#', data_gals.ra[i], data_gals.dec[i], data_sfrs.median[i]\n sfr_list[i] = data_sfrs.median[m1[np.where(ism)]]\n #for r,d,s in zip(data_gals.ra[m1[np.where(ism)]], data_gals.dec[m1[np.where(ism)]], data_sfrs.median[m1[np.where(ism)]]): \n # print r, d, s\n #sfr_list[i] = data_sfrs.median[:10000][ism]\n\nfor i in np.where(n_matches > 1)[0][:5]: \n print sfr_list[i] \n print np.mean(sfr_list[i]), np.std(sfr_list[i])\n\nfig = plt.figure()\nsub = fig.add_subplot(111)\nsigs = []\nfor i in np.where(n_matches > 1)[0]: \n if -99. in sfr_list[i]:\n continue\n sub.scatter([np.mean(sfr_list[i])], [np.std(sfr_list[i], ddof=1)], c='k', s=2)\n sigs.append(np.std(sfr_list[i], ddof=1))\nsub.set_xlim([-3., 3.])\nsub.set_xlabel('log SFR', fontsize=25)\nsub.set_ylim([0., 0.6])\nsub.set_ylabel('$\\sigma_\\mathrm{log\\,SFR}$', fontsize=25)\nplt.show()\n\nplt.hist(np.array(sigs), bins=40, range=[0.0, 0.6], normed=True, histtype='step')\nplt.xlim([0., 0.6])\nplt.xlabel('$\\sigma_\\mathrm{log\\,SFR}$', fontsize=25)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
chapmanbe/nlm_clinical_nlp
IntroductionToPyConTextNLP.ipynb
mit
[ "Simple Clinical Natural Language Processing with pyConTextNLP\nIn this notebook we introduce the basics of pyConTextNLP, a simple Python tool that we have used extensively for processing clinical text, including radiology, psychiatry, etc.\npyConTextNLP is built around the concept of targets and modifiers: the target is the concept we are interested in identifying (like a cough or a pulmonary embolism); a modifier is a concept that changes the target in some sense (e.g. historical, severity, certainty, negation).\npyConTextNLP relies on regular expressions to identify concepts (both targets and modifiers) within a sentence and then uses simple lexical rules to assign relationships between the identified targets and modifiers. Internally, pyConTextNLP uses graphs. Targets and modifiers are nodes in the graph and relationships between modifiers and targets are edges in the graph.\nSpecifying targets, modifiers, and rules\npyConTextNLP uses a four-tuple to represent concepts. Within the program we create an instance of an itemData class. Each itemData consists of the following four attributres:\n\nA literal (e.g. \"pulmonary embolism\", \"no definite evidence of\"): This is a lingustic representation of the target or modifier we want to identify\nA category (e.g. \"CRITICAL_FINDING\", \"PROBABLE_EXISTENCE\"): This is the label we want applied to the literal when we see it in text\nA regular expression that defines how to identify the literal concept. If no regular expression is specified, a regular expression will be built directly from the literal by wrapping it with word boundaries (e.g. r\"\"\"\\bpulmonary embolism\\b\"\"\")\nA rule that defines how the concept works in the sentence (e.g. a negation term that looks forward in the sentence). this only applies to modifiers.", "!pip install -U pycontextnlp==0.6.1.1\n\nimport pyConTextNLP.pyConTextGraph as pyConText\nimport pyConTextNLP.itemData as itemData", "The task: Identify patients with pulmonary embolism from radiology reports\nStep 1: how is the concept of pulmonary embolism represented in the reports - fill in the list below with literals you want to use.", "mytargets = itemData.itemData()\nmytargets.extend([[\"pulmonary embolism\", \"CRITICAL_FINDING\", \"\", \"\"],\n [\"pneumonia\", \"CRITICAL_FINDING\", \"\", \"\"]])\n\nprint(mytargets)\n\n!pip install -U radnlp==0.2.0.8", "Sentence Splitting\npyConTextNLP operates on a sentence level and so the first step we need to take is to split our document into individual sentences. pyConTextNLP comes with a simple sentence splitter class.", "import pyConTextNLP.helpers as helpers\nspliter = helpers.sentenceSplitter()\nspliter.splitSentences(\"This is Dr. Chapman's first sentence. This is the 2.0 sentence.\")\n", "However, sentence splitting is a common NLP task and so most full-fledged NLP applications provide sentence splitters. We usually rely on the sentence splitter that is part of the TextBlob package, which in turn relies on the Natural Language Toolkit (NLTK). So before proceeding we need to download some NLTK resources with the following command.", "!python -m textblob.download_corpora" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
elastic/examples
Machine Learning/Query Optimization/notebooks/Appendix B - Combining queries.ipynb
apache-2.0
[ "Appendix B: Combining queries\nIn this example, we'll walk through the process of building a more complex query for the MSMARCO Document dataset. This assumes that you are familiar with the other query tuning notebooks. We'll be using the cross_fields and best_fields queries and the optimal parameters found as the foundation on which we build a more complex query.\nAs with the previous notebook and in accordance with the MSMARCO Document ranking task, we'll continue to use MRR@100 on the dev dataset for evaluation and comparison with other approaches.", "%load_ext autoreload\n%autoreload 2\n\nimport importlib\nimport os\nimport sys\n\nfrom elasticsearch import Elasticsearch\nfrom skopt.plots import plot_objective\n\n# project library\nsys.path.insert(0, os.path.abspath('..'))\n\nimport qopt\nimportlib.reload(qopt)\n\nfrom qopt.notebooks import evaluate_mrr100_dev, optimize_query_mrr100\nfrom qopt.optimize import Config\n\n# use a local Elasticsearch or Cloud instance (https://cloud.elastic.co/)\n# es = Elasticsearch('http://localhost:9200')\nes = Elasticsearch('http://35.246.195.44:9200')\n\n# set the parallelization parameter `max_concurrent_searches` for the Rank Evaluation API calls\n# max_concurrent_searches = 10\nmax_concurrent_searches = 30\n\nindex = 'msmarco-document'\ntemplate_id = 'combined'", "Combining cross_fields and best_fields\nBased on previous tuning, we have the following optimal parameters for each multi_match query type.", "cross_fields_params = {\n 'operator': 'OR',\n 'minimum_should_match': 50,\n 'tie_breaker': 0.25,\n 'url|boost': 1.0129720302556104,\n 'title|boost': 5.818478716515356,\n 'body|boost': 3.736613263685484,\n}\n\nbest_fields_params = {\n 'tie_breaker': 0.3936135232328522,\n 'url|boost': 0.0,\n 'title|boost': 8.63280262513067,\n 'body|boost': 10.0,\n}", "We've seen the process to optimize field boosts on two different multi_match queries but it would be interesting to see if combining them in some way might actually result in even better MRR@100. Let's give it a shot and find out.\nSide note: Combining queries where each sub-query is always executed may improve relevance but it will hurt performance and the query times will be quite a lot higher than with a single, simpler query. Keep this in mind when building complex queries for production!", "def prefix_keys(d, prefix):\n return {f'{prefix}{k}': v for k, v in d.items()}\n\n# prefix key of each sub-query\n# add default boosts\nall_params = {\n **prefix_keys(cross_fields_params, 'cross_fields|'),\n 'cross_fields|boost': 1.0,\n **prefix_keys(best_fields_params, 'best_fields|'),\n 'best_fields|boost': 1.0,\n}\nall_params", "Baseline evaluation", "%%time\n\n_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=all_params)", "Query tuning\nHere we'll just tune the boosts for each sub-query. 
Note that this takes twice as long as tuning individual queries because we have two queries combined.", "%%time\n\n_, _, final_params_boosts, metadata_boosts = optimize_query_mrr100(es, max_concurrent_searches, index, template_id,\n config_space=Config.parse({\n 'num_iterations': 30,\n 'num_initial_points': 15,\n 'space': {\n 'cross_fields|boost': { 'low': 0.0, 'high': 5.0 },\n 'best_fields|boost': { 'low': 0.0, 'high': 5.0 },\n },\n 'default': all_params,\n }))\n\n_ = plot_objective(metadata_boosts, sample_source='result')", "Seems that there's not much to tune here, but let's keep going.", "%%time\n\n_ = evaluate_mrr100_dev(es, max_concurrent_searches, index, template_id, params=final_params_boosts)", "So that's the same as without tuning. What's going on?\nDebugging\nPlot scores from each sub-query to determine why we don't really see an improvement over individual queries.", "import matplotlib.pyplot as plt\nfrom itertools import chain\n\nfrom qopt.notebooks import ROOT_DIR\nfrom qopt.search import temporary_search_template, search_template\nfrom qopt.trec import load_queries_as_tuple_list, load_qrels\n\ndef collect_scores():\n \n def _search(template_id, query_string, params, doc_id):\n res = search_template(es, index, template_id, query={\n 'id': 0,\n 'params': {\n 'query_string': query_string,\n **params,\n },\n })\n return [hit['score'] for hit in res['hits'] if hit['id'] == doc_id]\n\n queries = load_queries_as_tuple_list(os.path.join(ROOT_DIR, 'data', 'msmarco-document-sampled-queries.1000.tsv'))\n qrels = load_qrels(os.path.join(ROOT_DIR, 'data', 'msmarco', 'document', 'msmarco-doctrain-qrels.tsv'))\n template_file = os.path.join(ROOT_DIR, 'config', 'msmarco-document-templates.json')\n size = 100\n \n cross_field_scores = []\n best_field_scores = []\n \n with temporary_search_template(es, template_file, 'cross_fields', size) as cross_fields_template_id:\n with temporary_search_template(es, template_file, 'best_fields', size) as best_fields_template_id:\n for query in queries:\n doc_id = list(qrels[query[0]].keys())[0]\n \n cfs = _search(cross_fields_template_id, query[1], cross_fields_params, doc_id)\n bfs = _search(best_fields_template_id, query[1], best_fields_params, doc_id)\n\n # keep just n scores to make sure the lists are the same length\n length = min(len(cfs), len(bfs))\n cross_field_scores.append(cfs[:length])\n best_field_scores.append(bfs[:length])\n\n return cross_field_scores, best_field_scores\n\ncfs, bfs = collect_scores()\n\n# plot scores\ncfs_flat = list(chain(*cfs))\nbfs_flat = list(chain(*bfs))\n\nplt.scatter(cfs_flat, bfs_flat)\nplt.show()", "It looks like combining queries really won't do much. Seems that the scores from each sub-query are already pretty aligned and we can't benefit from learning the boost values which tell us how to combine scores." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jeancochrane/learning
python-machine-learning/code/ch04exercises.ipynb
mit
[ "Exercises for Chapter 4\nImputing missing data\nLoad the Pima diabetes dataset as a pandas dataframe. (Note that the data does not include a header row. You'll have to build that yourself based on the documentation.)", "import pandas\n\nnames = ['num_times_pregnant', 'glucose_concentration',\n 'blood_pressure', 'skin_fold_thickness', 'insulin',\n 'bmi', 'diabetes_pedigree', 'age', 'target']\n\ndata_url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/' +\\\n 'pima-indians-diabetes/pima-indians-diabetes.data'\n\ndf = pandas.read_csv(data_url, header=None, index_col=False, names=names)\nprint(df)", "Check the dataframe to see which columns contain 0's. Based on the data type of each column, do these 0's all make sense? Which 0's are suspicious?", "for name in names:\n print(name, ':', any(df.loc[:, name] == 0))", "Answer: Columns 2-6 (glucose, blood pressure, skin fold thickness, insulin, and BMI) all contain zeros, but none of these measurements should ever be 0 in a human.\nAssume that 0s indiciate missing values, and fix them in the dataset by eliminating samples with missing features. Then run a logistic regression, and measure the performance of the model.", "import numpy as np\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.linear_model import LogisticRegression\n\nfor i in range(1,6):\n df.loc[df.loc[:, names[i]] == 0, names[i]] = np.nan\n\ndf_no_nan = df.dropna(axis=0, how='any')\n\nX = df_no_nan.iloc[:, :8].values\ny = df_no_nan.iloc[:, 8].values\n\ndef fit_and_score_rlr(X, y, normalize=True):\n \n if normalize:\n scaler = StandardScaler().fit(X)\n X_std = scaler.transform(X)\n else:\n X_std = X\n \n X_train, X_test, y_train, y_test = train_test_split(X_std, y,\n test_size=0.33,\n random_state=42)\n\n rlr = LogisticRegression(C=1)\n\n rlr.fit(X_train, y_train)\n return rlr.score(X_test, y_test)\n\nfit_and_score_rlr(X, y)", "Next, replace missing features through mean imputation. Run a regression and measure the performance of the model.", "from sklearn.preprocessing import Imputer\n\nimputer = Imputer(missing_values='NaN', strategy='mean', axis=1)\nX = imputer.fit_transform(df.iloc[:, :8].values)\n\ny = df.iloc[:, 8].values\n\nfit_and_score_rlr(X, y)", "Comment on your results.\nAnswer: Interestingly, there's not a huge performance improvement between the two approaches! In my run, using mean imputation corresponded to about a 3 point increase in model performance. Some ideas for why this might be:\n\nThis is a small dataset to start out with, so removing ~half its samples doesn't change performance very much\nThere's not much information contained in the features with missing data\nThere are other effects underlying poor performance of the model (e.g. regularization parameters) that are having a greater impact\n\nPreprocessing categorical variables\nLoad the TA evaluation dataset. As before, the data and header are split into two files, so you'll have to combine them yourself.", "data_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/tae/tae.data'\n\nnames = ['native_speaker', 'instructor', 'course', 'season', 'class_size', 'rating']\n\ndf = pandas.read_csv(data_url, header=None, index_col=False, names=names)\nprint(df)", "Which of the features are categorical? Are they ordinal, or nominal? 
Which features are numeric?\nAnswer: According to the documentation:\n\nNative speaker: categorical (nominal)\nInstructor: categorical (nominal)\nCourse: categorical (nominal)\nSeason: categorical (nominal)\nClass size: numeric\nRating: categorical (ordinal)\n\nEncode the categorical variables in a naive fashion, by leaving them in place as numerics. Run a classification and measure performance against a test set.", "X = df.iloc[:, :-1].values\ny = df.iloc[:, -1].values\n\nfit_and_score_rlr(X, y, normalize=True)", "Now, encode the categorical variables with a one-hot encoder. Again, run a classification and measure performance.", "from sklearn.preprocessing import OneHotEncoder\n\nenc = OneHotEncoder(categorical_features=range(5))\nX_encoded = enc.fit_transform(X)\n\nfit_and_score_rlr(X_encoded, y, normalize=False)", "Comment on your results.\nFeature scaling\nRaschka mentions that decision trees and random forests do not require standardized features prior to classification, while the rest of the classifiers we've seen so far do. Why might that be? Explain the intuition behind this idea based on the differences between tree-based classifiers and the other classifiers we've seen.\nNow, we'll test the two scaling algorithms on the wine dataset. Start by loading the wine dataset.\nScale the features via \"standardization\" (as Raschka describes it). Classify and measure performance.\nScale the features via \"normalization\" (as Raschka describes it). Again, classify and measure performance.\nComment on your results.\nFeature selection\n\nImplement SBS below. Then, run the tests.", "class SBS(object):\n \"\"\"\n Class to select the k-best features in a dataset via sequential backwards selection.\n \"\"\"\n def __init__(self):\n \"\"\"\n Initialize the SBS model.\n \"\"\"\n pass\n \n def fit(self):\n \"\"\"\n Fit SBS to a dataset.\n \"\"\"\n pass\n \n def transform(self):\n \"\"\"\n Transform a dataset based on the model.\n \"\"\"\n pass\n \n def fit_transform(self):\n \"\"\"\n Fit SBS to a dataset and transform it, returning the k-best features.\n \"\"\"\n pass", "Now, we'll practice feature selection. Start by loading the breast cancer dataset.\nUse a random forest to determine the feature importances. Plot the features and their importances.\nUse L1 regularization with a standard C value (0.1) to eliminate low-information features. Again, plot the feature importances using the coef_ attribute of the model.\nHow do the feature importances from the random forest/L1 regularization compare?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Alexoner/mooc
cs231n/2016/assignment2/Dropout.ipynb
apache-2.0
[ "Dropout\nDropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.\n[1] Geoffrey E. Hinton et al, \"Improving neural networks by preventing co-adaptation of feature detectors\", arXiv 2012", "# As usual, a bit of setup\n\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.iteritems():\n print '%s: ' % k, v.shape", "Dropout forward pass\nIn the file cs231n/layers.py, implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.\nOnce you have done so, run the cell below to test your implementation.", "x = np.random.randn(500, 500) + 10\n\nfor p in [0.3, 0.6, 0.75]:\n out, _ = dropout_forward(x, {'mode': 'train', 'p': p})\n out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})\n\n print 'Running tests with p = ', p\n print 'Mean of input: ', x.mean()\n print 'Mean of train-time output: ', out.mean()\n print 'Mean of test-time output: ', out_test.mean()\n print 'Fraction of train-time output set to zero: ', (out == 0).mean()\n print 'Fraction of test-time output set to zero: ', (out_test == 0).mean()\n print", "Dropout backward pass\nIn the file cs231n/layers.py, implement the backward pass for dropout. After doing so, run the following cell to numerically gradient-check your implementation.", "x = np.random.randn(10, 10) + 10\ndout = np.random.randn(*x.shape)\n\ndropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}\nout, cache = dropout_forward(x, dropout_param)\ndx = dropout_backward(dout, cache)\ndx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)\n\nprint 'dx relative error: ', rel_error(dx, dx_num)", "Fully-connected nets with Dropout\nIn the file cs231n/classifiers/fc_net.py, modify your implementation to use dropout. Specificially, if the constructor the the net receives a nonzero value for the dropout parameter, then the net should add dropout immediately after every ReLU nonlinearity. 
After doing so, run the following to numerically gradient-check your implementation.", "N, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor dropout in [0, 0.25, 0.5]:\n print 'Running check with dropout = ', dropout\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n weight_scale=5e-2, dtype=np.float64,\n dropout=dropout, seed=123)\n\n loss, grads = model.loss(X, y)\n print 'Initial loss: ', loss\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print '%s relative error: %.2e' % (name, rel_error(grad_num, grads[name]))\n print", "Regularization experiment\nAs an experiment, we will train a pair of two-layer networks on 500 training examples: one will use no dropout, and one will use a dropout probability of 0.75. We will then visualize the training and validation accuracies of the two networks over time.", "# Train two identical nets, one with dropout and one without\n\nnum_train = 500\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nsolvers = {}\ndropout_choices = [0, 0.75]\nfor dropout in dropout_choices:\n model = FullyConnectedNet([500], dropout=dropout)\n print dropout\n\n solver = Solver(model, small_data,\n num_epochs=25, batch_size=100,\n update_rule='adam',\n optim_config={\n 'learning_rate': 5e-4,\n },\n verbose=True, print_every=100)\n solver.train()\n solvers[dropout] = solver\n\n# Plot train and validation accuracies of the two models\n\ntrain_accs = []\nval_accs = []\nfor dropout in dropout_choices:\n solver = solvers[dropout]\n train_accs.append(solver.train_acc_history[-1])\n val_accs.append(solver.val_acc_history[-1])\n\nplt.subplot(3, 1, 1)\nfor dropout in dropout_choices:\n plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)\nplt.title('Train accuracy')\nplt.xlabel('Epoch')\nplt.ylabel('Accuracy')\nplt.legend(ncol=2, loc='lower right')\n \nplt.subplot(3, 1, 2)\nfor dropout in dropout_choices:\n plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)\nplt.title('Val accuracy')\nplt.xlabel('Epoch')\nplt.ylabel('Accuracy')\nplt.legend(ncol=2, loc='lower right')\n\nplt.gcf().set_size_inches(15, 15)\nplt.show()", "Question\nExplain what you see in this experiment. What does it suggest about dropout?\nAnswer" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
miguelfrde/stanford-cs231n
assignment1/features.ipynb
mit
[ "Image features exercise\nComplete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.\nWe have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.\nAll of your work for this exercise will be done in this notebook.", "import random\nimport numpy as np\nfrom cs231n.data_utils import load_CIFAR10\nimport matplotlib.pyplot as plt\n\nfrom __future__ import print_function\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading extenrnal modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2", "Load data\nSimilar to previous exercises, we will load CIFAR-10 data from disk.", "from cs231n.features import color_histogram_hsv, hog_feature\n\ndef get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):\n # Load the raw CIFAR-10 data\n cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'\n X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n \n # Subsample the data\n mask = list(range(num_training, num_training + num_validation))\n X_val = X_train[mask]\n y_val = y_train[mask]\n mask = list(range(num_training))\n X_train = X_train[mask]\n y_train = y_train[mask]\n mask = list(range(num_test))\n X_test = X_test[mask]\n y_test = y_test[mask]\n \n return X_train, y_train, X_val, y_val, X_test, y_test\n\nX_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()", "Extract Features\nFor each image we will compute a Histogram of Oriented\nGradients (HOG) as well as a color histogram using the hue channel in HSV\ncolor space. We form our final feature vector for each image by concatenating\nthe HOG and color histogram feature vectors.\nRoughly speaking, HOG should capture the texture of the image while ignoring\ncolor information, and the color histogram represents the color of the input\nimage while ignoring texture. As a result, we expect that using both together\nought to work better than using either alone. Verifying this assumption would\nbe a good thing to try for the bonus section.\nThe hog_feature and color_histogram_hsv functions both operate on a single\nimage and return a feature vector for that image. 
The extract_features\nfunction takes a set of images and a list of feature functions and evaluates\neach feature function on each image, storing the results in a matrix where\neach column is the concatenation of all feature vectors for a single image.", "from cs231n.features import *\n\nnum_color_bins = 10 # Number of bins in the color histogram\nfeature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]\nX_train_feats = extract_features(X_train, feature_fns, verbose=True)\nX_val_feats = extract_features(X_val, feature_fns)\nX_test_feats = extract_features(X_test, feature_fns)\n\n# Preprocessing: Subtract the mean feature\nmean_feat = np.mean(X_train_feats, axis=0, keepdims=True)\nX_train_feats -= mean_feat\nX_val_feats -= mean_feat\nX_test_feats -= mean_feat\n\n# Preprocessing: Divide by standard deviation. This ensures that each feature\n# has roughly the same scale.\nstd_feat = np.std(X_train_feats, axis=0, keepdims=True)\nX_train_feats /= std_feat\nX_val_feats /= std_feat\nX_test_feats /= std_feat\n\n# Preprocessing: Add a bias dimension\nX_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])\nX_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])\nX_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])", "Train SVM on features\nUsing the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.", "# Use the validation set to tune the learning rate and regularization strength\n\nfrom cs231n.classifiers.linear_classifier import LinearSVM\n\nlearning_rates = [1e-9, 1e-8, 1e-7]\nregularization_strengths = [5e4, 5e5, 5e6]\n#learning_rates = list(map(lambda x: x*1e-9, np.arange(0.9, 2, 0.1)))\n#regularization_strengths = list(map(lambda x: x*1e4, np.arange(1, 10)))\n\nresults = {}\nbest_val = -1\nbest_svm = None\niters = 2000\n\n################################################################################\n# TODO: #\n# Use the validation set to set the learning rate and regularization strength. #\n# This should be identical to the validation that you did for the SVM; save #\n# the best trained classifer in best_svm. You might also want to play #\n# with different numbers of bins in the color histogram. If you are careful #\n# you should be able to get accuracy of near 0.44 on the validation set. 
#\n################################################################################\nfor lr in learning_rates:\n for reg in regularization_strengths:\n print('Training with lr={0}, reg={1}'.format(lr, reg))\n svm = LinearSVM()\n loss_hist = svm.train(X_train_feats, y_train, learning_rate=lr, reg=reg, num_iters=iters)\n y_train_pred = svm.predict(X_train_feats)\n y_val_pred = svm.predict(X_val_feats)\n train_accuracy = np.mean(y_train == y_train_pred)\n validation_accuracy = np.mean(y_val == y_val_pred)\n if validation_accuracy > best_val:\n best_val = validation_accuracy\n best_svm = svm\n results[(lr, reg)] = (validation_accuracy, train_accuracy)\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Print out results.\nfor lr, reg in sorted(results):\n train_accuracy, val_accuracy = results[(lr, reg)]\n print('lr %e reg %e train accuracy: %f val accuracy: %f' % (\n lr, reg, train_accuracy, val_accuracy))\n\nprint('best validation accuracy achieved during cross-validation: %f' % best_val)\n\n# Evaluate your trained SVM on the test set\ny_test_pred = best_svm.predict(X_test_feats)\ntest_accuracy = np.mean(y_test == y_test_pred)\nprint(test_accuracy)\n\n# An important way to gain intuition about how an algorithm works is to\n# visualize the mistakes that it makes. In this visualization, we show examples\n# of images that are misclassified by our current system. The first column\n# shows images that our system labeled as \"plane\" but whose true label is\n# something other than \"plane\".\n\nexamples_per_class = 8\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nfor cls, cls_name in enumerate(classes):\n idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]\n idxs = np.random.choice(idxs, examples_per_class, replace=False)\n for i, idx in enumerate(idxs):\n plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)\n plt.imshow(X_test[idx].astype('uint8'))\n plt.axis('off')\n if i == 0:\n plt.title(cls_name)\nplt.show()", "Inline question 1:\nDescribe the misclassification results that you see. Do they make sense?\nIt makes sense given that we are using color histogram features, so for some results the background seems to affect. For example, blue background/flat background for a plane, trucks as cars (street + background) and the other way, etc.\nNeural Network on image features\nEarlier in this assigment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. \nFor completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.", "print(X_train_feats.shape)\n\nfrom cs231n.classifiers.neural_net import TwoLayerNet\n\ninput_dim = X_train_feats.shape[1]\nhidden_dim = 500\nnum_classes = 10\n\nbest_net = None\n\n################################################################################\n# TODO: Train a two-layer neural network on image features. You may want to #\n# cross-validate various parameters as in previous sections. Store your best #\n# model in the best_net variable. 
#\n################################################################################\nlearning_rates = np.arange(0.1, 1.6, 0.1)\nregularization_params = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1]\nresults = {}\nbest_val_accuracy = 0\n\nfor lr in learning_rates:\n for reg in regularization_params:\n net = TwoLayerNet(input_dim, hidden_dim, num_classes)\n stats = net.train(X_train_feats, y_train, X_val_feats, y_val, num_iters=2000, batch_size=200,\n learning_rate=lr, learning_rate_decay=0.95, reg=reg)\n val_accuracy = (net.predict(X_val_feats) == y_val).mean()\n if val_accuracy > best_val_accuracy:\n best_val_accuracy = val_accuracy\n best_net = net \n print('LR: {0} REG: {1} ACC: {2}'.format(lr, reg, val_accuracy))\n \nprint('best validation accuracy achieved during cross-validation: {0}'.format(best_val_accuracy))\nnet = best_net\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Run your neural net classifier on the test set. You should be able to\n# get more than 55% accuracy.\n\ntest_acc = (best_net.predict(X_test_feats) == y_test).mean()\nprint(test_acc)", "Bonus: Design your own features!\nYou have seen that simple image features can improve classification performance. So far we have tried HOG and color histograms, but other types of features may be able to achieve even better classification performance.\nFor bonus points, design and implement a new type of feature and use it for image classification on CIFAR-10. Explain how your feature works and why you expect it to be useful for image classification. Implement it in this notebook, cross-validate any hyperparameters, and compare its performance to the HOG + Color histogram baseline.\nBonus: Do something extra!\nUse the material and code we have presented in this assignment to do something interesting. Was there another question we should have asked? Did any cool ideas pop into your head as you were working on the assignment? This is your chance to show off!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
spm2164/foundations-homework
12/homework 12 - 311 time series homework.ipynb
artistic-2.0
[ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.style.use('ggplot')\nimport dateutil.parser", "First, I made a mistake naming the data set! It's 2015 data, not 2014 data. But yes, still use 311-2014.csv. You can rename it.\n\nImporting and preparing your data\nImport your data, but only the first 200,000 rows. You'll also want to change the index to be a datetime based on the Created Date column - you'll want to check if it's already a datetime, and parse it if not.", "df=pd.read_csv(\"311-2014.csv\", nrows=200000)\n\ndateutil.parser.parse(df['Created Date'][0])\n\ndef parse_date(str_date):\n return dateutil.parser.parse(str_date)\n\ndf['created_datetime']=df['Created Date'].apply(parse_date)\n\ndf.index=df['created_datetime']", "What was the most popular type of complaint, and how many times was it filed?", "df['Complaint Type'].describe()", "Make a horizontal bar graph of the top 5 most frequent complaint types.", "df.groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5).plot(kind='barh').invert_yaxis()", "Which borough has the most complaints per capita? Since it's only 5 boroughs, you can do the math manually.", "df.groupby(by='Borough')['Borough'].count()\n\nboro_pop={\n 'BRONX': 1438159,\n 'BROOKLYN': 2621793,\n 'MANHATTAN': 1636268,\n 'QUEENS': 2321580,\n 'STATEN ISLAND': 473279}\n\n\nboro_df=pd.Series.to_frame(df.groupby(by='Borough')['Borough'].count())\nboro_df['Population']=pd.DataFrame.from_dict(boro_pop, orient='index')\nboro_df['Complaints']=boro_df['Borough']\nboro_df.drop('Borough', axis=1, inplace=True)\nboro_df['Per Capita']=boro_df['Complaints']/boro_df['Population']\nboro_df['Per Capita'].plot(kind='bar')", "According to your selection of data, how many cases were filed in March? How about May?", "df['2015-03']['Created Date'].count()\n\ndf['2015-05']['Created Date'].count()", "I'd like to see all of the 311 complaints called in on April 1st.\n\nSurprise! We couldn't do this in class, but it was just a limitation of our data set", "df['2015-04-01']", "What was the most popular type of complaint on April 1st?", "df['2015-04-01'].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(1)", "What were the most popular three types of complaint on April 1st", "df['2015-04-01'].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(3)", "What month has the most reports filed? How many? Graph it.", "df.resample('M')['Unique Key'].count().sort_values(ascending=False)\n\ndf.resample('M').count().plot(y='Unique Key')", "What week of the year has the most reports filed? How many? Graph the weekly complaints.", "df.resample('W')['Unique Key'].count().sort_values(ascending=False).head(5)\n\ndf.resample('W').count().plot(y='Unique Key')", "Noise complaints are a big deal. Use .str.contains to select noise complaints, and make an chart of when they show up annually. Then make a chart about when they show up every day (cyclic).", "noise_df=df[df['Complaint Type'].str.contains('Noise')]\nnoise_df.resample('M').count().plot(y='Unique Key')\n\nnoise_df.groupby(by=noise_df.index.hour).count().plot(y='Unique Key')", "Which were the top five days of the year for filing complaints? How many on each of those days? Graph it.", "df.resample('D')['Unique Key'].count().sort_values(ascending=False).head(5)\n\ndf.resample('D')['Unique Key'].count().sort_values().tail(5).plot(kind='barh')", "What hour of the day are the most complaints? 
Graph a day of complaints.", "df['Unique Key'].groupby(by=df.index.hour).count().sort_values(ascending=False)\n\ndf['Unique Key'].groupby(df.index.hour).count().plot()", "One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after?", "df[df.index.hour==0].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)\n\ndf[df.index.hour==1].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)\n\ndf[df.index.hour==11].groupby(by='Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)", "So odd. What's the per-minute breakdown of complaints between 12am and 1am? You don't need to include 1am.", "midnight_df = df[df.index.hour==0]\n\nmidnight_df.groupby(midnight_df.index.minute)['Unique Key'].count().sort_values(ascending=False)", "Looks like midnight is a little bit of an outlier. Why might that be? Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight).", "df.groupby('Agency')['Unique Key'].count().sort_values(ascending=False).head(5)\n\nax=df[df['Agency']=='NYPD'].groupby(df[df['Agency']=='NYPD'].index.hour)['Unique Key'].count().plot(legend=True, label='NYPD')\ndf[df['Agency']=='HPD'].groupby(df[df['Agency']=='HPD'].index.hour)['Unique Key'].count().plot(ax=ax, legend=True, label='HPD')\ndf[df['Agency']=='DOT'].groupby(df[df['Agency']=='DOT'].index.hour)['Unique Key'].count().plot(ax=ax, legend=True, label='DOT')\ndf[df['Agency']=='DPR'].groupby(df[df['Agency']=='DPR'].index.hour)['Unique Key'].count().plot(ax=ax, legend=True, label='DPR')\ndf[df['Agency']=='DOHMH'].groupby(df[df['Agency']=='DOHMH'].index.hour)['Unique Key'].count().plot(ax=ax, legend=True, label='DOHMH')", "Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints?", "ax=df[df['Agency']=='NYPD'].groupby(df[df['Agency']=='NYPD'].index.week)['Unique Key'].count().plot(legend=True, label='NYPD')\ndf[df['Agency']=='HPD'].groupby(df[df['Agency']=='HPD'].index.week)['Unique Key'].count().plot(ax=ax, legend=True, label='HPD')\ndf[df['Agency']=='DOT'].groupby(df[df['Agency']=='DOT'].index.week)['Unique Key'].count().plot(ax=ax, legend=True, label='DOT')\ndf[df['Agency']=='DPR'].groupby(df[df['Agency']=='DPR'].index.week)['Unique Key'].count().plot(ax=ax, legend=True, label='DPR')\ndf[df['Agency']=='DOHMH'].groupby(df[df['Agency']=='DOHMH'].index.week)['Unique Key'].count().plot(ax=ax, legend=True, label='DOHMH')", "Maybe the NYPD deals with different issues at different times? Check the most popular complaints in July and August vs the month of May. Also check the most common complaints for the Housing Preservation Bureau (HPD) in winter vs. 
summer.", "nypd=df[df['Agency']=='NYPD']\n\nnypd[(nypd.index.month==7) | (nypd.index.month==8)].groupby('Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)\n\nnypd[nypd.index.month==5].groupby('Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)\n\n# seems like mostly noise complaints and bad parking to me\n\nhpd=df[df['Agency']=='HPD']\n\nhpd[(hpd.index.month>=6) & (hpd.index.month<=8)].groupby('Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)\n# i would consider summer to be june to august.\n\nhpd[(hpd.index.month==12) | (hpd.index.month<=2)].groupby('Complaint Type')['Complaint Type'].count().sort_values(ascending=False).head(5)\n\n# pretty similar list, but people probably notice a draft from their bad window or door in the winter more easily than summer" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/asl-ml-immersion
notebooks/docker_and_kubernetes/solutions/3_k8s_hello_node.ipynb
apache-2.0
[ "Hello Node Kubernetes\nLearning Objectives\n * Create a Node.js server\n * Create a Docker container image\n * Create a container cluster and a Kubernetes pod\n * Scale up your services\nOverview\nThe goal of this hands-on lab is for you to turn code that you have developed into a replicated application running on Kubernetes, which is running on Kubernetes Engine. For this lab the code will be a simple Hello World node.js app.\nHere's a diagram of the various parts in play in this lab, to help you understand how the pieces fit together with one another. Use this as a reference as you progress through the lab; it should all make sense by the time you get to the end (but feel free to ignore this for now).\n<img src='../assets/k8s_hellonode_overview.png' width='50%'>\nCreate a Node.js server\nThe file ./src/server.js contains a simple Node.js server. Use cat to examine the contents of that file.", "!cat ./src/server.js", "Start the server by running node server.js in the cell below. Open a terminal and type\nbash\ncurl http://localhost:8000\nto see what the server outputs.", "!node ./src/server.js", "You should see the output \"Hello World!\". Once you've verfied this, interupt the above running cell by hitting the stop button.\nCreate and build a Docker image\nNow we will create a docker image called hello_node.docker that will do the following:\n\nStart from the node image found on the Docker hub by inhereting from node:6.9.2\nExpose port 8000\nCopy the ./src/server.js file to the image\nStart the node server as we previously did manually\n\nSave your Dockerfile in the folder labeled dockerfiles. Your finished Dockerfile should look something like this\n```bash\nFROM node:6.9.2\nEXPOSE 8000\nCOPY ./src/server.js .\nCMD node server.js\n```\nNext, build the image in your project using docker build.", "import os\n\nPROJECT_ID = \"your-gcp-project-here\" # REPLACE WITH YOUR PROJECT NAME\nos.environ[\"PROJECT_ID\"] = PROJECT_ID\n\n%%bash\ndocker build -f dockerfiles/hello_node.docker -t gcr.io/${PROJECT_ID}/hello-node:v1 .", "It'll take some time to download and extract everything, but you can see the progress bars as the image builds. Once complete, test the image locally by running a Docker container as a daemon on port 8000 from your newly-created container image.\nRun the container using docker run", "%%bash\ndocker run -d -p 8000:8000 gcr.io/${PROJECT_ID}/hello-node:v1", "Your output should look something like this:\nbash\nb16e5ccb74dc39b0b43a5b20df1c22ff8b41f64a43aef15e12cc9ac3b3f47cfd\nRight now, since you used the --d flag, the container process is running in the background. You can verify it's running using curl as before.", "!curl http://localhost:8000", "Now, stop running the container. Get the container id using docker ps and then terminate using docker stop", "!docker ps\n\n# your container id will be different\n!docker stop b16e5ccb74dc", "Now that the image is working as intended, push it to the Google Container Registry, a private repository for your Docker images, accessible from your Google Cloud projects. First, configure docker uisng your local config file. The initial push may take a few minutes to complete. You'll see the progress bars as it builds.", "!gcloud auth configure-docker\n\n%%bash\ndocker push gcr.io/${PROJECT_ID}/hello-node:v1", "The container image will be listed in your Console. Select Navigation > Container Registry.\nCreate a cluster on GKE\nNow you're ready to create your Kubernetes Engine cluster. 
A cluster consists of a Kubernetes master API server hosted by Google and a set of worker nodes. The worker nodes are Compute Engine virtual machines.\nCreate a cluster with two n1-standard-1 nodes (this will take a few minutes to complete). You can safely ignore warnings that come up when the cluster builds.\nNote: You can also create this cluster through the Console by opening the Navigation menu and selecting Kubernetes Engine > Kubernetes clusters > Create cluster.", "!gcloud container clusters create hello-world \\\n --num-nodes 2 \\\n --machine-type n1-standard-1 \\\n --zone us-central1-a", "Create a pod\nA Kubernetes pod is a group of containers tied together for administration and networking purposes. It can contain single or multiple containers. Here you'll use one container built with your Node.js image stored in your private container registry. It will serve content on port 8000.\nCreate a deployment, which in turn manages the pod, with the kubectl create deployment command.", "%%bash\nkubectl create deployment hello-node \\\n --image=gcr.io/${PROJECT_ID}/hello-node:v1", "As you can see, you've created a deployment object. Deployments are the recommended way to create and scale pods. Here, a new deployment manages a single pod replica running the hello-node:v1 image.\nView the deployment using kubectl get", "!kubectl get deployments", "Similarly, view the pods created by the deployment by also using kubectl get", "!kubectl get pods", "Here are some other good kubectl commands you should know. They won't change the state of the cluster. Others can be found here.\n * kubectl cluster-info\n * kubectl config view\n * kubectl get events\n * kubectl logs &lt;pod-name&gt;\nAllow external traffic\nBy default, the pod is only accessible by its internal IP within the cluster. In order to make the hello-node container accessible from outside the Kubernetes virtual network, you have to expose the pod as a Kubernetes service.\nYou can expose the pod to the public internet with the kubectl expose command. The --type=\"LoadBalancer\" flag is required for the creation of an externally accessible IP. This flag specifies that we are using the load-balancer provided by the underlying infrastructure (in this case the Compute Engine load balancer). Note that you expose the deployment, and not the pod, directly. This will cause the resulting service to load balance traffic across all pods managed by the deployment (in this case only 1 pod, but you will add more replicas later).", "!kubectl expose deployment hello-node --type=\"LoadBalancer\" --port=8000", "The Kubernetes master creates the load balancer and related Compute Engine forwarding rules, target pools, and firewall rules to make the service fully accessible from outside of Google Cloud.\nTo find the publicly-accessible IP address of the service, request kubectl to list all the cluster services.", "!kubectl get services", "There are 2 IP addresses listed for your hello-node service, both serving port 8000. The CLUSTER-IP is the internal IP that is only visible inside your cloud virtual network; the EXTERNAL-IP is the external load-balanced IP.\nYou should now be able to reach the service by calling curl http://&lt;EXTERNAL_IP&gt;:8000.", "!curl http://34.122.72.240:8000", "At this point you've gained several features from moving to containers and Kubernetes - you do not need to specify on which host to run your workload and you also benefit from service monitoring and restart. 
Now see what else can be gained from your new Kubernetes infrastructure.\nScale up your service\nOne of the powerful features offered by Kubernetes is how easy it is to scale your application. Suppose you suddenly need more capacity. You can tell the replication controller to manage a new number of replicas for your pod: \nbash\nkubectl scale deployment hello-node --replicas=4\nScale your hello-node application to have 6 replicas. Then use kubectl get to request a description of the updated deployment and list all the pods:", "!kubectl scale deployment hello-node --replicas=6\n\n!kubectl get deployment\n\n!kubectl get pods", "A declarative approach is being used here. Rather than starting or stopping new instances, you declare how many instances should be running at all times. Kubernetes reconciliation loops makes sure that reality matches what you requested and takes action if needed.\nHere's a diagram summarizing the state of your Kubernetes cluster:\n<img src='../assets/k8s_cluster.png' width='60%'>\nRoll out an upgrade to your service\nAt some point the application that you've deployed to production will require bug fixes or additional features. Kubernetes helps you deploy a new version to production without impacting your users.\nFirst, modify the application by opening server.js so that the response is \nbash\nresponse.end(\"Hello Kubernetes World!\");\nNow you can build and publish a new container image to the registry with an incremented tag (v2 in this case).\nNote: Building and pushing this updated image should be quicker since caching is being taken advantage of.", "%%bash\ndocker build -f dockerfiles/hello_node.docker -t gcr.io/${PROJECT_ID}/hello-node:v2 .\ndocker push gcr.io/${PROJECT_ID}/hello-node:v2", "Kubernetes will smoothly update your replication controller to the new version of the application. In order to change the image label for your running container, you will edit the existing hello-node deployment and change the image from gcr.io/PROJECT_ID/hello-node:v1 to gcr.io/PROJECT_ID/hello-node:v2.\nTo do this, use the kubectl edit command. It opens a text editor displaying the full deployment yaml configuration. It isn't necessary to understand the full yaml config right now, just understand that by updating the spec.template.spec.containers.image field in the config you are telling the deployment to update the pods with the new image.\nOpen a terminal and run the following command:\nbash \nkubectl edit deployment hello-node\nLook for Spec > containers > image and change the version number to v2. This is the output you should see:\nbash\ndeployment.extensions/hello-node edited\nNew pods will be created with the new image and the old pods will be deleted. Run kubectl get deployments to confirm. \nNote: You may need to rerun the above command as it provisions machines.", "!kubectl get deployments", "While this is happening, the users of your services shouldn't see any interruption. After a little while they'll start accessing the new version of your application. You can find more details on rolling updates in this documentation.\nHopefully with these deployment, scaling, and updated features, once you've set up your Kubernetes Engine cluster, you'll agree that Kubernetes will help you focus on the application rather than the infrastructure.\nCleanup\nDelete the cluster using gcloud to free up those resources. Use the --quiet flag if you are executing this in a notebook. 
Deleting the cluster can take a few minutes.", "!gcloud container clusters --quiet delete hello-world", "Copyright 2020 Google LLC Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
deepmind/open_spiel
open_spiel/colabs/OpenSpielTutorial.ipynb
apache-2.0
[ "#@title ##### License { display-mode: \"form\" }\n# Copyright 2019 DeepMind Technologies Ltd. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "OpenSpiel\n\nThis Colab gets you started the basics of OpenSpiel.\nOpenSpiel is a framework for reinforcement learning in games. The code is hosted on github.\nThere is an accompanying video tutorial that works through this colab. It will be linked here once it is live.\nThere is also an OpenSpiel paper with more detail.\n\nInstall\nThe following command will install OpenSpiel via pip.\nOnly the required dependencies are installed. You may need other dependencies if you use some of the algorithms. There is a the complete list of packages and versions we install for the CI tests, which can be installed as necessary.", "!pip install --upgrade open_spiel", "Part 1. OpenSpiel API Basics.", "# Importing pyspiel and showing the list of supported games.\nimport pyspiel\nprint(pyspiel.registered_names())\n\n# Loading a game (with no/default parameters).\ngame = pyspiel.load_game(\"tic_tac_toe\")\nprint(game)\n\n# Some properties of the games.\nprint(game.num_players())\nprint(game.max_utility())\nprint(game.min_utility())\nprint(game.num_distinct_actions())\n\n# Creating initial states.\nstate = game.new_initial_state()\nprint(state)\n\n# Basic information about states.\nprint(state.current_player())\nprint(state.is_terminal())\nprint(state.returns())\nprint(state.legal_actions())\n\n# Playing the game: applying actions.\nstate = game.new_initial_state()\nstate.apply_action(1)\nprint(state)\nprint(state.current_player())\nstate.apply_action(2)\nstate.apply_action(4)\nstate.apply_action(0)\nstate.apply_action(7)\nprint(state)\nprint(state.is_terminal())\nprint(state.player_return(0)) # win for x (player 0)\nprint(state.current_player())\n\n# Different game: Breakthrough with default parameters (number of rows and columns are both 8)\ngame = pyspiel.load_game(\"breakthrough\")\nstate = game.new_initial_state()\nprint(state)\n\n# Parameterized games: loading a 6x6 Breakthrough.\ngame = pyspiel.load_game(\"breakthrough(rows=6,columns=6)\")\nstate = game.new_initial_state()\nprint(state)\nprint(state.legal_actions())\nprint(game.num_distinct_actions())\nfor action in state.legal_actions():\n print(f\"{action} {state.action_to_string(action)}\")", "Part 2. 
Normal-form Games and Evolutionary Dynamics in OpenSpiel.", "import pyspiel\ngame = pyspiel.create_matrix_game([[1, -1], [-1, 1]], [[-1, 1], [1, -1]])\nprint(game) # name not provided: uses a default\nstate = game.new_initial_state()\nprint(state) # action names also not provided; defaults used\n\n# Normal-form games are 1-step simultaneous-move games.\nprint(state.current_player()) # special player id \nprint(state.legal_actions(0)) # query legal actions for each player\nprint(state.legal_actions(1))\nprint(state.is_terminal())\n\n\n# Applying a joint action (one action per player)\nstate.apply_actions([0, 0])\nprint(state.is_terminal())\nprint(state.returns())\n\n# Evolutionary dynamics in Rock, Paper, Scissors\nfrom open_spiel.python.egt import dynamics\nfrom open_spiel.python.egt.utils import game_payoffs_array\nimport numpy as np\n\ngame = pyspiel.load_matrix_game(\"matrix_rps\") # load the Rock, Paper, Scissors matrix game\npayoff_matrix = game_payoffs_array(game) # convert any normal-form game to a numpy payoff matrix\n\ndyn = dynamics.SinglePopulationDynamics(payoff_matrix, dynamics.replicator)\nx = np.array([0.2, 0.2, 0.6]) # population heavily-weighted toward scissors\ndyn(x)\n\n# Choose a step size and apply the dynamic\nalpha = 0.01\nx += alpha * dyn(x)\nprint(x)\nx += alpha * dyn(x)\nprint(x)\nx += alpha * dyn(x)\nx += alpha * dyn(x)\nx += alpha * dyn(x)\nx += alpha * dyn(x)\nprint(x)", "Part 3. Chance Nodes and Partially-Observable Games.", "# Kuhn poker: simplified poker with a 3-card deck (https://en.wikipedia.org/wiki/Kuhn_poker)\nimport pyspiel\ngame = pyspiel.load_game(\"kuhn_poker\")\nprint(game.num_distinct_actions()) # bet and fold\n\n\n# Chance nodes.\nstate = game.new_initial_state()\nprint(state.current_player()) # special chance player id\nprint(state.is_chance_node())\nprint(state.chance_outcomes()) # distribution over outcomes as a list of (outcome, probability) pairs\n\n# Applying chance node outcomes: same function as applying actions.\nstate.apply_action(0) # let's choose the first card (jack)\nprint(state.is_chance_node()) # still at a chance node (player 2's card).\nprint(state.chance_outcomes()) # jack no longer a possible outcome\nstate.apply_action(1) # second player gets the queen\nprint(state.current_player()) # no longer chance node, time to play!\n\n# States vs. 
information states\nprint(state) # ground/world state (all information open)\nprint(state.legal_actions())\nfor action in state.legal_actions():\n print(state.action_to_string(action))\nprint(state.information_state_string()) # only current player's information!\n\n# Take an action (pass / check), second player's turn.\n# Information state tensor is vector of floats (often bits) representing the information state.\nstate.apply_action(0)\nprint(state.current_player())\nprint(state.information_state_string()) # now contains second player's card and the public action sequence\nprint(state.information_state_tensor())\n\n# Leduc poker is a larger game (6 cards, two suits), 3 actions: fold, check/call, raise.\ngame = pyspiel.load_game(\"leduc_poker\")\nprint(game.num_distinct_actions())\nstate = game.new_initial_state()\nprint(state)\nstate.apply_action(0) # first player gets first jack \nstate.apply_action(1) # second player gets second jack\nprint(state.current_player())\nprint(state.information_state_string())\nprint(state.information_state_tensor())\n\n\n# Let's check until the second round.\nprint(state.legal_actions_mask()) # Helper function for neural networks.\nstate.apply_action(1) # check\nstate.apply_action(1) # check\nprint(state)\nprint(state.chance_outcomes()) # public card (4 left in the deck)\nstate.apply_action(2)\nprint(state.information_state_string()) # player 0's turn again.", "Part 4. Basic RL: Self-play Q-Learning in Tic-Tac-Toe.", "# Let's do independent Q-learning in Tic-Tac-Toe, and play it against random.\n# RL is based on python/examples/independent_tabular_qlearning.py\nfrom open_spiel.python import rl_environment\nfrom open_spiel.python import rl_tools\nfrom open_spiel.python.algorithms import tabular_qlearner\n\n# Create the environment\nenv = rl_environment.Environment(\"tic_tac_toe\")\nnum_players = env.num_players\nnum_actions = env.action_spec()[\"num_actions\"]\n\n# Create the agents\nagents = [\n tabular_qlearner.QLearner(player_id=idx, num_actions=num_actions)\n for idx in range(num_players)\n]\n\n# Train the Q-learning agents in self-play.\nfor cur_episode in range(25000):\n if cur_episode % 1000 == 0:\n print(f\"Episodes: {cur_episode}\")\n time_step = env.reset()\n while not time_step.last():\n player_id = time_step.observations[\"current_player\"]\n agent_output = agents[player_id].step(time_step)\n time_step = env.step([agent_output.action])\n # Episode is over, step all agents with final info state.\n for agent in agents:\n agent.step(time_step)\nprint(\"Done!\")\n\n# Evaluate the Q-learning agent against a random agent.\nfrom open_spiel.python.algorithms import random_agent\neval_agents = [agents[0], random_agent.RandomAgent(1, num_actions, \"Entropy Master 2000\") ]\n\ntime_step = env.reset()\nwhile not time_step.last():\n print(\"\")\n print(env.get_state)\n player_id = time_step.observations[\"current_player\"]\n # Note the evaluation flag. A Q-learner will set epsilon=0 here.\n agent_output = eval_agents[player_id].step(time_step, is_evaluation=True)\n print(f\"Agent {player_id} chooses {env.get_state.action_to_string(agent_output.action)}\")\n time_step = env.step([agent_output.action])\n\nprint(\"\")\nprint(env.get_state)\nprint(time_step.rewards)\n" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
geektoni/shogun
doc/ipython-notebooks/classification/Classification.ipynb
bsd-3-clause
[ "Visual Comparison Between Different Classification Methods in Shogun\nNotebook by Youssef Emad El-Din (Github ID: <a href=\"https://github.com/youssef-emad/\">youssef-emad</a>)\nThis notebook demonstrates different classification methods in Shogun. The point is to compare and visualize the decision boundaries of different classifiers on two different datasets, where one is linear separable, and one is not.\n\n<a href =\"#section1\">Data Generation and Visualization</a>\n<a href =\"#section2\">Support Vector Machine</a>\n<a href =\"#section2a\">Linear SVM</a>\n<a href =\"#section2b\">Gaussian Kernel</a>\n<a href =\"#section2c\">Sigmoid Kernel</a>\n<a href =\"#section2d\">Polynomial Kernel</a>\n<a href =\"#section3\">Naive Bayes</a>\n<a href =\"#section4\">Nearest Neighbors</a>\n<a href =\"#section5\">Linear Discriminant Analysis</a>\n<a href =\"#section6\">Quadratic Discriminat Analysis</a>\n<a href =\"#section7\">Gaussian Process</a>\n<a href =\"#section7a\">Logit Likelihood model</a>\n<a href =\"#section7b\">Probit Likelihood model</a>\n<a href =\"#section8\">Putting It All Together</a>", "import numpy as np\nimport matplotlib.pyplot as plt\nimport os\nimport shogun as sg\n%matplotlib inline\n\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')\n\n#Needed lists for the final plot\nclassifiers_linear = []*10\nclassifiers_non_linear = []*10\nclassifiers_names = []*10\nfadings = []*10", "<a id = \"section1\">Data Generation and Visualization</a>\nTransformation of features to Shogun format using <a href=\"http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1DenseFeatures.html\">RealFeatures</a> and <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1BinaryLabels.html\">BinaryLables</a> classes.", "shogun_feats_linear = sg.create_features(sg.read_csv(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_features_train.dat')))\nshogun_labels_linear = sg.create_labels(sg.read_csv(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_labels_train.dat')))\n\nshogun_feats_non_linear = sg.create_features(sg.read_csv(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_nonlinear_features_train.dat')))\nshogun_labels_non_linear = sg.create_labels(sg.read_csv(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_nonlinear_labels_train.dat')))\n\nfeats_linear = shogun_feats_linear.get('feature_matrix')\nlabels_linear = shogun_labels_linear.get('labels')\n\nfeats_non_linear = shogun_feats_non_linear.get('feature_matrix')\nlabels_non_linear = shogun_labels_non_linear.get('labels')", "Data visualization methods.", "def plot_binary_data(plot,X_train, y_train):\n \"\"\"\n This function plots 2D binary data with different colors for different labels.\n \"\"\"\n plot.xlabel(r\"$x$\")\n plot.ylabel(r\"$y$\")\n plot.plot(X_train[0, np.argwhere(y_train == 1)], X_train[1, np.argwhere(y_train == 1)], 'ro')\n plot.plot(X_train[0, np.argwhere(y_train == -1)], X_train[1, np.argwhere(y_train == -1)], 'bo')\n\ndef compute_plot_isolines(classifier,feats,size=200,fading=True):\n \"\"\"\n This function computes the classification of points on the grid\n to get the decision boundaries used in plotting\n \"\"\"\n x1 = np.linspace(1.2*min(feats[0]), 1.2*max(feats[0]), size)\n x2 = np.linspace(1.2*min(feats[1]), 1.2*max(feats[1]), size)\n\n x, y = np.meshgrid(x1, x2)\n\n plot_features = sg.create_features(np.array((np.ravel(x), np.ravel(y))))\n \n if fading == True:\n plot_labels = classifier.apply_binary(plot_features).get('current_values')\n else:\n plot_labels = 
classifier.apply(plot_features).get('labels')\n z = plot_labels.reshape((size, size))\n return x,y,z\n\ndef plot_model(plot,classifier,features,labels,fading=True):\n \"\"\"\n This function plots an input classification model\n \"\"\"\n x,y,z = compute_plot_isolines(classifier,features,fading=fading)\n plot.pcolor(x,y,z,cmap='RdBu_r')\n plot.contour(x, y, z, linewidths=1, colors='black')\n plot_binary_data(plot,features, labels)\n \n\nplt.figure(figsize=(15,5))\nplt.subplot(121)\nplt.title(\"Linear Features\")\nplot_binary_data(plt,feats_linear, labels_linear)\nplt.subplot(122)\nplt.title(\"Non Linear Features\")\nplot_binary_data(plt,feats_non_linear, labels_non_linear)", "<a id=\"section2\" href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1SVM.html\">Support Vector Machine</a>\n<a id=\"section2a\" href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LibLinear.html\">Linear SVM</a>\nShogun provides <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1LibLinear.html\">Liblinear</a>, a library for large-scale linear learning with a focus on SVMs, which is used here for classification", "plt.figure(figsize=(15,5))\nc = 0.5\nepsilon = 1e-3\n\nsvm_linear = sg.create_machine(\"LibLinear\", C1=c, C2=c, \n labels=shogun_labels_linear, \n epsilon=epsilon,\n liblinear_solver_type=\"L2R_L2LOSS_SVC\")\nsvm_linear.train(shogun_feats_linear)\nclassifiers_linear.append(svm_linear)\nclassifiers_names.append(\"SVM Linear\")\nfadings.append(True)\n\nplt.subplot(121)\nplt.title(\"Linear SVM - Linear Features\")\nplot_model(plt,svm_linear,feats_linear,labels_linear)\n\nsvm_non_linear = sg.create_machine(\"LibLinear\", C1=c, C2=c, \n labels=shogun_labels_non_linear,\n epsilon=epsilon,\n liblinear_solver_type=\"L2R_L2LOSS_SVC\")\nsvm_non_linear.train(shogun_feats_non_linear)\nclassifiers_non_linear.append(svm_non_linear)\n\nplt.subplot(122)\nplt.title(\"Linear SVM - Non Linear Features\")\nplot_model(plt,svm_non_linear,feats_non_linear,labels_non_linear)", "SVM - Kernels\nShogun provides many options for using kernel functions. 
Kernels in Shogun are based on two base classes, <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1Kernel.html\">Kernel</a> and <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1KernelMachine.html\">KernelMachine</a>.\n<a id =\"section2b\" href = \"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GaussianKernel.html\">Gaussian Kernel</a>", "gaussian_c = 0.7\n\ngaussian_kernel_linear = sg.create_kernel(\"GaussianKernel\", width=20)\ngaussian_svm_linear = sg.create_machine('LibSVM', C1=gaussian_c, C2=gaussian_c, \n kernel=gaussian_kernel_linear, labels=shogun_labels_linear)\ngaussian_svm_linear.train(shogun_feats_linear)\nclassifiers_linear.append(gaussian_svm_linear)\nfadings.append(True)\n\ngaussian_kernel_non_linear = sg.create_kernel(\"GaussianKernel\", width=10)\ngaussian_svm_non_linear=sg.create_machine('LibSVM', C1=gaussian_c, C2=gaussian_c, \n kernel=gaussian_kernel_non_linear, labels=shogun_labels_non_linear)\ngaussian_svm_non_linear.train(shogun_feats_non_linear)\nclassifiers_non_linear.append(gaussian_svm_non_linear)\nclassifiers_names.append(\"SVM Gaussian Kernel\")\n\nplt.figure(figsize=(15,5))\nplt.subplot(121)\nplt.title(\"SVM Gaussian Kernel - Linear Features\")\nplot_model(plt,gaussian_svm_linear,feats_linear,labels_linear)\n\nplt.subplot(122)\nplt.title(\"SVM Gaussian Kernel - Non Linear Features\")\nplot_model(plt,gaussian_svm_non_linear,feats_non_linear,labels_non_linear)", "<a id =\"section2c\" href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSigmoidKernel.html\">Sigmoid Kernel</a>", "sigmoid_c = 0.9\n\n\nsigmoid_kernel_linear = sg.create_kernel(\"SigmoidKernel\", cache_size=200, gamma=1, coef0=0.5)\nsigmoid_kernel_linear.init(shogun_feats_linear, shogun_feats_linear)\nsigmoid_svm_linear = sg.create_machine('LibSVM', C1=sigmoid_c, C2=sigmoid_c, \n kernel=sigmoid_kernel_linear, labels=shogun_labels_linear)\nsigmoid_svm_linear.train()\nclassifiers_linear.append(sigmoid_svm_linear)\nclassifiers_names.append(\"SVM Sigmoid Kernel\")\nfadings.append(True)\n\nplt.figure(figsize=(15,5))\nplt.subplot(121)\nplt.title(\"SVM Sigmoid Kernel - Linear Features\")\nplot_model(plt,sigmoid_svm_linear,feats_linear,labels_linear)\n\nsigmoid_kernel_non_linear = sg.create_kernel(\"SigmoidKernel\", cache_size=400, gamma=2.5, coef0=2)\nsigmoid_kernel_non_linear.init(shogun_feats_non_linear, shogun_feats_non_linear)\nsigmoid_svm_non_linear = sg.create_machine('LibSVM', C1=sigmoid_c, C2=sigmoid_c, \n kernel=sigmoid_kernel_non_linear, labels=shogun_labels_non_linear)\nsigmoid_svm_non_linear.train()\nclassifiers_non_linear.append(sigmoid_svm_non_linear)\n\nplt.subplot(122)\nplt.title(\"SVM Sigmoid Kernel - Non Linear Features\")\nplot_model(plt,sigmoid_svm_non_linear,feats_non_linear,labels_non_linear)", "<a id =\"section2d\" href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CPolyKernel.html\">Polynomial Kernel</a>", "poly_c = 0.5\ndegree = 4\n\npoly_kernel_linear = sg.create_kernel('PolyKernel', degree=degree, c=1.0)\npoly_kernel_linear.init(shogun_feats_linear, shogun_feats_linear)\npoly_svm_linear = sg.create_machine('LibSVM', C1=poly_c, C2=poly_c, \n kernel=poly_kernel_linear, labels=shogun_labels_linear)\npoly_svm_linear.train()\nclassifiers_linear.append(poly_svm_linear)\nclassifiers_names.append(\"SVM Polynomial kernel\")\nfadings.append(True)\n\nplt.figure(figsize=(15,5))\nplt.subplot(121)\nplt.title(\"SVM Polynomial Kernel - Linear 
Features\")\nplot_model(plt,poly_svm_linear,feats_linear,labels_linear)\n\npoly_kernel_non_linear = sg.create_kernel('PolyKernel', degree=degree, c=1.0)\npoly_kernel_non_linear.init(shogun_feats_non_linear, shogun_feats_non_linear)\npoly_svm_non_linear = sg.create_machine('LibSVM', C1=poly_c, C2=poly_c, \n kernel=poly_kernel_non_linear, labels=shogun_labels_non_linear)\npoly_svm_non_linear.train()\nclassifiers_non_linear.append(poly_svm_non_linear)\n\nplt.subplot(122)\nplt.title(\"SVM Polynomial Kernel - Non Linear Features\")\nplot_model(plt,poly_svm_non_linear,feats_non_linear,labels_non_linear)", "<a id =\"section3\" href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GaussianNaiveBayes.html\">Naive Bayes</a>", "multiclass_labels_linear = shogun_labels_linear.get('labels')\nfor i in range(0,len(multiclass_labels_linear)):\n if multiclass_labels_linear[i] == -1:\n multiclass_labels_linear[i] = 0\n\nmulticlass_labels_non_linear = shogun_labels_non_linear.get('labels')\nfor i in range(0,len(multiclass_labels_non_linear)):\n if multiclass_labels_non_linear[i] == -1:\n multiclass_labels_non_linear[i] = 0\n\n\nshogun_multiclass_labels_linear = sg.MulticlassLabels(multiclass_labels_linear)\nshogun_multiclass_labels_non_linear = sg.MulticlassLabels(multiclass_labels_non_linear)\n\nnaive_bayes_linear = sg.create_machine(\"GaussianNaiveBayes\")\nnaive_bayes_linear.put('features', shogun_feats_linear)\nnaive_bayes_linear.put('labels', shogun_multiclass_labels_linear)\nnaive_bayes_linear.train()\nclassifiers_linear.append(naive_bayes_linear)\nclassifiers_names.append(\"Naive Bayes\")\nfadings.append(False)\n\nplt.figure(figsize=(15,5))\nplt.subplot(121)\nplt.title(\"Naive Bayes - Linear Features\")\nplot_model(plt,naive_bayes_linear,feats_linear,labels_linear,fading=False)\n\nnaive_bayes_non_linear = sg.create_machine(\"GaussianNaiveBayes\")\nnaive_bayes_non_linear.put('features', shogun_feats_non_linear)\nnaive_bayes_non_linear.put('labels', shogun_multiclass_labels_non_linear)\nnaive_bayes_non_linear.train()\nclassifiers_non_linear.append(naive_bayes_non_linear)\n\nplt.subplot(122)\nplt.title(\"Naive Bayes - Non Linear Features\")\nplot_model(plt,naive_bayes_non_linear,feats_non_linear,labels_non_linear,fading=False)", "<a id =\"section4\" href=\"http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1KNN.html\">Nearest Neighbors</a>", "number_of_neighbors = 10\n\ndistances_linear = sg.create_distance('EuclideanDistance')\ndistances_linear.init(shogun_feats_linear, shogun_feats_linear)\nknn_linear = sg.create_machine(\"KNN\", k=number_of_neighbors, distance=distances_linear, \n labels=shogun_labels_linear)\nknn_linear.train()\nclassifiers_linear.append(knn_linear)\nclassifiers_names.append(\"Nearest Neighbors\")\nfadings.append(False)\n\nplt.figure(figsize=(15,5))\nplt.subplot(121)\nplt.title(\"Nearest Neighbors - Linear Features\")\nplot_model(plt,knn_linear,feats_linear,labels_linear,fading=False)\n\ndistances_non_linear = sg.create_distance('EuclideanDistance')\ndistances_non_linear.init(shogun_feats_non_linear, shogun_feats_non_linear)\nknn_non_linear = sg.create_machine(\"KNN\", k=number_of_neighbors, distance=distances_non_linear, \n labels=shogun_labels_non_linear)\nknn_non_linear.train()\nclassifiers_non_linear.append(knn_non_linear)\n\nplt.subplot(122)\nplt.title(\"Nearest Neighbors - Non Linear Features\")\nplot_model(plt,knn_non_linear,feats_non_linear,labels_non_linear,fading=False)", "<a id =\"section5\" 
href=\"http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CLDA.html\">Linear Discriminant Analysis</a>", "gamma = 0.1\n\nlda_linear = sg.create_machine('LDA', gamma=gamma, labels=shogun_labels_linear)\nlda_linear.train(shogun_feats_linear)\nclassifiers_linear.append(lda_linear)\nclassifiers_names.append(\"LDA\")\nfadings.append(True)\n\nplt.figure(figsize=(15,5))\nplt.subplot(121)\nplt.title(\"LDA - Linear Features\")\nplot_model(plt,lda_linear,feats_linear,labels_linear)\n\nlda_non_linear = sg.create_machine('LDA', gamma=gamma, labels=shogun_labels_non_linear)\nlda_non_linear.train(shogun_feats_non_linear)\nclassifiers_non_linear.append(lda_non_linear)\n\nplt.subplot(122)\nplt.title(\"LDA - Non Linear Features\")\nplot_model(plt,lda_non_linear,feats_non_linear,labels_non_linear)", "<a id =\"section6\" href=\"http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1QDA.html\">Quadratic Discriminant Analysis</a>", "qda_linear = sg.create_machine(\"QDA\", labels=shogun_multiclass_labels_linear)\nqda_linear.train(shogun_feats_linear)\nclassifiers_linear.append(qda_linear)\nclassifiers_names.append(\"QDA\")\nfadings.append(False)\n\nplt.figure(figsize=(15,5))\nplt.subplot(121)\nplt.title(\"QDA - Linear Features\")\nplot_model(plt,qda_linear,feats_linear,labels_linear,fading=False)\n\nqda_non_linear = sg.create_machine(\"QDA\", labels=shogun_multiclass_labels_non_linear)\nqda_non_linear.train(shogun_feats_non_linear)\nclassifiers_non_linear.append(qda_non_linear)\n\nplt.subplot(122)\nplt.title(\"QDA - Non Linear Features\")\nplot_model(plt,qda_non_linear,feats_non_linear,labels_non_linear,fading=False)", "<a id =\"section7\" href=\"http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1GaussianProcessBinaryClassification.html\">Gaussian Process</a>\n<a id =\"section7a\">Logit Likelihood model</a>\nShogun's <a href= \"http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1LogitLikelihood.html\">LogitLikelihood</a> and <a href=\"http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1EPInferenceMethod.html\">EPInferenceMethod</a> classes are used.", "# create Gaussian kernel with width = 5.0\nkernel = sg.create_kernel(\"GaussianKernel\", width=5.0)\n# create zero mean function\nzero_mean = sg.create_gp_mean(\"ZeroMean\")\n# create logit likelihood model\nlikelihood = sg.create_gp_likelihood(\"LogitLikelihood\")\n# specify EP approximation inference method\ninference_model_linear = sg.create_gp_inference(\"EPInferenceMethod\",kernel=kernel, \n features=shogun_feats_linear, \n mean_function=zero_mean, \n labels=shogun_labels_linear, \n likelihood_model=likelihood)\n# create and train GP classifier, which uses Laplace approximation\ngaussian_logit_linear = sg.create_gaussian_process(\"GaussianProcessClassification\", inference_method=inference_model_linear)\ngaussian_logit_linear.train()\nclassifiers_linear.append(gaussian_logit_linear)\nclassifiers_names.append(\"Gaussian Process Logit\")\nfadings.append(True)\n\nplt.figure(figsize=(15,5))\nplt.subplot(121)\nplt.title(\"Gaussian Process - Logit - Linear Features\")\nplot_model(plt,gaussian_logit_linear,feats_linear,labels_linear)\n\ninference_model_non_linear = sg.create_gp_inference(\"EPInferenceMethod\", kernel=kernel, \n features=shogun_feats_non_linear, \n mean_function=zero_mean, \n labels=shogun_labels_non_linear, \n likelihood_model=likelihood)\ngaussian_logit_non_linear = sg.create_gaussian_process(\"GaussianProcessClassification\", \n 
inference_method=inference_model_non_linear)\ngaussian_logit_non_linear.train()\nclassifiers_non_linear.append(gaussian_logit_non_linear)\n\nplt.subplot(122)\nplt.title(\"Gaussian Process - Logit - Non Linear Features\")\nplot_model(plt,gaussian_logit_non_linear,feats_non_linear,labels_non_linear)", "<a id =\"section7b\">Probit Likelihood model</a> \nShogun's <a href=\"http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1ProbitLikelihood.html\">ProbitLikelihood</a> class is used.", "likelihood = sg.create_gp_likelihood(\"ProbitLikelihood\")\n\ninference_model_linear = sg.create_gp_inference(\"EPInferenceMethod\", kernel=kernel, \n features=shogun_feats_linear, \n mean_function=zero_mean, \n labels=shogun_labels_linear, \n likelihood_model=likelihood)\ngaussian_probit_linear = sg.create_gaussian_process(\"GaussianProcessClassification\", \n inference_method=inference_model_linear)\ngaussian_probit_linear.train()\nclassifiers_linear.append(gaussian_probit_linear)\nclassifiers_names.append(\"Gaussian Process Probit\")\nfadings.append(True)\n\nplt.figure(figsize=(15,5))\nplt.subplot(121)\nplt.title(\"Gaussian Process - Probit - Linear Features\")\nplot_model(plt,gaussian_probit_linear,feats_linear,labels_linear)\n\ninference_model_non_linear = sg.create_gp_inference(\"EPInferenceMethod\", kernel=kernel, \n features=shogun_feats_non_linear, \n mean_function=zero_mean, \n labels=shogun_labels_non_linear, \n likelihood_model=likelihood)\ngaussian_probit_non_linear = sg.create_gaussian_process(\"GaussianProcessClassification\", \n inference_method=inference_model_non_linear)\ngaussian_probit_non_linear.train()\nclassifiers_non_linear.append(gaussian_probit_non_linear)\n\nplt.subplot(122)\nplt.title(\"Gaussian Process - Probit - Non Linear Features\")\nplot_model(plt,gaussian_probit_non_linear,feats_non_linear,labels_non_linear)", "<a id=\"section8\">Putting It All Together</a>", "figure = plt.figure(figsize=(30,9))\nplt.subplot(2,11,1)\nplot_binary_data(plt,feats_linear, labels_linear)\nfor i in range(0,10):\n plt.subplot(2,11,i+2)\n plt.title(classifiers_names[i])\n plot_model(plt,classifiers_linear[i],feats_linear,labels_linear,fading=fadings[i])\n\nplt.subplot(2,11,12)\nplot_binary_data(plt,feats_non_linear, labels_non_linear)\n\nfor i in range(0,10):\n plt.subplot(2,11,13+i)\n plot_model(plt,classifiers_non_linear[i],feats_non_linear,labels_non_linear,fading=fadings[i])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
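The kernels in the classifier notebook above are created by name (sg.create_kernel("GaussianKernel", width=...), "SigmoidKernel", "PolyKernel"), which hides the underlying formulas. As a rough, self-contained illustration, here is a minimal NumPy sketch of the textbook Gaussian, sigmoid and polynomial kernels, reusing the parameter names from the notebook (width, gamma, coef0, degree, c). This is only a sketch of the standard formulas, not Shogun's implementation; in particular, the assumption that width divides the squared Euclidean distance directly should be checked against the Shogun documentation.

```python
import numpy as np

def gaussian_kernel(X, Y, width=20.0):
    # k(x, y) = exp(-||x - y||^2 / width); `width` mirrors the notebook's argument,
    # but the exact Shogun parameterisation is an assumption.
    sq_dists = (
        np.sum(X ** 2, axis=1)[:, None]
        + np.sum(Y ** 2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / width)

def sigmoid_kernel(X, Y, gamma=1.0, coef0=0.5):
    # k(x, y) = tanh(gamma * <x, y> + coef0)
    return np.tanh(gamma * (X @ Y.T) + coef0)

def poly_kernel(X, Y, degree=4, c=1.0):
    # k(x, y) = (<x, y> + c) ** degree
    return (X @ Y.T + c) ** degree

# Tiny usage example: Gram matrices for five random 2-D points.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 2))
print(gaussian_kernel(A, A).shape)  # (5, 5)
print(sigmoid_kernel(A, A).shape)   # (5, 5)
print(poly_kernel(A, A).shape)      # (5, 5)
```

The choice of kernel and its parameters is what separates the three SVM variants trained above; everything else (the LibSVM machine, the C1/C2 regularisation) stays the same.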
Neuroglycerin/neukrill-net-work
notebooks/model_run_and_result_analyses/Analyse hierarchy-based model with no augmentation or 8 augmentation.ipynb
mit
[ "Checking the model with superclass hierarchy with no augmentation. (It was manually switched off in the .json file)", "cd ..", "Run the modification of check_test_score.py so that it can work with superclass representation.", "import numpy as np\nimport pylearn2.utils\nimport pylearn2.config\nimport theano\nimport neukrill_net.dense_dataset\nimport neukrill_net.utils\nimport sklearn.metrics\nimport argparse\nimport os\nimport pylearn2.config.yaml_parse", "Check which core is free.", "%env THEANO_FLAGS = 'device=gpu3,floatX=float32,base_compiledir=~/.theano/stonesoup3'\n\nverbose = False\naugment = 1\nsettings = neukrill_net.utils.Settings(\"settings.json\")", "Give the path to .json.", "run_settings = neukrill_net.utils.load_run_settings('run_settings/alexnet_based_extra_convlayer_with_superclasses.json', \n settings, force=True)\n\nmodel = pylearn2.utils.serial.load(run_settings['pickle abspath'])\n\n# format the YAML\nyaml_string = neukrill_net.utils.format_yaml(run_settings, settings)\n# load proxied objects\nproxied = pylearn2.config.yaml_parse.load(yaml_string, instantiate=False)\n# pull out proxied dataset\nproxdata = proxied.keywords['dataset']\n# force loading of dataset and switch to test dataset\nproxdata.keywords['force'] = True\nproxdata.keywords['training_set_mode'] = 'test'\nproxdata.keywords['verbose'] = False\n# then instantiate the dataset\ndataset = pylearn2.config.yaml_parse._instantiate(proxdata)\n\nif hasattr(dataset.X, 'shape'):\n N_examples = dataset.X.shape[0]\nelse:\n N_examples = len(dataset.X)\nbatch_size = 500\nwhile N_examples%batch_size != 0:\n batch_size += 1\nn_batches = int(N_examples/batch_size)\n\nmodel.set_batch_size(batch_size)\nX = model.get_input_space().make_batch_theano()\nY = model.fprop(X)\nf = theano.function([X],Y)\n\nimport neukrill_net.encoding as enc\nhier = enc.get_hierarchy()\nlengths = sum([len(array) for array in hier])\ny = np.zeros((N_examples*augment,lengths))\n# get the data specs from the cost function using the model\npcost = proxied.keywords['algorithm'].keywords['cost']\ncost = pylearn2.config.yaml_parse._instantiate(pcost)\ndata_specs = cost.get_data_specs(model)\n\ni = 0 \nfor _ in range(augment):\n # make sequential iterator\n iterator = dataset.iterator(batch_size=batch_size,num_batches=n_batches,\n mode='even_sequential', data_specs=data_specs)\n for batch in iterator:\n if verbose:\n print(\" Batch {0} of {1}\".format(i+1,n_batches*augment))\n y[i*batch_size:(i+1)*batch_size,:] = f(batch[0])\n i += 1", "Best .pkl scores as:", "logloss = sklearn.metrics.log_loss(dataset.y[:, :len(settings.classes)], y[:, :len(settings.classes)])\nprint(\"Log loss: {0}\".format(logloss))", "Recent .pkl scores as: (rerun relevant cells with a different path)", "logloss = sklearn.metrics.log_loss(dataset.y[:, :len(settings.classes)], y[:, :len(settings.classes)])\nprint(\"Log loss: {0}\".format(logloss))\n\n%env THEANO_FLAGS = device=gpu2,floatX=float32,base_compiledir=~/.theano/stonesoup2\n\n%env", "Check the same model with 8 augmentation.", "import numpy as np\nimport pylearn2.utils\nimport pylearn2.config\nimport theano\nimport neukrill_net.dense_dataset\nimport neukrill_net.utils\nimport sklearn.metrics\nimport argparse\nimport os\nimport pylearn2.config.yaml_parse\n\nverbose = False\naugment = 1\nsettings = neukrill_net.utils.Settings(\"settings.json\")\n\nrun_settings = neukrill_net.utils.load_run_settings('run_settings/alexnet_based_extra_convlayer_with_superclasses_aug.json', \n settings, force=True)\n\nmodel = 
pylearn2.utils.serial.load(run_settings['pickle abspath'])\n\n# format the YAML\nyaml_string = neukrill_net.utils.format_yaml(run_settings, settings)\n# load proxied objects\nproxied = pylearn2.config.yaml_parse.load(yaml_string, instantiate=False)\n# pull out proxied dataset\nproxdata = proxied.keywords['dataset']\n# force loading of dataset and switch to test dataset\nproxdata.keywords['force'] = True\nproxdata.keywords['training_set_mode'] = 'test'\nproxdata.keywords['verbose'] = False\n# then instantiate the dataset\ndataset = pylearn2.config.yaml_parse._instantiate(proxdata)\n\nif hasattr(dataset.X, 'shape'):\n N_examples = dataset.X.shape[0]\nelse:\n N_examples = len(dataset.X)\nbatch_size = 500\nwhile N_examples%batch_size != 0:\n batch_size += 1\nn_batches = int(N_examples/batch_size)\n\nmodel.set_batch_size(batch_size)\nX = model.get_input_space().make_batch_theano()\nY = model.fprop(X)\nf = theano.function([X],Y)\n\nimport neukrill_net.encoding as enc\nhier = enc.get_hierarchy()\nlengths = sum([len(array) for array in hier])\ny = np.zeros((N_examples*augment,lengths))\n# get the data specs from the cost function using the model\npcost = proxied.keywords['algorithm'].keywords['cost']\ncost = pylearn2.config.yaml_parse._instantiate(pcost)\ndata_specs = cost.get_data_specs(model)\n\ni = 0 \nfor _ in range(augment):\n # make sequential iterator\n iterator = dataset.iterator(batch_size=batch_size,num_batches=n_batches,\n mode='even_sequential', data_specs=data_specs)\n for batch in iterator:\n if verbose:\n print(\" Batch {0} of {1}\".format(i+1,n_batches*augment))\n y[i*batch_size:(i+1)*batch_size,:] = f(batch[0])\n i += 1", "Best .pkl scored as:", "logloss = sklearn.metrics.log_loss(dataset.y[:, :len(settings.classes)], y[:, :len(settings.classes)])\nprint(\"Log loss: {0}\".format(logloss))", "Strange. Not as good as we hoped. 
Is there a problem with augmentation?\nLet's plot the nll.", "import pylearn2.utils\nimport pylearn2.config\nimport theano\nimport neukrill_net.dense_dataset\nimport neukrill_net.utils\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\n#import holoviews as hl\n#load_ext holoviews.ipython\nimport sklearn.metrics\n\nm = pylearn2.utils.serial.load(\n \"/disk/scratch/neuroglycerin/models/alexnet_based_extra_convlayer_with_superclasses_aug_recent.pkl\")\n\nchannel = m.monitor.channels[\"valid_y_y_1_nll\"]\nplt.plot(channel.example_record,channel.val_record)", "Looks like it's pretty stable at 4 and had this random strange glitch which gave the best result.\nLook at the best pkl of the none-aug model again: (just to confirm that it was indeed good)", "import numpy as np\nimport pylearn2.utils\nimport pylearn2.config\nimport theano\nimport neukrill_net.dense_dataset\nimport neukrill_net.utils\nimport sklearn.metrics\nimport argparse\nimport os\nimport pylearn2.config.yaml_parse\n\nverbose = False\naugment = 1\nsettings = neukrill_net.utils.Settings(\"settings.json\")\n\nrun_settings = neukrill_net.utils.load_run_settings('run_settings/alexnet_based_extra_convlayer_with_superclasses.json', \n settings, force=True)\n\nmodel = pylearn2.utils.serial.load(run_settings['pickle abspath'])\n\n# format the YAML\nyaml_string = neukrill_net.utils.format_yaml(run_settings, settings)\n# load proxied objects\nproxied = pylearn2.config.yaml_parse.load(yaml_string, instantiate=False)\n# pull out proxied dataset\nproxdata = proxied.keywords['dataset']\n# force loading of dataset and switch to test dataset\nproxdata.keywords['force'] = True\nproxdata.keywords['training_set_mode'] = 'test'\nproxdata.keywords['verbose'] = False\n# then instantiate the dataset\ndataset = pylearn2.config.yaml_parse._instantiate(proxdata)\n\nif hasattr(dataset.X, 'shape'):\n N_examples = dataset.X.shape[0]\nelse:\n N_examples = len(dataset.X)\nbatch_size = 500\nwhile N_examples%batch_size != 0:\n batch_size += 1\nn_batches = int(N_examples/batch_size)\n\nmodel.set_batch_size(batch_size)\nX = model.get_input_space().make_batch_theano()\nY = model.fprop(X)\nf = theano.function([X],Y)\n\nimport neukrill_net.encoding as enc\nhier = enc.get_hierarchy()\nlengths = sum([len(array) for array in hier])\ny = np.zeros((N_examples*augment,lengths))\n# get the data specs from the cost function using the model\npcost = proxied.keywords['algorithm'].keywords['cost']\ncost = pylearn2.config.yaml_parse._instantiate(pcost)\ndata_specs = cost.get_data_specs(model)\n\ni = 0 \nfor _ in range(augment):\n # make sequential iterator\n iterator = dataset.iterator(batch_size=batch_size,num_batches=n_batches,\n mode='even_sequential', data_specs=data_specs)\n for batch in iterator:\n if verbose:\n print(\" Batch {0} of {1}\".format(i+1,n_batches*augment))\n y[i*batch_size:(i+1)*batch_size,:] = f(batch[0])\n i += 1\n\nlogloss = sklearn.metrics.log_loss(dataset.y[:, :len(settings.classes)], y[:, :len(settings.classes)])\nprint(\"Log loss: {0}\".format(logloss))", "It was. Annoying. Let's plot the nll too:", "m = pylearn2.utils.serial.load(\n \"/disk/scratch/neuroglycerin/models/alexnet_based_extra_convlayer_with_superclasses.pkl\")\n\nchannel = m.monitor.channels[\"valid_y_y_1_nll\"]\nplt.plot(channel.example_record,channel.val_record)", "That's way nicer. Looks like it was working fine.\nNow we are going to modify the dense dataset class so that for each image, the same exact image is produced. 
This way we will mimic the augmentation but will effectively run on exactly the same dataset. We can use the same .json and .yaml files too." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
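The single evaluation number in the notebook above comes from sklearn.metrics.log_loss applied to the first len(settings.classes) columns of the one-hot targets and of the network output. For reference, here is a small NumPy sketch of what multiclass log loss computes. The clipping constant and the row renormalisation mirror scikit-learn's documented behaviour, but treat those details as assumptions and compare against sklearn.metrics.log_loss on the same arrays if the exact value matters.

```python
import numpy as np

def multiclass_log_loss(y_true, y_prob, eps=1e-15):
    """Mean negative log-likelihood of the true class.

    y_true: (N, K) one-hot target matrix.
    y_prob: (N, K) predicted class probabilities.
    """
    y_prob = np.clip(y_prob, eps, 1.0 - eps)             # avoid log(0)
    y_prob = y_prob / y_prob.sum(axis=1, keepdims=True)  # renormalise each row
    return float(-np.mean(np.sum(y_true * np.log(y_prob), axis=1)))

# Toy check: three samples, three classes, correct class always most probable.
y_true = np.eye(3)
y_prob = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.2, 0.2, 0.6]])
print(multiclass_log_loss(y_true, y_prob))  # roughly 0.36
```

Because only the first len(settings.classes) columns are passed in, the extra superclass outputs of the hierarchy model do not enter the score; only the leaf-class probabilities are compared against the targets.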
tiagoantao/biopython-notebook
notebooks/04 - Sequence Annotation objects.ipynb
mit
[ "Source of the materials: Biopython Tutorial and Cookbook (adapted)\nSequence annotation objects\nThe previous notebook introduced the sequence classes. Immediately ``above'' the Seq class is the Sequence Record or SeqRecord class, defined in the Bio.SeqRecord module. This class allows higher level features such as identifiers and features (as SeqFeature objects) to be associated with the sequence, and is used throughout the sequence input/output interface Bio.SeqIO described fully in another notebook.\nIf you are only going to be working with simple data like FASTA files, you can probably skip this chapter\nfor now. If on the other hand you are going to be using richly annotated sequence data, say from GenBank\nor EMBL files, this information is quite important.\nWhile this chapter should cover most things to do with the \\verb|SeqRecord| and \\verb|SeqFeature| objects in this chapter, you may also want to read the SeqRecord wiki page (http://biopython.org/wiki/SeqRecord), and the built in documentation (also online -- http://biopython.org/DIST/docs/api/Bio.SeqRecord.SeqRecord-class.html - SeqRecord and http://biopython.org/DIST/docs/api/Bio.SeqFeature.SeqFeature-class.html - SeqFeature):", "from Bio.SeqRecord import SeqRecord", "The SeqRecord Object\nThe SeqRecord (Sequence Record) class is defined in the Bio.SeqRecord module. This class allows higher level features such as identifiers and features to be associated with a sequence, and is the basic data type for the Bio.SeqIO sequence input/output interface.\nThe SeqRecord class itself is quite simple, and offers the following information as attributes:\n\n\n.seq - The sequence itself, typically a Seq object.\n\n\n.id - The primary ID used to identify the sequence - a string. In most cases this is something like an accession number.\n\n\n.name - A 'common' name/id for the sequence - a string. In some cases this will be the same as the accession number, but it could also be a clone name. I think of this as being analogous to the LOCUS id in a GenBank record.\n\n\n.description - A human readable description or expressive name for the sequence - a string.\n\n\n.letter_annotations - Holds per-letter-annotations using a (restricted) dictionary of additional information about the letters in the sequence. The keys are the name of the information, and the information is contained in the value as a Python sequence (i.e. a list, tuple or string) with the same length as the sequence itself. This is often used for quality scores or secondary structure information (e.g. from Stockholm/PFAM alignment files).\n\n\n.annotations - A dictionary of additional information about the sequence. The keys are the name of the information, and the information is contained in the value. This allows the addition of more 'unstructured' information to the sequence.\n\n\n.features - A list of SeqFeature objects with more structured information about the features on a sequence (e.g. position of genes on a genome, or domains on a protein sequence).\n\n\n.dbxrefs - A list of database cross-references as strings.\n\n\nCreating a SeqRecord\nUsing a SeqRecord object is not very complicated, since all of the\ninformation is presented as attributes of the class. Usually you won't create\na SeqRecord 'by hand', but instead use Bio.SeqIO to read in a\nsequence file for you and the examples\nbelow). 
However, creating SeqRecord can be quite simple.\nSeqRecord objects from scratch\nTo create a SeqRecord at a minimum you just need a Seq object:", "from Bio.Seq import Seq\nsimple_seq = Seq(\"GATC\")\nsimple_seq_r = SeqRecord(simple_seq)", "Additionally, you can also pass the id, name and description to the initialization function, but if not they will be set as strings indicating they are unknown, and can be modified subsequently:", "simple_seq_r.id\n\nsimple_seq_r.id = \"AC12345\"\nsimple_seq_r.description = \"Made up sequence I wish I could write a paper about\"\nprint(simple_seq_r.description)\nsimple_seq_r.seq\nprint(simple_seq_r.seq)", "Including an identifier is very important if you want to output your SeqRecord to a file. You would normally include this when creating the object:", "simple_seq = Seq(\"GATC\")\nsimple_seq_r = SeqRecord(simple_seq, id=\"AC12345\")", "As mentioned above, the SeqRecord has an dictionary attribute annotations. This is used\nfor any miscellaneous annotations that doesn't fit under one of the other more specific attributes.\nAdding annotations is easy, and just involves dealing directly with the annotation dictionary:", "simple_seq_r.annotations[\"evidence\"] = \"None. I just made it up.\"\nprint(simple_seq_r.annotations)\nprint(simple_seq_r.annotations[\"evidence\"])", "Working with per-letter-annotations is similar, letter_annotations is a\ndictionary like attribute which will let you assign any Python sequence (i.e.\na string, list or tuple) which has the same length as the sequence:", "simple_seq_r.letter_annotations[\"phred_quality\"] = [40, 40, 38, 30]\nprint(simple_seq_r.letter_annotations)\nprint(simple_seq_r.letter_annotations[\"phred_quality\"])", "The dbxrefs and features attributes are just Python lists, and\nshould be used to store strings and SeqFeature objects (discussed later) respectively.\nSeqRecord objects from FASTA files\nThis example uses a fairly large FASTA file containing the whole sequence for \\textit{Yersinia pestis biovar Microtus} str. 91001 plasmid pPCP1, originally downloaded from the NCBI. This file is included with the Biopython unit tests under the GenBank folder, or online (http://biopython.org/SRC/biopython/Tests/GenBank/NC_005816.fna) from our website.\nThe file starts like this - and you can check there is only one record present (i.e. only one line starting with a greater than symbol):\n&gt;gi|45478711|ref|NC_005816.1| Yersinia pestis biovar Microtus ... pPCP1, complete sequence\nTGTAACGAACGGTGCAATAGTGATCCACACCCAACGCCTGAAATCAGATCCAGGGGGTAATCTGCTCTCC\n...\n\nIn a previous notebook you will have seen the function Bio.SeqIO.parse\nused to loop over all the records in a file as SeqRecord objects. The Bio.SeqIO module\nhas a sister function for use on files which contain just one record which we'll use here:", "from Bio import SeqIO\nrecord = SeqIO.read(\"data/NC_005816.fna\", \"fasta\")\nrecord", "Now, let's have a look at the key attributes of this SeqRecord\nindividually - starting with the seq attribute which gives you a\nSeq object:", "record.seq", "Next, the identifiers and description:", "record.id\n\nrecord.name\nrecord.description", "As you can see above, the first word of the FASTA record's title line (after\nremoving the greater than symbol) is used for both the id and\nname attributes. The whole title line (after removing the greater than\nsymbol) is used for the record description. 
This is deliberate, partly for\nbackwards compatibility reasons, but it also makes sense if you have a FASTA\nfile like this:\n&gt;Yersinia pestis biovar Microtus str. 91001 plasmid pPCP1\nTGTAACGAACGGTGCAATAGTGATCCACACCCAACGCCTGAAATCAGATCCAGGGGGTAATCTGCTCTCC\n...\n\nNote that none of the other annotation attributes get populated when reading a\nFASTA file:", "record.dbxrefs\n\nrecord.annotations\n\nrecord.letter_annotations\n\nrecord.features", "In this case our example FASTA file was from the NCBI, and they have a fairly well defined set of conventions for formatting their FASTA lines. This means it would be possible to parse this information and extract the GI number and accession for example. However, FASTA files from other sources vary, so this isn't possible in general.\nSeqRecord objects from GenBank files\nAs in the previous example, we're going to look at the whole sequence for Yersinia pestis biovar Microtus str. 91001 plasmid pPCP1, originally downloaded from the NCBI, but this time as a GenBank file.\nThis file contains a single record (i.e. only one LOCUS line) and starts:\nLOCUS NC_005816 9609 bp DNA circular BCT 21-JUL-2008\nDEFINITION Yersinia pestis biovar Microtus str. 91001 plasmid pPCP1, complete\n sequence.\nACCESSION NC_005816\nVERSION NC_005816.1 GI:45478711\nPROJECT GenomeProject:10638\n...\n\nAgain, we'll use Bio.SeqIO to read this file in, and the code is almost identical to that for used above for the FASTA file:", "record = SeqIO.read(\"data/NC_005816.gb\", \"genbank\")\nrecord", "You should be able to spot some differences already! But taking the attributes individually,\nthe sequence string is the same as before, but this time Bio.SeqIO has been able to automatically assign a more specific alphabet:", "record.seq", "The name comes from the LOCUS line, while the \\verb|id| includes the version suffix.\nThe description comes from the DEFINITION line:", "record.id\n\nrecord.name\n\nrecord.description", "GenBank files don't have any per-letter annotations:", "record.letter_annotations", "Most of the annotations information gets recorded in the \\verb|annotations| dictionary, for example:", "len(record.annotations)\n\nrecord.annotations[\"source\"]", "The dbxrefs list gets populated from any PROJECT or DBLINK lines:", "record.dbxrefs", "Finally, and perhaps most interestingly, all the entries in the features table (e.g. the genes or CDS features) get recorded as SeqFeature objects in the features list.", "len(record.features)", "Feature, location and position objects\nSeqFeature objects\nSequence features are an essential part of describing a sequence. Once you get beyond the sequence itself, you need some way to organize and easily get at the more 'abstract' information that is known about the sequence. While it is probably impossible to develop a general sequence feature class that will cover everything, the Biopython SeqFeature class attempts to encapsulate as much of the information about the sequence as possible. The design is heavily based on the GenBank/EMBL feature tables, so if you understand how they look, you'll probably have an easier time grasping the structure of the Biopython classes.\nThe key idea about each SeqFeature object is to describe a region on a parent sequence, typically a SeqRecord object. 
That region is described with a location object, typically a range between two positions (see below).\nThe SeqFeature class has a number of attributes, so first we'll list them and their general features, and then later in the chapter work through examples to show how this applies to a real life example. The attributes of a SeqFeature are:\n\n\n.type - This is a textual description of the type of feature (for instance, this will be something like 'CDS' or 'gene').\n\n\n.location - The location of the SeqFeature on the sequence\n that you are dealing with. The\n SeqFeature delegates much of its functionality to the location\n object, and includes a number of shortcut attributes for properties\n of the location:\n\n\n.ref - shorthand for .location.ref - any (different)\n reference sequence the location is referring to. Usually just None.\n\n\n.ref_db - shorthand for .location.ref_db - specifies\n the database any identifier in .ref refers to. Usually just None.\n\n\n.strand - shorthand for .location.strand - the strand on\n the sequence that the feature is located on. For double stranded nucleotide\n sequences this may either be 1 for the top strand, -1 for the bottom\n strand, 0 if the strand is important but is unknown, or None\n if it doesn't matter. This is None for proteins, or single stranded sequences.\n\n\n.qualifiers - This is a Python dictionary of additional information about the feature. The key is some kind of terse one-word description of what the information contained in the value is about, and the value is the actual information. For example, a common key for a qualifier might be 'evidence' and the value might be 'computational (non-experimental)'. This is just a way to let the person who is looking at the feature know that it has not been experimentally (i.e. in a wet lab) confirmed. Note that the value will be a list of strings (even when there is only one string). This is a reflection of the feature tables in GenBank/EMBL files.\n\n\n.sub_features - This used to be used to represent features with complicated locations like 'joins' in GenBank/EMBL files. This has been deprecated with the introduction of the CompoundLocation object, and should now be ignored.\n\n\nPositions and locations\nThe key idea about each SeqFeature object is to describe a\nregion on a parent sequence, for which we use a location object,\ntypically describing a range between two positions. To try to\nclarify the terminology we're using:\n\n\nposition - This refers to a single position on a sequence,\n which may be fuzzy or not. For instance, 5, 20, <100 and >200 are all positions.\n\n\nlocation - A location is a region of sequence bounded by\n some positions. For instance 5..20 (i.e. 5 to 20) is a location.\n\n\nI just mention this because sometimes I get confused between the two.\nFeatureLocation object\nUnless you work with eukaryotic genes, most SeqFeature locations are\nextremely simple - you just need start and end coordinates and a strand.\nThat's essentially all the basic FeatureLocation object does.\nIn practice, of course, things can be more complicated. First of all\nwe have to handle compound locations made up of several regions.\nSecondly, the positions themselves may be fuzzy (inexact).\nCompoundLocation object\nBiopython 1.62 introduced the CompoundLocation as part of\na restructuring of how complex locations made up of multiple regions\nare represented.\nThe main usage is for handling 'join' locations in EMBL/GenBank files.\nFuzzy Positions\nSo far we've only used simple positions. 
One complication in dealing\nwith feature locations comes in the positions themselves.\nIn biology many times things aren't entirely certain\n(as much as us wet lab biologists try to make them certain!). For\ninstance, you might do a dinucleotide priming experiment and discover\nthat the start of mRNA transcript starts at one of two sites. This\nis very useful information, but the complication comes in how to\nrepresent this as a position. To help us deal with this, we have\nthe concept of fuzzy positions. Basically there are several types\nof fuzzy positions, so we have five classes do deal with them:\n\n\nExactPosition - As its name suggests, this class represents a position which is specified as exact along the sequence. This is represented as just a number, and you can get the position by looking at the position attribute of the object.\n\n\nBeforePosition - This class represents a fuzzy position\n that occurs prior to some specified site. In GenBank/EMBL notation,\n this is represented as something like <13, signifying that\n the real position is located somewhere less than 13. To get\n the specified upper boundary, look at the position\n attribute of the object.\n\n\nAfterPosition - Contrary to BeforePosition, this\n class represents a position that occurs after some specified site.\n This is represented in GenBank as >13, and like\n BeforePosition, you get the boundary number by looking\n at the position attribute of the object.\n\n\nWithinPosition - Occasionally used for GenBank/EMBL locations,\n this class models a position which occurs somewhere between two\n specified nucleotides. In GenBank/EMBL notation, this would be\n represented as (1.5), to represent that the position is somewhere\n within the range 1 to 5. To get the information in this class you\n have to look at two attributes. The position attribute\n specifies the lower boundary of the range we are looking at, so in\n our example case this would be one. The extension attribute\n specifies the range to the higher boundary, so in this case it\n would be 4. So object.position is the lower boundary and\n object.position + object.extension is the upper boundary.\n\n\nOneOfPosition - Occasionally used for GenBank/EMBL locations,\n this class deals with a position where several possible values exist,\n for instance you could use this if the start codon was unclear and\n there where two candidates for the start of the gene. Alternatively,\n that might be handled explicitly as two related gene features.\n\n\nUnknownPosition - This class deals with a position of unknown\n location. This is not used in GenBank/EMBL, but corresponds to the `?'\n feature coordinate used in UniProt.\n\n\nHere's an example where we create a location with fuzzy end points:", "from Bio import SeqFeature\nstart_pos = SeqFeature.AfterPosition(5)\nend_pos = SeqFeature.BetweenPosition(9, left=8, right=9)\nmy_location = SeqFeature.FeatureLocation(start_pos, end_pos)", "Note that the details of some of the fuzzy-locations changed in Biopython 1.59,\nin particular for BetweenPosition and WithinPosition you must now make it explicit\nwhich integer position should be used for slicing etc. 
For a start position this\nis generally the lower (left) value, while for an end position this would generally\nbe the higher (right) value.\nIf you print out a FeatureLocation object, you can get a nice representation of the information:", "print(my_location)", "We can access the fuzzy start and end positions using the start and end attributes of the location:", "my_location.start\n\nprint(my_location.start)\n\nmy_location.end\n\nprint(my_location.end)", "If you don't want to deal with fuzzy positions and just want numbers,\nthey are actually subclasses of integers so should work like integers:", "int(my_location.start) \n\nint(my_location.end)", "For compatibility with older versions of Biopython you can ask for the\n\\verb|nofuzzy_start| and \\verb|nofuzzy_end| attributes of the location\nwhich are plain integers:", "my_location.nofuzzy_start\n\nmy_location.nofuzzy_end", "Notice that this just gives you back the position attributes of the fuzzy locations.\nSimilarly, to make it easy to create a position without worrying about fuzzy positions, you can just pass in numbers to the FeaturePosition constructors, and you'll get back out ExactPosition objects:", "exact_location = SeqFeature.FeatureLocation(5, 9)\nprint(exact_location)\n\nexact_location.start\n\nprint(int(exact_location.start))\n\nexact_location.nofuzzy_start", "That is most of the nitty gritty about dealing with fuzzy positions in Biopython.\nIt has been designed so that dealing with fuzziness is not that much more\ncomplicated than dealing with exact positions, and hopefully you find that true!\nLocation testing\nYou can use the Python keyword in with a SeqFeature or location\nobject to see if the base/residue for a parent coordinate is within the\nfeature/location or not.\nFor example, suppose you have a SNP of interest and you want to know which\nfeatures this SNP is within, and lets suppose this SNP is at index 4350\n(Python counting!). Here is a simple brute force solution where we just\ncheck all the features one by one in a loop:", "my_snp = 4350\nrecord = SeqIO.read(\"data/NC_005816.gb\", \"genbank\")\nfor feature in record.features:\n if my_snp in feature:\n print(\"%s %s\" % (feature.type, feature.qualifiers.get('db_xref')))", "Note that gene and CDS features from GenBank or EMBL files defined with joins\nare the union of the exons -- they do not cover any introns.\nSequence described by a feature or location\nA SeqFeature or location object doesn't directly contain a sequence, instead the location describes how to get this from the parent sequence. For example consider a (short) gene sequence with location 5:18 on the reverse strand, which in GenBank/EMBL notation using 1-based counting would be complement(6..18), like this:", "from Bio.SeqFeature import SeqFeature, FeatureLocation\nseq = Seq(\"ACCGAGACGGCAAAGGCTAGCATAGGTATGAGACTTCCTTCCTGCCAGTGCTGAGGAACTGGGAGCCTAC\")\nfeature = SeqFeature(FeatureLocation(5, 18), type=\"gene\", strand=-1)", "You could take the parent sequence, slice it to extract 5:18, and then take the reverse complement.\nIf you are using Biopython 1.59 or later, the feature location's start and end are integer like so this works:", "feature_seq = seq[feature.location.start:feature.location.end].reverse_complement()\nprint(feature_seq)", "This is a simple example so this isn't too bad -- however once you have to deal with compound features (joins) this is rather messy. 
Instead, the SeqFeature object has an extract method to take care of all this (and since Biopython 1.78 it can handle trans-splicing by supplying a dictionary of referenced sequences):", "feature_seq = feature.extract(seq) \nprint(feature_seq)", "The length of a SeqFeature or location matches\nthat of the region of sequence it describes.", "print(len(feature_seq))\nprint(len(feature))\nprint(len(feature.location))", "For simple FeatureLocation objects the length is just the difference between the start and end positions. However, for a CompoundLocation the length is the sum of the constituent regions.\nComparison\nThe SeqRecord objects can be very complex, but here’s a simple example:", "from Bio.SeqRecord import SeqRecord\nrecord1 = SeqRecord(Seq(\"ACGT\"), id=\"test\")\nrecord2 = SeqRecord(Seq(\"ACGT\"), id=\"test\")", "What happens when you try to compare these “identical” records?", "record1 == record2", "Perhaps surprisingly, older versions of Biopython would use Python’s default object comparison for the SeqRecord, meaning record1 == record2 would only return True if these variables pointed at the same object in memory. In this example, record1 == record2 would have returned False!", "record1 == record2 # on old versions of Biopython!", "False\nAs of Biopython 1.67, SeqRecord comparison like record1 == record2 will instead raise an explicit error to avoid people being caught out by this:", "record1 == record2", "Instead you should check the attributes you are interested in, for example the identifier and the sequence:", "record1.id == record2.id\n\nrecord1.seq == record2.seq", "Beware that comparing complex objects quickly gets complicated.\nReferences\nAnother common annotation related to a sequence is a reference to a journal or other published work dealing with the sequence. We have a fairly simple way of representing a Reference in Biopython -- we have a Bio.SeqFeature.Reference class that stores the relevant information about a reference as attributes of an object.\nThe attributes include things that you would expect to see in a reference like journal, title and authors. Additionally, it can also hold the medline_id and pubmed_id and a comment about the reference. These are all accessed simply as attributes of the object.\nA reference also has a location object so that it can specify a particular location on the sequence that the reference refers to. For instance, you might have a journal that is dealing with a particular gene located on a BAC, and want to specify that it only refers to this position exactly. The location is a potentially fuzzy location.\nAny reference objects are stored as a list in the SeqRecord object's annotations dictionary under the key 'references'.\nThat's all there is to it. 
References are meant to be easy to deal with, and hopefully general enough to cover lots of usage cases.\nThe format method\nThe format method of the SeqRecord class gives a string\ncontaining your record formatted using one of the output file formats\nsupported by Bio.SeqIO, such as FASTA:", "record = SeqRecord(\n Seq(\n \"MMYQQGCFAGGTVLRLAKDLAENNRGARVLVVCSEITAVTFRGPSETHLDSMVGQALFGD\"\n \"GAGAVIVGSDPDLSVERPLYELVWTGATLLPDSEGAIDGHLREVGLTFHLLKDVPGLISK\"\n \"NIEKSLKEAFTPLGISDWNSTFWIAHPGGPAILDQVEAKLGLKEEKMRATREVLSEYGNM\"\n \"SSAC\"\n ),\n id=\"gi|14150838|gb|AAK54648.1|AF376133_1\",\n description=\"chalcone synthase [Cucumis sativus]\",\n)\nprint(record.format(\"fasta\"))", "This format method takes a single mandatory argument, a lower case string which is\nsupported by Bio.SeqIO as an output format.\nHowever, some of the file formats Bio.SeqIO can write to require more than\none record (typically the case for multiple sequence alignment formats), and thus won't\nwork via this format() method.\nSlicing a SeqRecord\nYou can slice a SeqRecord, to give you a new SeqRecord covering just\npart of the sequence. What is important\nhere is that any per-letter annotations are also sliced, and any features which fall\ncompletely within the new sequence are preserved (with their locations adjusted).\nFor example, taking the same GenBank file used earlier:", "record = SeqIO.read(\"data/NC_005816.gb\", \"genbank\")\nprint(record)\n\nlen(record)\n\nlen(record.features)", "For this example we're going to focus in on the pim gene, YP_pPCP05.\nIf you have a look at the GenBank file directly you'll find this gene/CDS has\nlocation string 4343..4780, or in Python counting 4342:4780.\nFrom looking at the file you can work out that these are the twelfth and\nthirteenth entries in the file, so in Python zero-based counting they are\nentries 11 and 12 in the features list:", "print(record.features[20])\n\nprint(record.features[21])", "Let's slice this parent record from 4300 to 4800 (enough to include the pim\ngene/CDS), and see how many features we get:", "sub_record = record[4300:4800]\nsub_record\n\nlen(sub_record)\n\nlen(sub_record.features)", "Our sub-record just has two features, the gene and CDS entries for YP_pPCP05:", "print(sub_record.features[0])\n\nprint(sub_record.features[1])", "Notice that their locations have been adjusted to reflect the new parent sequence!\nWhile Biopython has done something sensible and hopefully intuitive with the features\n(and any per-letter annotation), for the other annotation it is impossible to know if\nthis still applies to the sub-sequence or not. To avoid guessing, the annotations\nand dbxrefs are omitted from the sub-record, and it is up to you to transfer\nany relevant information as appropriate.", "print(sub_record.annotations)\n\nprint(sub_record.dbxrefs)", "The same point could be made about the record id, name\nand description, but for practicality these are preserved:", "print(sub_record.id)\n\nprint(sub_record.name)\n\nprint(sub_record.description)", "This illustrates the problem nicely though, our new sub-record is\nnot the complete sequence of the plasmid, so the description is wrong!\nLet's fix this and then view the sub-record as a reduced FASTA file using\nthe format method described above:", "sub_record.description =\"Yersinia pestis biovar Microtus str. 
91001 plasmid pPCP1, partial.\"\nprint(sub_record.format(\"fasta\"))", "Adding SeqRecord objects\nYou can add SeqRecord objects together, giving a new SeqRecord.\nWhat is important here is that any common\nper-letter annotations are also added, all the features are preserved (with their\nlocations adjusted), and any other common annotation is also kept (like the id, name\nand description).\nFor an example with per-letter annotation, we'll use the first record in a\nFASTQ file.", "record = next(SeqIO.parse(\"data/example.fastq\", \"fastq\"))\nprint(len(record))\n\nprint(record.seq)\n\nprint(record.letter_annotations[\"phred_quality\"])", "Let's suppose this was Roche 454 data, and that from other information\nyou think the TTT should be only TT. We can make a new edited\nrecord by first slicing the SeqRecord before and after the 'extra'\nthird T:", "left = record[:20]\nprint(left.seq)\n\nprint(left.letter_annotations[\"phred_quality\"])\n\nright = record[21:]\nprint(right.seq)\n\nprint(right.letter_annotations[\"phred_quality\"])", "Now add the two parts together:", "edited = left + right\nprint(len(edited))\n\nprint(edited.seq)\n\nprint(edited.letter_annotations[\"phred_quality\"])", "Easy and intuitive? We hope so! You can make this shorter with just:", "edited = record[:20] + record[21:]", "Now, for an example with features, we'll use a GenBank file.\nSuppose you have a circular genome:", "record = SeqIO.read(\"data/NC_005816.gb\", \"genbank\")\nprint(record)", "First, check the length, the number of features, the database cross references and the annotation keys of the original record:", "print(len(record))\n\nprint(len(record.features))\n\nprint(record.dbxrefs)\n\nprint(record.annotations.keys())", "You can shift the origin like this:", "shifted = record[2000:] + record[:2000]\nprint(shifted)\n\nprint(len(shifted))", "Note that this isn't perfect in that some annotation like the database cross references\nand one of the features (the source feature) have been lost:", "print(len(shifted.features))\n\nprint(shifted.dbxrefs)\n\nprint(shifted.annotations.keys())", "This is because the SeqRecord slicing step is cautious in what annotation\nit preserves (erroneously propagating annotation can cause major problems). If\nyou want to keep the database cross references or the annotations dictionary,\nthis must be done explicitly:", "shifted.dbxrefs = record.dbxrefs[:]\nshifted.annotations = record.annotations.copy()\nprint(shifted.dbxrefs)\n\nprint(shifted.annotations.keys())", "Also note that in an example like this, you should probably change the record\nidentifiers since the NCBI references refer to the original unmodified\nsequence.\nReverse-complementing SeqRecord objects\nOne of the new features in Biopython 1.57 was the SeqRecord object's\nreverse_complement method. This tries to balance ease of use with worries\nabout what to do with the annotation in the reverse complemented record.\nFor the sequence, this uses the Seq object's reverse complement method. Any\nfeatures are transferred with the location and strand recalculated. Likewise\nany per-letter-annotation is also copied but reversed (which makes sense for\ntypical examples like quality scores). 
However, transfer of most annotation\nis problematical.\nFor instance, if the record ID was an accession, that accession should not really\napply to the reverse complemented sequence, and transferring the identifier by\ndefault could easily cause subtle data corruption in downstream analysis.\nTherefore by default, the SeqRecord's id, name, description, annotations\nand database cross references are all not transferred by default.\nThe SeqRecord object's reverse_complement method takes a number\nof optional arguments corresponding to properties of the record. Setting these\narguments to True means copy the old values, while False means\ndrop the old values and use the default value. You can alternatively provide\nthe new desired value instead.\nConsider this example record:", "record = SeqIO.read(\"data/NC_005816.gb\", \"genbank\")\nprint(\"%s %i %i %i %i\" % (record.id, len(record), len(record.features), len(record.dbxrefs), len(record.annotations)))", "Here we take the reverse complement and specify a new identifier - but notice\nhow most of the annotation is dropped (but not the features):", "rc = record.reverse_complement(id=\"TESTING\")\nprint(\"%s %i %i %i %i\" % (rc.id, len(rc), len(rc.features), len(rc.dbxrefs), len(rc.annotations)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
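To tie together the SeqRecord pieces covered in the notebook above (creating records, attaching SeqFeature objects, extract, slicing and reverse_complement), here is a short self-contained sketch. The sequence, coordinates and qualifier values are made up for illustration, and only APIs already shown in the notebook are used; depending on the Biopython version, the strand can also be given to SeqFeature directly instead of to FeatureLocation, as the notebook does.

```python
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.SeqFeature import SeqFeature, FeatureLocation

# A made-up 30 bp record with one reverse-strand "gene" spanning 5:20 (Python coordinates).
record = SeqRecord(
    Seq("ACCGAGACGGCAAAGGCTAGCATAGGTATG"),
    id="demo1",
    description="toy record for illustrating SeqFeature handling",
)
gene = SeqFeature(
    FeatureLocation(5, 20, strand=-1),
    type="gene",
    qualifiers={"locus_tag": ["demo_0001"]},  # hypothetical qualifier value
)
record.features.append(gene)

# extract() takes care of the strand (reverse complement of the 5:20 slice).
print(gene.extract(record.seq))

# Slicing keeps features that fall entirely inside the slice, with shifted coordinates.
sub = record[3:25]
print(len(sub.features), sub.features[0].location)

# reverse_complement() flips feature locations; the id is only kept if we supply one.
rc = record.reverse_complement(id="demo1_rc")
print(rc.id, rc.features[0].location)
```

As with the plasmid examples above, the sliced and reverse-complemented records deliberately drop most record-level annotation, so anything like annotations or dbxrefs has to be copied across explicitly if it still applies.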
erickpeirson/statistical-computing
Hamiltonian MCMC (HMC).ipynb
cc0-1.0
[ "%pylab inline\n\nfrom scipy.stats import beta, multivariate_normal, uniform, norm\nfrom scipy.misc import derivative\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport copy\nimport pandas as pd", "Use the (local) shape of the distribution to make smarter proposals.\nHamiltonian: quantity that is conserved regardless of position in space.\nMetaphor: hockey puck sliding on a (non-flat) surface. Want to be able to describe the state of the puck. The state has two quantities: \n\nCurrent position, $q$\nMomentum, $p$\n\nHamiltonian: $H(q, p) = U(q) + K(p)$\n\n$U(q)$ -- potential energy\n$K(p)$ -- kinetic energy", "dtarget = lambda x: multivariate_normal.pdf(x, mean=(3, 10), cov=[[1, 0], [0, 1]])\nx1 = np.linspace(-6, 12, 101)\nx2 = np.linspace(-11, 31, 101)\nX, Y = np.meshgrid(x1, x2)\nZ = np.array(map(dtarget, zip(X.flat, Y.flat))).reshape(101, 101)\n\nplt.figure(figsize=(10,7))\nplt.contour(X, Y, Z)\nplt.xlim(0, 6)\nplt.ylim(7, 13)\nplt.show()", "The surface of interest will be $U(q) = -\\log{f(q)}$\n$K(p) = \\frac{p^T p}{2m}$, where $m$ = mass of the puck.\nPosition over time is a function of momentum: \n$\\frac{dq_i}{dt} = \\frac{p_i}{m}$\nChange in momentum over time is a function of surface gradient:\n$\\frac{dp_i}{dt} = -\\frac{\\delta U}{\\delta q_i}$\nLeap-frog algorithm\n$ p_i(t + \\frac{\\epsilon}{2}) = p_i(t) - \\frac{\\epsilon}{2} \\frac{\\delta U}{\\delta q_i} U(q(t))$\n$ q_i(t + \\epsilon) = q_i(t) + \\frac{\\epsilon}{m}p_i(t+\\frac{\\epsilon}{2})$\n$ p_i(t + \\epsilon) = p_i(t + \\frac{\\epsilon}{2}) - \\frac{\\epsilon}{2} \\frac{\\delta U}{\\delta q_i}(q(t+\\epsilon))$\n$\\epsilon$ -- step size", "def HMC_one_step(U, current_q, Eps, L, m=1):\n \"\"\"\n One step of the Hamiltonian Monte Carlo.\n \n Parameters\n ----------\n U : callable\n A function that takes a single argument, the position.\n q : array-like\n Current position.\n Eps : float\n The step size, epsilon.\n L : int\n Number of leapfrog stpes.\n m : float\n Mass of the particle.\n \n Returns\n -------\n q_out : array\n Path from ``q`` to the proposed position.\n \"\"\"\n\n q = copy.copy(current_q)\n Nq = len(q)\n p = multivariate_normal.rvs([0. 
for i in xrange(Nq)])\n current_p = copy.copy(p)\n\n out = {}\n \n out['p'] = np.zeros((L, Nq))\n out['p'][0,:] = copy.copy(p)\n out['q'] = np.zeros((L, Nq))\n out['q'][0,:] = copy.copy(q)\n \n for i in xrange(1, L):\n p -= Eps*derivative(U, q, 0.01)/2.\n q += (Eps/m)*p\n out['q'][i, :] = copy.copy(q)\n p -= Eps*derivative(U, q, 0.01)/2.\n out['p'][i, :] = copy.copy(p)\n \n current_U = U(current_q)\n current_K = (current_p**2).sum()/2.\n proposed_U = U(q)\n proposed_K = (p**2).sum()/2.\n \n if uniform.rvs() < exp(current_U - proposed_U + current_K - proposed_K):\n out['value'] = q\n else:\n out['value'] = current_q\n \n return out\n\nplt.figure(figsize=(10,7))\nplt.contour(X, Y, Z)\nU = lambda x: -1.*np.log(dtarget(x))\nchain = HMC_one_step(U, np.array([4., 10.]), Eps=0.2, L=10, m=2)['q']\nplt.plot(chain[:, 0], chain[:, 1], 'ro')\nplt.plot(chain[:, 0], chain[:, 1], 'r-')\nplt.plot(chain[0, 0], chain[0,1], 'bo')\nplt.xlim(0, 6)\nplt.ylim(7, 13)\nplt.xlabel('x1')\nplt.ylabel('x2')\nplt.show()\n\ndef HMC(dtarget, start, Eps=0.2, L=10, m=2, N=1000, num_chains=4):\n \"\"\"\n Perform an HMC simulation.\n \n Parameters\n ----------\n dtarget : callable\n Target PDF.\n \n \"\"\"\n \n # Invert the target PDF into a concave surface.\n neg_log_dtarget = lambda x: -1.*np.log(dtarget(x))\n \n # If only one starting position is provided, use it for all chains.\n if len(start.shape) == 1:\n start = np.array([np.array(start) for i in xrange(num_chains)])\n \n chains = []\n for j in xrange(num_chains):\n chain = [start[j, :]]\n for i in xrange(N):\n proposal = HMC_one_step(neg_log_dtarget, \n copy.copy(chain[-1]), \n Eps, L, m)['value']\n chain.append(proposal)\n chains.append(np.array(chain))\n return np.array(chains) ", "Tuning parameters: step size, number of steps, and \"mass\" \nHMC does not work discrete parameters. 
STAN is all HMC.\nGelman metric still applies -- we just have a better way of proposing values.", "def Gelman(chains):\n if len(chains.shape) == 3:\n N_p = chains.shape[2]\n else:\n N_p = 1\n generate = lambda ptn: np.array([np.array([np.array([ptn(p, i, c) \n for p in xrange(N_p)\n for i in xrange(chains.shape[1])])\n for c in xrange(chains.shape[0])])])\n params = generate(lambda p, i, c: 'x{0}'.format(p))\n iters = generate(lambda p, i, c: i)\n labels = generate(lambda p, i, c: c)\n \n data = zip(chains.flat, params.flat, iters.flat, labels.flat)\n dataframe = pd.DataFrame(data, columns=('Value', 'Parameter', 'Iteration', 'Chain'))\n\n xbar = dataframe.groupby('Parameter').Value.mean()\n m = chains.shape[0]\n xbar_i = dataframe.groupby(('Parameter', 'Chain')).Value.mean()\n s2_i = dataframe.groupby(('Parameter', 'Chain')).Value.var()\n n = dataframe.groupby(('Parameter', 'Chain')).Value.count().mean()\n\n W = s2_i.mean()\n B = (n/(m-1.)) * ((xbar_i - xbar)**2).sum()\n sigma2_hat = W*(n-1.)/n + B/n\n R_hat = np.sqrt(sigma2_hat/W)\n n_eff = m*n*sigma2_hat/B # I missed what this was for.\n \n return R_hat, n_eff\n\nchains = HMC(dtarget, array([4., 10.]), Eps=0.2, L=5, N=1000)\n\nplt.figure(figsize=(10,7))\nplt.contour(X, Y, Z)\nplt.plot(chains[0][:, 0], chains[0][:, 1], alpha=0.5)\nplt.plot(chains[1][:, 0], chains[1][:, 1], alpha=0.5)\nplt.plot(chains[2][:, 0], chains[2][:, 1], alpha=0.5)\nplt.plot(chains[3][:, 0], chains[3][:, 1], alpha=0.5)\nplt.xlim(0, 6)\nplt.ylim(7, 13)\nplt.show()\n\nplt.subplot(211)\nfor i in xrange(chains.shape[0]):\n plt.plot(chains[i,:,0])\nplt.ylabel('x1')\n\nplt.subplot(212)\nfor i in xrange(chains.shape[0]):\n plt.plot(chains[i,:,1])\nplt.ylabel('x2')\n\nGelman(chains)", "Banana-shaped target distribution", "dtarget = lambda x: exp( (-x[0]**2)/200. - 0.5*(x[1]+(0.05*x[0]**2) - 100.*0.05)**2)\n\nx1 = np.linspace(-20, 20, 101)\nx2 = np.linspace(-15, 10, 101)\nX, Y = np.meshgrid(x1, x2)\nZ = np.array(map(dtarget, zip(X.flat, Y.flat))).reshape(101, 101)\n\nplt.figure(figsize=(10,7))\nplt.contour(X, Y, Z)\nplt.show()\n\nstart = np.array([[2., 5.]\n for i in xrange(4)])\nchains = HMC(dtarget, start, Eps=0.5, L=200, m=0.5, N=5000)\n\nplt.figure(figsize=(10,7))\nplt.contour(X, Y, Z)\n\nplt.plot(chains[0][:, 0], chains[0][:, 1], alpha=0.8)\nplt.plot(chains[1][:, 0], chains[1][:, 1], alpha=0.8)\nplt.plot(chains[2][:, 0], chains[2][:, 1], alpha=0.8)\nplt.plot(chains[3][:, 0], chains[3][:, 1], alpha=0.8)\nplt.show()\n\nplt.subplot(211)\nplt.title(Gelman(chains)[0])\nfor i in xrange(chains.shape[0]):\n plt.plot(chains[i,:,0])\nplt.ylabel('x1')\n\nplt.subplot(212)\nfor i in xrange(chains.shape[0]):\n plt.plot(chains[i,:,1])\nplt.ylabel('x2')\n\nplt.tight_layout()\nplt.show()", "NUTS Sampler\nToy implementation of No-U-Turn Sampler, described by Hoffman and Gelman (2011). 
Algorithm 3, page 14.", "def Leapfrog(U, theta, r, Eps, m=1.):\n \"\"\"\n Slightly different update rules, since the negative log of the \n target PDF is not used.\n \"\"\"\n gradient = lambda U, theta: derivative(U, theta, 0.01)\n r += (Eps/2.)*gradient(U, theta)\n theta += (Eps/m)*r\n r += (Eps/2.)*gradient(U, theta)\n return copy.copy(theta), copy.copy(r)\n\ndef BuildTree(U, theta, r, u, v, j, Eps, m=1., delta_max=1000):\n \"\"\"\n Recursive tree-building.\n \n TODO: Make this less ugly.\n \"\"\"\n if j == 0:\n # Take one leapfrog step in the direction v.\n theta_p, r_p = Leapfrog(U, theta, r, v*Eps, m=m)\n n_p = float(u <= exp(U(theta_p) - np.dot(0.5*r_p, r_p)))\n s_p = float(u < exp(delta_max + U(theta_p) - np.dot(0.5*r_p, r_p)))\n return theta_p, r_p, theta_p, r_p, theta_p, n_p, s_p\n else:\n # Recursion -- implicitly build the left and right subtrees.\n rargs = (u, v, j-1., Eps)\n rkwargs = {'m':m}\n theta_n, r_n, theta_f, r_f, theta_p, n_p, s_p = BuildTree(U, theta, r, *rargs, **rkwargs)\n if s_p == 1:\n if v == -1:\n theta_n, r_n, null, null, theta_dp, n_dp, s_dp = BuildTree(U, theta_n, r_n, *rargs, **rkwargs)\n else:\n null, null, theta_f, r_f, theta_dp, n_dp, s_dp = BuildTree(U, theta_f, r_f, *rargs, **rkwargs)\n try:\n if uniform.rvs() <= (n_dp/(n_p + n_dp)):\n theta_p = copy.copy(theta_dp)\n except ZeroDivisionError:\n pass\n s_p = s_p*s_dp*int(np.dot((theta_f - theta_n), r_n) >= 0)*int( np.dot((theta_f - theta_n), r_f) >= 0)\n n_p += n_dp\n return theta_n, r_n, theta_f, r_f, theta_p, n_p, s_p\n\ndef NUTS_one_step(U, theta_last, Eps, m=1.):\n \"\"\"\n TODO: clean up all the copies -- stop being so paranoid.\n \"\"\"\n r_not = norm.rvs(0, 1., size=len(theta_last))\n u = uniform.rvs(0, exp(U(theta_last) - np.dot(0.5*r_not, r_not)))\n \n # Initialize.\n theta_m = copy.copy(theta_last)\n theta_n, theta_f = copy.copy(theta_last), copy.copy(theta_last)\n r_n, r_f = copy.copy(r_not), copy.copy(r_not)\n j = 0.\n s = 1.\n n = 1.\n\n while s == 1.:\n v_j = np.random.choice(np.array([-1., 1.])) # Choose a direction.\n if v_j == -1:\n theta_n, r_n, null, null, theta_p, n_p, s_p = BuildTree(U, theta_n, r_n, u, v_j, j, Eps, m=m)\n else:\n null, null, theta_f, r_f, theta_p, n_p, s_p = BuildTree(U, theta_f, r_f, u, v_j, j, Eps, m=m)\n\n if s_p == 1:\n try:\n if uniform.rvs() <= min(1., (n_p/n)):\n theta_m = copy.copy(theta_p)\n except ZeroDivisionError:\n pass\n s = s_p*int(np.dot((theta_f - theta_n), r_n) >= 0)*int( np.dot((theta_f - theta_n), r_f) >= 0)\n j += 1.\n\n return theta_m\n\nNUTS_one_step(lambda x: np.log(dtarget(x)), np.array([3.2, 9.1]), 0.02)\n\ndef NUTS(dtarget, theta_not, Eps, num_iters=1000, delta_max=1000, m=1.):\n U = lambda x: np.log(dtarget(x))\n \n theta = [theta_not]\n for i in xrange(num_iters):\n theta_i = NUTS_one_step(U, theta[-1], Eps, m=m)\n theta.append(theta_i)\n return theta", "Testing on the banana", "start = np.array([[uniform.rvs(loc=-10., scale=15.), \n uniform.rvs(loc=0., scale=10)]\n for i in xrange(4)])\n\nchains = np.array([ np.array(NUTS(dtarget, start[i, :], Eps=0.55, m=1.5, num_iters=10000)) for i in xrange(start.shape[0])])\n\nplt.figure(figsize=(10,7))\nplt.contour(X, Y, Z)\n\nfor i in xrange(chains.shape[0]):\n plt.scatter(chains[i, :, 0], chains[i, :, 1], alpha=0.5, s=0.02)\nplt.show()\n\nplt.subplot(211)\nplt.title(Gelman(chains)[0])\nfor i in xrange(chains.shape[0]):\n plt.plot(chains[i, :, 0])\nplt.ylabel('x1')\n\nplt.subplot(212)\nfor i in xrange(chains.shape[0]):\n plt.plot(chains[i, :, 
1])\nplt.ylabel('x2')\n\nplt.tight_layout()\nplt.show()\n\nplt.hist(chains[0,:,0])" ]
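Both the HMC chains and the NUTS code above lean on the same leapfrog scheme: a half step for the momentum, a full step for the position, and a closing half momentum step, driven by the gradient of the log-density (the notebook's `Leapfrog` approximates that gradient numerically with `derivative`). The snippet below is a minimal, self-contained sketch of that scheme on a standard 2-D Gaussian with an analytic gradient, just to check that it nearly conserves the Hamiltonian; `leapfrog_steps`, `grad_logp` and the Gaussian target are illustrative names, not objects from the notebook.

```python
import numpy as np

def leapfrog_steps(grad_logp, theta, r, eps, n_steps, m=1.):
    """Half-step / full-step / half-step integrator, as in Leapfrog above, iterated n_steps times."""
    theta, r = np.array(theta, dtype=float), np.array(r, dtype=float)
    r = r + 0.5 * eps * grad_logp(theta)        # half step for momentum
    for i in range(n_steps):
        theta = theta + (eps / m) * r           # full step for position
        if i < n_steps - 1:
            r = r + eps * grad_logp(theta)      # full momentum step between position steps
    r = r + 0.5 * eps * grad_logp(theta)        # closing half step for momentum
    return theta, r

# Standard 2-D Gaussian: log p(x) = -0.5 * x.x, so grad log p(x) = -x and
# H(x, r) = -log p(x) + 0.5 * r.r should stay roughly constant along a trajectory.
grad_logp = lambda x: -x
H = lambda x, r: 0.5 * np.dot(x, x) + 0.5 * np.dot(r, r)

np.random.seed(0)
theta0, r0 = np.random.randn(2), np.random.randn(2)
theta1, r1 = leapfrog_steps(grad_logp, theta0, r0, eps=0.1, n_steps=50)
print(H(theta1, r1) - H(theta0, r0))  # energy drift; should be close to zero
```

That near-conservation, together with the reversibility and volume preservation of the leapfrog map, is what keeps the Metropolis acceptance step in HMC and the slice condition in NUTS simple: the proposal only has to be corrected for the small $O(\epsilon^2)$ energy error, not for an asymmetric proposal density.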
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/mohc/cmip6/models/hadgem3-gc31-hh/land.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: MOHC\nSource ID: HADGEM3-GC31-HH\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:14\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-hh', 'land')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Conservation Properties\n3. Key Properties --&gt; Timestepping Framework\n4. Key Properties --&gt; Software Properties\n5. Grid\n6. Grid --&gt; Horizontal\n7. Grid --&gt; Vertical\n8. Soil\n9. Soil --&gt; Soil Map\n10. Soil --&gt; Snow Free Albedo\n11. Soil --&gt; Hydrology\n12. Soil --&gt; Hydrology --&gt; Freezing\n13. Soil --&gt; Hydrology --&gt; Drainage\n14. Soil --&gt; Heat Treatment\n15. Snow\n16. Snow --&gt; Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --&gt; Vegetation\n21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\n22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\n23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\n24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\n25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\n26. Carbon Cycle --&gt; Litter\n27. Carbon Cycle --&gt; Soil\n28. Carbon Cycle --&gt; Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --&gt; Oceanic Discharge\n32. Lakes\n33. Lakes --&gt; Method\n34. Lakes --&gt; Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.4. 
Land Atmosphere Flux Exchanges\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nFluxes exchanged with the atmopshere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Atmospheric Coupling Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Land Cover\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTypes of land cover defined in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.7. Land Cover Change\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Tiling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Water\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Timestepping Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. 
Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Total Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe total depth of the soil (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of soil in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Heat Water Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the coupling between heat and water in the soil", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Number Of Soil layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the soil scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Soil --&gt; Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of soil map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil structure map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Texture\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil texture map", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Organic Matter\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil organic matter map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Albedo\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil albedo map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.6. Water Table\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil water table map, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.7. Continuously Varying Soil Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the soil properties vary continuously with depth?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.8. Soil Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil depth map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Soil --&gt; Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow free albedo prognostic?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "10.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Direct Diffuse\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.4. 
Number Of Wavelength Bands\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11. Soil --&gt; Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the soil hydrological model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river soil hydrology in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Number Of Ground Water Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers that may contain water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.6. Lateral Connectivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe the lateral connectivity between tiles", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.7. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Soil --&gt; Hydrology --&gt; Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow many soil layers may contain ground ice", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.2. Ice Storage Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of ice storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.3. Permafrost\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Soil --&gt; Hydrology --&gt; Drainage\nTODO\n13.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDifferent types of runoff represented by the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Soil --&gt; Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how heat treatment properties are defined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil heat scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.5. 
Heat Storage\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the method of heat storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.6. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe processes included in the treatment of soil heat", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of snow in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Number Of Snow Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow density", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Water Equivalent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the snow water equivalent", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.6. Heat Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the heat content of snow", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.7. Temperature\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow temperature", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.8. Liquid Water Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow liquid water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.9. Snow Cover Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.10. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSnow related processes in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.11. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Snow --&gt; Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\n*If prognostic, *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vegetation in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of vegetation scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Dynamic Vegetation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there dynamic evolution of vegetation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.4. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vegetation tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.5. Vegetation Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nVegetation classification used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.6. Vegetation Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of vegetation types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.7. Biome Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of biome types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.8. Vegetation Time Variation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.9. 
Vegetation Map\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.10. Interception\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs vegetation interception of rainwater represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.11. Phenology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.12. Phenology Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.13. Leaf Area Index\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.14. Leaf Area Index Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.15. Biomass\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Treatment of vegetation biomass *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.16. Biomass Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.17. Biogeography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.18. Biogeography Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.19. Stomatal Resistance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.20. Stomatal Resistance Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.21. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the vegetation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of energy balance in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the energy balance tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. Number Of Surface Temperatures\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.4. Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of carbon cycle in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of carbon cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Anthropogenic Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the carbon scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Carbon Cycle --&gt; Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "20.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.3. 
Forest Stand Dynamics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for maintainence respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Growth Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for growth respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\nTODO\n23.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the allocation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.2. Allocation Bins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify distinct carbon bins used in allocation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Allocation Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the fractions of allocation are calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\nTODO\n24.1. 
Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the phenology scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\nTODO\n25.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the mortality scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Carbon Cycle --&gt; Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Carbon Cycle --&gt; Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Carbon Cycle --&gt; Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs permafrost included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.2. Emitted Greenhouse Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the GHGs emitted", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.4. Impact On Soil Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the impact of permafrost on soil properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of nitrogen cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "29.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of river routing in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the river routing, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river routing scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Grid Inherited From Land Surface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the grid inherited from land surface?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.5. Grid Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.6. Number Of Reservoirs\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of reservoirs", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.7. Water Re Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTODO", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.8. Coupled To Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.9. Coupled To Land\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the coupling between land and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.11. Basin Flow Direction Map\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of basin flow direction map is being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.12. Flooding\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the representation of flooding, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.13. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the river routing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. River Routing --&gt; Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify how rivers are discharged to the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Quantities Transported\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lakes in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Coupling With Rivers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre lakes coupled to the river routing model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of lake scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "32.4. Quantities Exchanged With Rivers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Vertical Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vertical grid of lakes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the lake scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33. Lakes --&gt; Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs lake ice included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.2. Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of lake albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.3. Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.4. Dynamic Lake Extent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a dynamic lake extent scheme included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.5. Endorheic Basins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasins not flowing to ocean included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "34. Lakes --&gt; Wetlands\nTODO\n34.1. 
Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of wetlands, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
wbbeyourself/cn-deep-learning
tutorials/intro-to-tflearn/TFLearn_Digit_Recognition.ipynb
mit
[ "Handwritten Number Recognition with TFLearn and MNIST\nIn this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9. \nThis kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.\nWe'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.", "# Import Numpy, TensorFlow, TFLearn, and MNIST data\nimport numpy as np\nimport tensorflow as tf\nimport tflearn\nimport tflearn.datasets.mnist as mnist", "Retrieving training and test data\nThe MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.\nEach MNIST data point has:\n1. an image of a handwritten digit and \n2. a corresponding label (a number 0-9 that identifies the image)\nWe'll call the images, which will be the input to our neural network, X and their corresponding labels Y.\nWe're going to want our labels as one-hot vectors, which are vectors that holds mostly 0's and one 1. It's easiest to see this in a example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].\nFlattened data\nFor this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values. \nFlattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.", "# Retrieve the training and test data\ntrainX, trainY, testX, testY = mnist.load_data(one_hot=True)\n\ntrainX[0]", "Visualize the training data\nProvided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with it's corresponding label in the title.", "# Visualizing the data\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Function for displaying a training image by it's index in the MNIST set\ndef show_digit(index):\n label = trainY[index].argmax(axis=0)\n # Reshape 784 array into 28x28 image\n image = trainX[index].reshape([28,28])\n plt.title('Training data, index: %d, Label: %d' % (index, label))\n plt.imshow(image, cmap='gray_r')\n plt.show()\n \n# Display the first (index 0) training image\nshow_digit(3)", "Building the network\nTFLearn lets you build the network by defining the layers in that network. \nFor this example, you'll define:\n\nThe input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. 
\nHidden layers, which recognize patterns in data and connect the input to the output layer, and\nThe output layer, which defines how the network learns and outputs a label for a given image.\n\nLet's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,\nnet = tflearn.input_data([None, 100])\nwould create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.\nAdding layers\nTo add new hidden layers, you use \nnet = tflearn.fully_connected(net, n_units, activation='ReLU')\nThis adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call, it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling tflearn.fully_connected(net, n_units). \nThen, to set how you train the network, use:\nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nAgain, this is passing in the network you've been building. The keywords: \n\noptimizer sets the training method, here stochastic gradient descent\nlearning_rate is the learning rate\nloss determines how the network error is calculated. In this example, with categorical cross-entropy.\n\nFinally, you put all this together to create the model with tflearn.DNN(net).\nExercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.\nHint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.", "# Define the neural network\ndef build_model():\n # This resets all parameters and variables, leave this here\n tf.reset_default_graph()\n \n #### Your code ####\n # Include the input layer, hidden layer(s), and set how you want to train the model\n net = tflearn.input_data([None, 784])\n net = tflearn.fully_connected(net, n_units=200, activation='ReLU')\n net = tflearn.fully_connected(net, n_units=30, activation='ReLU')\n net = tflearn.fully_connected(net, n_units=10, activation='softmax')\n net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\n \n \n # This model assumes that your network is named \"net\" \n model = tflearn.DNN(net)\n return model\n\n# Build the model\nmodel = build_model()", "Training the network\nNow that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. \nToo few epochs don't effectively train your network, and too many take a long time to execute. 
Choose wisely!", "# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)", "Testing\nAfter you're satisified with the training output and accuracy, you can then run the network on the test data set to measure it's performance! Remember, only do this after you've done the training and are satisfied with the results.\nA good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!", "# Compare the labels that our model predicts with the actual labels\n\n# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.\npredictions = np.array(model.predict(testX)).argmax(axis=1)\n\n# Calculate the accuracy, which is the percentage of times the predicated labels matched the actual labels\nactual = testY.argmax(axis=1)\ntest_accuracy = np.mean(predictions == actual, axis=0)\n\n# Print out the result\nprint(\"Test accuracy: \", test_accuracy)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jwjohnson314/data-801
notebooks/introduction_to_python.ipynb
mit
[ "Python is a general-purpose programming language that can be used for many scientific, statistical, and analytical tasks. Python has an elegant structure, clean and intuitive syntax, and is (relative to other programming languages) easy to learn.", "# This is a code cell. In this cell, any line prefaced with a # is not executed\n# the canonical first program\n\nprint('Hello World!')", "'Hello World!' is a data structure called a string.", "type('Hello World')\n\n4+5\n\ntype(9.)\n\n4 * 5\n4**5 # exponentiation\n\n# naming things and storing them in memory for later use\nx = 2**10\n\nprint(x)\n\nwhos\n\n# with explanation\nprint('The value of x is {:,}.'.format(x)) # you can change the formatting of x inside the brackets\n\ntype(3.14159)\n\nprint('The value of pi is approximately {0:.2f}.'.format(3.14159)) # 0 is the argument, .2 means two past .", "Lists\nLists are a commonly used python data structure.", "x = [1, 2, 3]\n\ntype(x)\n\nwhos\n\nx.append(4)\n\nx\n\n# throws an error\nx.prepend(0)\n\ny = [0]\nx+y\n\ny+x\n\nwhos\n\n# didn't save it - let's do it again\ny = y+x\n\ny\n\n# Exercise: there is a more efficient way - find the reference in the docs for the insert command.\n# insert the value 2.5 into the list into the appropriate spot\n# your code here:\n\ny.insert(3, 2.5)\nprint(y)\n\n# a bigger list - list is a function too\nz = list(range(100)) # range is a special type in Python\nprint(z)\n\n# getting help\n?range\n\n# try shift+tab when calling unfamiliar function for quick access to docstring\nrange()\n\n# Exercise: get the docstring for the 'open' function\n# your code here:\nopen()\n\n# Exercise: get the docstring for the 'list' function\n# your code here:\nlist()\n\n# often we need to get extract elements from a list. Python uses zero-based indexing\nprint(z[0])\nprint(z[5])\n\n# ranges/slices\nprint(z[4:5]) # 4 is included\nprint(z[4:]) # 4 is included\nprint(z[:4]) # 4 is not included\n\nz[2:4] + z[7:9]\n\n# Exercise: write a list consisting of the entries in z whose first digit is a prime number\n# your code here:\n\n\n\n# from the end of the list\nz[-10:]\n\n# by step size other than 1\nz[10:20:2] # start:stop:step\n\n# when you're going all the way to the end\nz[10::2] # stop omitted\n\n# exercise: can you write a single operation to return z in reversed order?\n# your code here:\nz[::-1]\n\n# removing values\nz.remove(2)\n\nprint(z[:10])\n\n# strings are a lot like lists\nstring = 'This is A poOrly cAPitAlized string.'\nstring[:4]\n\ntype(string[:4])\n\nstring[::2]\n\nstring[-1]\n\nprint(string.lower())\nprint(string.upper())\nprint(string.split('A'))\n\ntype(string.split('A'))\n\naddress = 'http://www.wikiart.org/en/jan-van-eyck/the-birth-of-john-the-baptist-1422'\nartist, painting = address.split('/')[4:]\nprint(artist)\nprint(painting)\n\n# digression - unicode\nord('.') # encoding\n\nchr(97)\n\n# homework reading: http://www.joelonsoftware.com/articles/Unicode.html\n\n# string arithmetic\nx = 'Hello '\ny = 'World'\nprint(x+y)\n\nx[:5] + '_' + y\n\nx.replace(' ', '_')\n\nx.append('_')\n\nx\n\nx.replace(' ', '_') + y\n\n# strings are not exactly lists!\nx*5 + y.append(' ')*3", "Exercise: take a few minutes to read the docs for text strings here: https://docs.python.org/3/library/stdtypes.html#textseq\nImmutable means 'can't be changed'\nSo if you want to change a string, you need to make a copy of some sort.", "x * 5 + (y + str(' ')) * 3", "Tuples\nExercise: Find the doc page for the tuples datatype. 
What is the difference between a tuple and a list?", "# Exercise: write a tuple consisting of the first five letters of the alphabet (lower-case) in reversed order\n# your code here\n\ntup = ('z', 'y', 'x', 'w', 'v')\n\ntype(tup)\n\ntup[3]", "Dicts\nThe dictionary data structure consists of key-value pairs. This shows up a lot; for instance, when reading JSON files (http://www.json.org/)", "x = ['Bob', 'Amy', 'Fred']\ny = [32, 27, 19]\n\nz = dict(zip(x, y))\n\ntype(z)\n\nz\n\nz[1]\n\nz['Bob']\n\nz.keys()\n\nz.values()\n\ndetailed ={'amy': {'age': 32, 'school': 'UNH', 'GPA':4.0}, 'bob': {'age': 27, 'school': 'UNC', 'GPA':3.4}}\n\ndetailed['amy']['school']\n\n# less trivial example\n# library imports; ignore for now\n\nfrom urllib.request import urlopen\nimport json\n\nurl = 'http://www.wikiart.org/en/App/Painting/' + \\\n 'PaintingsByArtist?artistUrl=' + \\\n 'pablo-picasso' + '&json=2'\n\nraw = urlopen(url).read().decode('utf8')\n\nd = json.loads(raw)\n\ntype(d)\n\nd\n\ntype(d[0])\n\nd[0].keys()", "Control structures: the 'for' loop", "# indents matter in Python\n\nfor i in range(20):\n print('%s: %s' % (d[i]['title'], d[i]['completitionYear']))\n\n# exercises: print the sizes and titles of the last ten paintings in this list. \n# The statement should print as 'title: width pixels x height pixels'\n# your code here:", "The 'if-then' statement", "data = [1.2, 2.4, 23.3, 4.5]\nnew_data = []\nfor i in range(len(data)):\n if round(data[i]) % 2 == 0: # modular arithmetic, remainder of 0\n new_data.append(round(data[i]))\n else:\n new_data.append(0)\n\nprint(new_data)\n ", "Digression - list comprehensions\nRather than a for loop, in a situation like that above, Python has a method called a list comprehension for creating lists. Sometimes this is more efficient. It's often nicer syntactically, as long as the number of conditions is not too large (<= 2 is a good guideline).", "print(data)\nnew_new_data = [round(i) if round(i) % 2 == 0 else 0 for i in data]\nprint(new_new_data)\n\ndata = list(range(20))\nfor i in data:\n if i % 2 == 0:\n print(i)\n elif i >= 10:\n print('wow, that\\'s a big odd number - still no fun')\n else:\n print('odd num no fun')\n", "The 'while' loop", "# beware loops that don't terminate\ncounter = 0\ntmp = 2\nwhile counter < 10:\n tmp = tmp**2\n counter += 1\nprint('{:,}'.format(tmp))\nprint('tmp is %d digits long, that\\'s huge!' % len(str(tmp)))\n\n# the 'pass' command\nfor i in range(10):\n if i % 2 == 0:\n print(i)\n else:\n pass\n\n# the continue command\nfor letter in 'Python':\n if letter == 'h':\n continue\n print('Current Letter :', letter)\n \n\n# the pass command\nfor letter in 'Python':\n if letter == 'h':\n pass\n print('Current Letter :', letter)\n\n# the break command\nfor letter in 'Python':\n if letter == 'h':\n break\n print('Current Letter :', letter)", "Functions\nFunctions take in inputs and produce outputs.", "def square(x):\n '''input: a numerical value x\n output: the square of x\n '''\n return x**2\n\nsquare(3.14)\n\n# Exercise: write a function called 'reverse' to take in a string and reverse it\n# your code here:\n\n# test \nreverse('Hi, my name is Joan Jett')\n\ndef raise_to_power(x, n=2): # 2 is the default for n\n return x**n\n\nraise_to_power(3)\n\nraise_to_power(3,4)\n\ndef write_to_file(filepath, string):\n '''make sure the file doesn\\'t exist; this will overwrite'''\n with open(filepath, 'w+') as f:\n f.writelines(string)\n\nwrite_to_file('test.txt', 'fred was here')\n\n! 
cat test.txt\n\nwith open('test.txt') as f:\n content = f.read()\n \nprint(content)\n\nwrite_to_file('test.txt', 'goodbye for now\\n') # \\n is the newline character\n\n! cat test.txt\n\n# Exercise: what are the modes for editing a file?", "Don't repeat yourself!\nUse functions to avoid rewriting the same code over and over.\nHomework\n-Read the blog about unicode: http://www.joelonsoftware.com/articles/Unicode.html\n-Read the docs! (not all of them at once, of course) https://docs.python.org/3/index.html\n-Download the homework assignment notebook, complete it, and email the results to me BEFORE next Wednedsay, 9:00AM. Anything submitted after that time will be penalized 1 point per day (that's one full letter grade per day)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
goodwordalchemy/thinkstats_notes_and_exercises
code/chap03ex.ipynb
gpl-3.0
[ "Exercise from Think Stats, 2nd Edition (thinkstats2.com)<br>\nAllen Downey\nRead the female respondent file.", "%matplotlib inline\nimport thinkstats2\nimport thinkplot\nimport chap01soln\nresp = chap01soln.ReadFemResp()\nprint len(resp)", "Make a PMF of <tt>numkdhh</tt>, the number of children under 18 in the respondent's household.", "numkdhh = thinkstats2.Pmf(resp.numkdhh)\nnumkdhh", "Display the PMF.", "thinkplot.Hist(numkdhh, label='actual')\nthinkplot.Config(title=\"PMF of num children under 18\",\n xlabel=\"number of children under 18\",\n ylabel=\"probability\")", "Define <tt>BiasPmf</tt>.", "def BiasPmf(pmf, label=''):\n \"\"\"Returns the Pmf with oversampling proportional to value.\n\n If pmf is the distribution of true values, the result is the\n distribution that would be seen if values are oversampled in\n proportion to their values; for example, if you ask students\n how big their classes are, large classes are oversampled in\n proportion to their size.\n\n Args:\n pmf: Pmf object.\n label: string label for the new Pmf.\n\n Returns:\n Pmf object\n \"\"\"\n new_pmf = pmf.Copy(label=label)\n\n for x, p in pmf.Items():\n new_pmf.Mult(x, x)\n \n new_pmf.Normalize()\n return new_pmf", "Make a the biased Pmf of children in the household, as observed if you surveyed the children instead of the respondents.", "biased_pmf = BiasPmf(numkdhh, label='biased')\nthinkplot.Hist(biased_pmf)\nthinkplot.Config(title=\"PMF of num children under 18\",\n xlabel=\"number of children under 18\",\n ylabel=\"probability\")", "Display the actual Pmf and the biased Pmf on the same axes.", "width = 0.45\nthinkplot.PrePlot(2)\nthinkplot.Hist(biased_pmf, align=\"right\", label=\"biased\", width=width)\nthinkplot.Hist(numkdhh, align=\"left\", label=\"actual\", width=width)\nthinkplot.Config(title=\"PMFs of children under 18 in a household\",\n xlabel='number of children',\n ylabel='probability')", "Compute the means of the two Pmfs.", "print \"actual mean:\", numkdhh.Mean()\nprint \"biased mean:\", biased_pmf.Mean()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rethore/FUSED-Wake
examples/Script.ipynb
agpl-3.0
[ "%matplotlib inline\n%load_ext autoreload\n%autoreload\nimport numpy as np\nimport scipy as sp\nimport pandas as pd\nfrom scipy import interpolate\nimport matplotlib.pyplot as plt\n\nimport os,sys,inspect\ncurrentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))\nparentdir = os.path.dirname(os.path.dirname(currentdir))\n#sys.path.insert(0,parentdir) \n#sys.path.append(parentdir+'/gclarsen/src/gclarsen')\n\nimport fusedwake.WindTurbine as wt\nimport fusedwake.WindFarm as wf\n#import fusedwake.fused as fused_gcl\nfrom fusedwake.gcl import *\n\n#from gclarsen.fusedwasp import PlantFromWWH\n#wwh = PlantFromWWH(filename = parentdir+'/wind-farm-wake-model/gclarsen/src/gclarsen/test/wind_farms/horns_rev/hornsrev1_turbine_nodescription.wwh')\n#wwh = PlantFromWWH(filename = 'hornsrev1_turbine_nodescription.wwh')", "Verification of the FUSED-Wind wrapper\ncommon inputs", "v80 = wt.WindTurbine('Vestas v80 2MW offshore','V80_2MW_offshore.dat',70,40)\nHR1 = wf.WindFarm('Horns Rev 1','HR_coordinates.dat',v80)\nWD = range(0,360,1)", "FUSED-Wind implementation", "\"\"\"\n##Fused inputs\ninputs = dict(\n wind_speed=8.0,\n roughness=0.0001,\n TI=0.05,\n NG=4,\n sup='lin',\n wt_layout = fused_gcl.generate_GenericWindFarmTurbineLayout(HR1))\n\nfgcl = fused_gcl.FGCLarsen()\n# Setting the inputs\nfor k,v in inputs.iteritems():\n setattr(fgcl, k, v)\n\nfP_WF = np.zeros([len(WD)])\nfor iwd, wd in enumerate(WD):\n fgcl.wind_direction = wd\n fgcl.run()\n fP_WF[iwd] = fgcl.power\n\"\"\"", "pure python implementation", "P_WF = np.zeros([len(WD)])\nP_WF_v0 = np.zeros([len(WD)])\nfor iwd, wd in enumerate(WD):\n P_WT,U_WT,CT_WT = GCLarsen(WS=8.0,z0=0.0001,TI=0.05,WD=wd,WF=HR1,NG=4,sup='lin')\n P_WF[iwd] = P_WT.sum()\n P_WT,U_WT,CT_WT = GCLarsen_v0(WS=8.0,z0=0.0001,TI=0.05,WD=wd,WF=HR1,NG=4,sup='lin')\n P_WF_v0[iwd] = P_WT.sum()\n\n\n\n\nfig, ax = plt.subplots()\nax.plot(WD,P_WF/(HR1.WT[0].get_P(8.0)*HR1.nWT),'-o', label='python')\nax.plot(WD,P_WF_v0/(HR1.WT[0].get_P(8.0)*HR1.nWT),'-d', label='python v0')\nax.set_xlabel('wd [deg]')\nax.set_ylabel('Wind farm efficiency [-]')\nax.set_title(HR1.name)\nax.legend(loc=3)\nplt.savefig(HR1.name+'_Power_wd_360.pdf')", "Asserting new implementation", "WD = 261.05\nP_WT,U_WT,CT_WT = GCLarsen_v0(WS=10.,z0=0.0001,TI=0.1,WD=WD,WF=HR1, NG=5, sup='quad')\nP_WT_2,U_WT_2,CT_WT_2 = GCLarsen(WS=10.,z0=0.0001,TI=0.1,WD=WD,WF=HR1, NG=5, sup='quad')\n\nnp.testing.assert_array_almost_equal(U_WT,U_WT_2)\nnp.testing.assert_array_almost_equal(P_WT,P_WT_2)", "There was a bug corrected in the new implementation of the GCL model\nTime comparison\nNew implementation is wrapped inside fusedwind", "WD = range(0,360,1)\n\n%%timeit\nfP_WF = np.zeros([len(WD)])\nfor iwd, wd in enumerate(WD):\n fgcl.wind_direction = wd\n fgcl.run()\n fP_WF[iwd] = fgcl.power\n\n%%timeit\n#%%prun -s cumulative #profiling\nP_WF = np.zeros([len(WD)])\nfor iwd, wd in enumerate(WD):\n P_WT,U_WT,CT_WT = GCLarsen(WS=8.0,z0=0.0001,TI=0.05,WD=wd,WF=HR1,NG=4,sup='lin')\n P_WF[iwd] = P_WT.sum()", "Pandas", "df=pd.DataFrame(data=P_WF, index=WD, columns=['P_WF'])\n\ndf.plot()", "WD uncertainty\nNormally distributed wind direction uncertainty (reference wind direction, not for individual turbines).", "P_WF_GAv8 = np.zeros([len(WD)])\nP_WF_GAv16 = np.zeros([len(WD)])\nfor iwd, wd in enumerate(WD):\n P_WT_GAv,U_WT,CT_WT = GCL_P_GaussQ_Norm_U_WD(meanWD=wd,stdWD=2.5,NG_P=8, WS=8.0,z0=0.0001,TI=0.05,WF=HR1,NG=4,sup='lin')\n P_WF_GAv8[iwd] = P_WT_GAv.sum()\n P_WT_GAv,U_WT,CT_WT = 
GCL_P_GaussQ_Norm_U_WD(meanWD=wd,stdWD=2.5,NG_P=16, WS=8.0,z0=0.0001,TI=0.05,WF=HR1,NG=4,sup='lin')\n P_WF_GAv16[iwd] = P_WT_GAv.sum()\n\n\nfig, ax = plt.subplots()\nfig.set_size_inches([12,6])\nax.plot(WD,P_WF/(HR1.WT.get_P(8.0)*HR1.nWT),'-o', label='Pure python')\nax.plot(WD,fP_WF/(HR1.WT.get_P(8.0)*HR1.nWT),'-d', label='FUSED wrapper')\nax.plot(WD,P_WF_GAv16/(HR1.WT.get_P(8.0)*HR1.nWT),'-', label='Gauss Avg. FUSED wrapper, NG_P = 16')\nax.plot(WD,P_WF_GAv8/(HR1.WT.get_P(8.0)*HR1.nWT),'-', label='Gauss Avg. FUSED wrapper, NG_P = 8')\nax.set_xlabel('wd [deg]')\nax.set_ylabel('Wind farm efficiency [-]')\nax.set_title(HR1.name)\nax.legend(loc=3)\nplt.savefig(HR1.name+'_Power_wd_360.pdf')", "Uniformly distributed wind direction uncertainty (bin/sectors definition)", "P_WF_GA_u8 = np.zeros([len(WD)])\nfor iwd, wd in enumerate(WD):\n P_WT_GAv,U_WT,CT_WT = GCL_P_GaussQ_Uni_U_WD(meanWD=wd,U_WD=2.5,NG_P=8, WS=8.0,z0=0.0001,TI=0.05,WF=HR1,NG=4,sup='lin')\n P_WF_GA_u8[iwd] = P_WT_GAv.sum()\n\n\nfig, ax = plt.subplots()\nfig.set_size_inches([12,6])\nax.plot(WD,fP_WF/(HR1.WT.get_P(8.0)*HR1.nWT),'-d', label='FUSED wrapper')\nax.plot(WD,P_WF_GAv8/(HR1.WT.get_P(8.0)*HR1.nWT),'-', label='Gauss Quad. Normal, NG_P = 8')\nax.plot(WD,P_WF_GA_u8/(HR1.WT.get_P(8.0)*HR1.nWT),'-', label='Gauss Quad. Uniform, NG_P = 8')\nax.set_xlabel('wd [deg]')\nax.set_ylabel('Wind farm efficiency [-]')\nax.set_title(HR1.name)\nax.legend(loc=3)\nplt.savefig(HR1.name+'_Power_wd_360.pdf')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AllenDowney/ThinkBayes2
notebooks/chap07.ipynb
mit
[ "Minimum, Maximum, and Mixture\nThink Bayes, Second Edition\nCopyright 2020 Allen B. Downey\nLicense: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)", "# If we're running on Colab, install empiricaldist\n# https://pypi.org/project/empiricaldist/\n\nimport sys\nIN_COLAB = 'google.colab' in sys.modules\n\nif IN_COLAB:\n !pip install empiricaldist\n\n# Get utils.py\n\nfrom os.path import basename, exists\n\ndef download(url):\n filename = basename(url)\n if not exists(filename):\n from urllib.request import urlretrieve\n local, _ = urlretrieve(url, filename)\n print('Downloaded ' + local)\n \ndownload('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')\n\nfrom utils import set_pyplot_params\nset_pyplot_params()", "In the previous chapter we computed distributions of sums.\nIn this chapter, we'll compute distributions of minimums and maximums, and use them to solve both forward and inverse problems.\nThen we'll look at distributions that are mixtures of other distributions, which will turn out to be particularly useful for making predictions.\nBut we'll start with a powerful tool for working with distributions, the cumulative distribution function.\nCumulative Distribution Functions\nSo far we have been using probability mass functions to represent distributions.\nA useful alternative is the cumulative distribution function, or CDF.\nAs an example, I'll use the posterior distribution from the Euro problem, which we computed in <<_BayesianEstimation>>.\nHere's the uniform prior we started with.", "import numpy as np\nfrom empiricaldist import Pmf\n\nhypos = np.linspace(0, 1, 101)\npmf = Pmf(1, hypos)\ndata = 140, 250", "And here's the update.", "from scipy.stats import binom\n\ndef update_binomial(pmf, data):\n \"\"\"Update pmf using the binomial distribution.\"\"\"\n k, n = data\n xs = pmf.qs\n likelihood = binom.pmf(k, n, xs)\n pmf *= likelihood\n pmf.normalize()\n\nupdate_binomial(pmf, data)", "The CDF is the cumulative sum of the PMF, so we can compute it like this:", "cumulative = pmf.cumsum()", "Here's what it looks like, along with the PMF.", "from utils import decorate\n\ndef decorate_euro(title):\n decorate(xlabel='Proportion of heads (x)',\n ylabel='Probability',\n title=title)\n\ncumulative.plot(label='CDF')\npmf.plot(label='PMF')\ndecorate_euro(title='Posterior distribution for the Euro problem')", "The range of the CDF is always from 0 to 1, in contrast with the PMF, where the maximum can be any probability.\nThe result from cumsum is a Pandas Series, so we can use the bracket operator to select an element:", "cumulative[0.61]", "The result is about 0.96, which means that the total probability of all quantities less than or equal to 0.61 is 96%.\nTo go the other way --- to look up a probability and get the corresponding quantile --- we can use interpolation:", "from scipy.interpolate import interp1d\n\nps = cumulative.values\nqs = cumulative.index\n\ninterp = interp1d(ps, qs)\ninterp(0.96)", "The result is about 0.61, so that confirms that the 96th percentile of this distribution is 0.61.\nempiricaldist provides a class called Cdf that represents a cumulative distribution function.\nGiven a Pmf, you can compute a Cdf like this:", "cdf = pmf.make_cdf()", "make_cdf uses np.cumsum to compute the cumulative sum of the probabilities.\nYou can use brackets to select an element from a Cdf:", "cdf[0.61]", "But if you look up a quantity that's not in the distribution, you get a KeyError.", "try:\n cdf[0.615]\nexcept KeyError as e:\n 
print(repr(e))", "To avoid this problem, you can call a Cdf as a function, using parentheses.\nIf the argument does not appear in the Cdf, it interpolates between quantities.", "cdf(0.615)", "Going the other way, you can use quantile to look up a cumulative probability and get the corresponding quantity:", "cdf.quantile(0.9638303)", "Cdf also provides credible_interval, which computes a credible interval that contains the given probability:", "cdf.credible_interval(0.9)", "CDFs and PMFs are equivalent in the sense that they contain the\nsame information about the distribution, and you can always convert\nfrom one to the other.\nGiven a Cdf, you can get the equivalent Pmf like this:", "pmf = cdf.make_pmf()", "make_pmf uses np.diff to compute differences between consecutive cumulative probabilities.\nOne reason Cdf objects are useful is that they compute quantiles efficiently.\nAnother is that they make it easy to compute the distribution of a maximum or minimum, as we'll see in the next section.\nBest Three of Four\nIn Dungeons & Dragons, each character has six attributes: strength, intelligence, wisdom, dexterity, constitution, and charisma.\nTo generate a new character, players roll four 6-sided dice for each attribute and add up the best three.\nFor example, if I roll for strength and get 1, 2, 3, 4 on the dice, my character's strength would be the sum of 2, 3, and 4, which is 9.\nAs an exercise, let's figure out the distribution of these attributes.\nThen, for each character, we'll figure out the distribution of their best attribute.\nI'll import two functions from the previous chapter: make_die, which makes a Pmf that represents the outcome of rolling a die, and add_dist_seq, which takes a sequence of Pmf objects and computes the distribution of their sum.\nHere's a Pmf that represents a six-sided die and a sequence with three references to it.", "from utils import make_die\n\ndie = make_die(6)\ndice = [die] * 3", "And here's the distribution of the sum of three dice.", "from utils import add_dist_seq\n\npmf_3d6 = add_dist_seq(dice)", "Here's what it looks like:", "def decorate_dice(title=''):\n decorate(xlabel='Outcome',\n ylabel='PMF',\n title=title)\n\npmf_3d6.plot()\ndecorate_dice('Distribution of attributes')", "If we roll four dice and add up the best three, computing the distribution of the sum is a bit more complicated.\nI'll estimate the distribution by simulating 10,000 rolls.\nFirst I'll create an array of random values from 1 to 6, with 10,000 rows and 4 columns:", "n = 10000\na = np.random.randint(1, 7, size=(n, 4))", "To find the best three outcomes in each row, I'll use sort with axis=1, which sorts the rows in ascending order.", "a.sort(axis=1)", "Finally, I'll select the last three columns and add them up.", "t = a[:, 1:].sum(axis=1)", "Now t is an array with a single column and 10,000 rows.\nWe can compute the PMF of the values in t like this:", "pmf_best3 = Pmf.from_seq(t)", "The following figure shows the distribution of the sum of three dice, pmf_3d6, and the distribution of the best three out of four, pmf_best3.", "pmf_3d6.plot(label='sum of 3 dice')\npmf_best3.plot(label='best 3 of 4', style='--')\n\ndecorate_dice('Distribution of attributes')", "As you might expect, choosing the best three out of four tends to yield higher values.\nNext we'll find the distribution for the maximum of six attributes, each the sum of the best three of four dice.\nMaximum\nTo compute the distribution of a maximum or minimum, we can make good use of the cumulative distribution 
function.\nFirst, I'll compute the Cdf of the best three of four distribution:", "cdf_best3 = pmf_best3.make_cdf()", "Recall that Cdf(x) is the sum of probabilities for quantities less than or equal to x.\nEquivalently, it is the probability that a random value chosen from the distribution is less than or equal to x.\nNow suppose I draw 6 values from this distribution.\nThe probability that all 6 of them are less than or equal to x is Cdf(x) raised to the 6th power, which we can compute like this:", "cdf_best3**6", "If all 6 values are less than or equal to x, that means that their maximum is less than or equal to x.\nSo the result is the CDF of their maximum.\nWe can convert it to a Cdf object, like this:", "from empiricaldist import Cdf\n\ncdf_max6 = Cdf(cdf_best3**6)", "And compute the equivalent Pmf like this:", "pmf_max6 = cdf_max6.make_pmf()", "The following figure shows the result.", "pmf_max6.plot(label='max of 6 attributes')\n\ndecorate_dice('Distribution of attributes')", "Most characters have at least one attribute greater than 12; almost 10% of them have an 18.\nThe following figure shows the CDFs for the three distributions we have computed.", "import matplotlib.pyplot as plt\n\ncdf_3d6 = pmf_3d6.make_cdf()\ncdf_3d6.plot(label='sum of 3 dice')\n\ncdf_best3 = pmf_best3.make_cdf()\ncdf_best3.plot(label='best 3 of 4 dice', style='--')\n\ncdf_max6.plot(label='max of 6 attributes', style=':')\n\ndecorate_dice('Distribution of attributes')\nplt.ylabel('CDF');", "Cdf provides max_dist, which does the same computation, so we can also compute the Cdf of the maximum like this:", "cdf_max_dist6 = cdf_best3.max_dist(6)", "And we can confirm that the differences are small.", "np.allclose(cdf_max_dist6, cdf_max6)", "In the next section we'll find the distribution of the minimum.\nThe process is similar, but a little more complicated.\nSee if you can figure it out before you go on.\nMinimum\nIn the previous section we computed the distribution of a character's best attribute.\nNow let's compute the distribution of the worst.\nTo compute the distribution of the minimum, we'll use the complementary CDF, which we can compute like this:", "prob_gt = 1 - cdf_best3", "As the variable name suggests, the complementary CDF is the probability that a value from the distribution is greater than x.\nIf we draw 6 values from the distribution, the probability that all 6 exceed x is:", "prob_gt6 = prob_gt**6", "If all 6 exceed x, that means their minimum exceeds x, so prob_gt6 is the complementary CDF of the minimum.\nAnd that means we can compute the CDF of the minimum like this:", "prob_le6 = 1 - prob_gt6", "The result is a Pandas Series that represents the CDF of the minimum of six attributes. 
We can put those values in a Cdf object like this:", "cdf_min6 = Cdf(prob_le6)", "Here's what it looks like, along with the distribution of the maximum.", "cdf_min6.plot(color='C4', label='minimum of 6')\ncdf_max6.plot(color='C2', label='maximum of 6', style=':')\ndecorate_dice('Minimum and maximum of six attributes')\nplt.ylabel('CDF');", "Cdf provides min_dist, which does the same computation, so we can also compute the Cdf of the minimum like this:", "cdf_min_dist6 = cdf_best3.min_dist(6)", "And we can confirm that the differences are small.", "np.allclose(cdf_min_dist6, cdf_min6)", "In the exercises at the end of this chapter, you'll use distributions of the minimum and maximum to do Bayesian inference.\nBut first we'll see what happens when we mix distributions.\nMixture\nIn this section I'll show how we can compute a distribution which is a mixture of other distributions.\nI'll explain what that means with some simple examples;\nthen, more usefully, we'll see how these mixtures are used to make predictions.\nHere's another example inspired by Dungeons & Dragons:\n\n\nSuppose your character is armed with a dagger in one hand and a short sword in the other.\n\n\nDuring each round, you attack a monster with one of your two weapons, chosen at random.\n\n\nThe dagger causes one 4-sided die of damage; the short sword causes one 6-sided die of damage.\n\n\nWhat is the distribution of damage you inflict in each round?\nTo answer this question, I'll make a Pmf to represent the 4-sided and 6-sided dice:", "d4 = make_die(4)\nd6 = make_die(6)", "Now, let's compute the probability you inflict 1 point of damage.\n\n\nIf you attacked with the dagger, it's 1/4.\n\n\nIf you attacked with the short sword, it's 1/6.\n\n\nBecause the probability of choosing either weapon is 1/2, the total probability is the average:", "prob_1 = (d4(1) + d6(1)) / 2\nprob_1", "For the outcomes 2, 3, and 4, the probability is the same, but for 5 and 6 it's different, because those outcomes are impossible with the 4-sided die.", "prob_6 = (d4(6) + d6(6)) / 2\nprob_6", "To compute the distribution of the mixture, we could loop through the possible outcomes and compute their probabilities.\nBut we can do the same computation using the + operator:", "mix1 = (d4 + d6) / 2", "Here's what the mixture of these distributions looks like.", "mix1.bar(alpha=0.7)\ndecorate_dice('Mixture of one 4-sided and one 6-sided die')", "Now suppose you are fighting three monsters:\n\n\nOne has a club, which causes one 4-sided die of damage.\n\n\nOne has a mace, which causes one 6-sided die.\n\n\nAnd one has a quarterstaff, which also causes one 6-sided die. 
\n\n\nBecause the melee is disorganized, you are attacked by one of these monsters each round, chosen at random.\nTo find the distribution of the damage they inflict, we can compute a weighted average of the distributions, like this:", "mix2 = (d4 + 2*d6) / 3", "This distribution is a mixture of one 4-sided die and two 6-sided dice.\nHere's what it looks like.", "mix2.bar(alpha=0.7)\ndecorate_dice('Mixture of one 4-sided and two 6-sided die')", "In this section we used the + operator, which adds the probabilities in the distributions, not to be confused with Pmf.add_dist, which computes the distribution of the sum of the distributions.\nTo demonstrate the difference, I'll use Pmf.add_dist to compute the distribution of the total damage done per round, which is the sum of the two mixtures:", "total_damage = Pmf.add_dist(mix1, mix2)", "And here's what it looks like.", "total_damage.bar(alpha=0.7)\ndecorate_dice('Total damage inflicted by both parties')", "General Mixtures\nIn the previous section we computed mixtures in an ad hoc way.\nNow we'll see a more general solution.\nIn future chapters, we'll use this solution to generate predictions for real-world problems, not just role-playing games.\nBut if you'll bear with me, we'll continue the previous example for one more section.\nSuppose three more monsters join the combat, each of them with a battle axe that causes one 8-sided die of damage.\nStill, only one monster attacks per round, chosen at random, so the damage they inflict is a mixture of:\n\nOne 4-sided die,\nTwo 6-sided dice, and\nThree 8-sided dice.\n\nI'll use a Pmf to represent a randomly chosen monster:", "hypos = [4,6,8]\ncounts = [1,2,3]\npmf_dice = Pmf(counts, hypos)\npmf_dice.normalize()\npmf_dice", "This distribution represents the number of sides on the die we'll roll and the probability of rolling each one.\nFor example, one of the six monsters has a dagger, so the probability is $1/6$ that we roll a 4-sided die.\nNext I'll make a sequence of Pmf objects to represent the dice:", "dice = [make_die(sides) for sides in hypos]", "To compute the distribution of the mixture, I'll compute the weighted average of the dice, using the probabilities in pmf_dice as the weights.\nTo express this computation concisely, it is convenient to put the distributions into a Pandas DataFrame:", "import pandas as pd\n\npd.DataFrame(dice)", "The result is a DataFrame with one row for each distribution and one column for each possible outcome.\nNot all rows are the same length, so Pandas fills the extra spaces with the special value NaN, which stands for \"not a number\".\nWe can use fillna to replace the NaN values with 0.\nThe next step is to multiply each row by the probabilities in pmf_dice, which turns out to be easier if we transpose the matrix so the distributions run down the columns rather than across the rows:", "df = pd.DataFrame(dice).fillna(0).transpose()\n\ndf", "Now we can multiply by the probabilities in pmf_dice:", "df *= pmf_dice.ps\n\ndf", "And add up the weighted distributions:", "df.sum(axis=1)", "The argument axis=1 means we want to sum across the rows.\nThe result is a Pandas Series.\nPutting it all together, here's a function that makes a weighted mixture of distributions.", "def make_mixture(pmf, pmf_seq):\n \"\"\"Make a mixture of distributions.\"\"\"\n df = pd.DataFrame(pmf_seq).fillna(0).transpose()\n df *= np.array(pmf)\n total = df.sum(axis=1)\n return Pmf(total)", "The first parameter is a Pmf that maps from each hypothesis to a probability.\nThe second parameter 
is a sequence of Pmf objects, one for each hypothesis.\nWe can call it like this:", "mix = make_mixture(pmf_dice, dice)", "And here's what it looks like.", "mix.bar(label='mixture', alpha=0.6)\ndecorate_dice('Distribution of damage with three different weapons')", "In this section I used Pandas so that make_mixture is concise, efficient, and hopefully not too hard to understand.\nIn the exercises at the end of the chapter, you'll have a chance to practice with mixtures, and we will use make_mixture again in the next chapter.\nSummary\nThis chapter introduces the Cdf object, which represents the cumulative distribution function (CDF).\nA Pmf and the corresponding Cdf are equivalent in the sense that they contain the same information, so you can convert from one to the other.\nThe primary difference between them is performance: some operations are faster and easier with a Pmf; others are faster with a Cdf.\nIn this chapter we used Cdf objects to compute distributions of maximums and minimums; these distributions are useful for inference if we are given a maximum or minimum as data.\nYou will see some examples in the exercises, and in future chapters.\nWe also computed mixtures of distributions, which we will use in the next chapter to make predictions.\nBut first you might want to work on these exercises.\nExercises\nExercise: When you generate a D&D character, instead of rolling dice, you can use the \"standard array\" of attributes, which is 15, 14, 13, 12, 10, and 8.\nDo you think you are better off using the standard array or (literally) rolling the dice?\nCompare the distribution of the values in the standard array to the distribution we computed for the best three out of four:\n\n\nWhich distribution has higher mean? Use the mean method.\n\n\nWhich distribution has higher standard deviation? Use the std method.\n\n\nThe lowest value in the standard array is 8. For each attribute, what is the probability of getting a value less than 8? If you roll the dice six times, what's the probability that at least one of your attributes is less than 8?\n\n\nThe highest value in the standard array is 15. For each attribute, what is the probability of getting a value greater than 15? If you roll the dice six times, what's the probability that at least one of your attributes is greater than 15?\n\n\nTo get you started, here's a Cdf that represents the distribution of attributes in the standard array:", "standard = [15,14,13,12,10,8]\ncdf_standard = Cdf.from_seq(standard)", "We can compare it to the distribution of attributes you get by rolling four dice at adding up the best three.", "cdf_best3.plot(label='best 3 of 4', color='C1', style='--')\ncdf_standard.step(label='standard set', color='C7')\n\ndecorate_dice('Distribution of attributes')\nplt.ylabel('CDF');", "I plotted cdf_standard as a step function to show more clearly that it contains only a few quantities.", "# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here", "Exercise: Suppose you are fighting three monsters:\n\n\nOne is armed with a short sword that causes one 6-sided die of damage,\n\n\nOne is armed with a battle axe that causes one 8-sided die of damage, and\n\n\nOne is armed with a bastard sword that causes one 10-sided die of damage.\n\n\nOne of the monsters, chosen at random, attacks you and does 1 point of damage.\nWhich monster do you think it was? 
Compute the posterior probability that each monster was the attacker.\nIf the same monster attacks you again, what is the probability that you suffer 6 points of damage?\nHint: Compute a posterior distribution as we have done before and pass it as one of the arguments to make_mixture.", "# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here", "Exercise: Henri Poincaré was a French mathematician who taught at the Sorbonne around 1900. The following anecdote about him is probably fiction, but it makes an interesting probability problem.\nSupposedly Poincaré suspected that his local bakery was selling loaves of bread that were lighter than the advertised weight of 1 kg, so every day for a year he bought a loaf of bread, brought it home and weighed it. At the end of the year, he plotted the distribution of his measurements and showed that it fit a normal distribution with mean 950 g and standard deviation 50 g. He brought this evidence to the bread police, who gave the baker a warning.\nFor the next year, Poincaré continued to weigh his bread every day. At the end of the year, he found that the average weight was 1000 g, just as it should be, but again he complained to the bread police, and this time they fined the baker.\nWhy? Because the shape of the new distribution was asymmetric. Unlike the normal distribution, it was skewed to the right, which is consistent with the hypothesis that the baker was still making 950 g loaves, but deliberately giving Poincaré the heavier ones.\nTo see whether this anecdote is plausible, let's suppose that when the baker sees Poincaré coming, he hefts n loaves of bread and gives Poincaré the heaviest one. How many loaves would the baker have to heft to make the average of the maximum 1000 g?\nTo get you started, I'll generate a year's worth of data from a normal distribution with the given parameters.", "mean = 950\nstd = 50\n\nnp.random.seed(17)\nsample = np.random.normal(mean, std, size=365)\n\n# Solution goes here\n\n# Solution goes here" ]
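One of the key tricks in this chapter is that the CDF of the maximum of n independent draws is the single-draw CDF raised to the nth power. The snippet below checks that rule with plain NumPy on a fair six-sided die, without empiricaldist; the simulation is only a sanity check and its sample size is arbitrary.

```python
import numpy as np

# Plain-NumPy check of the "CDF to the nth power" rule, without empiricaldist:
# for a fair six-sided die, the CDF of the maximum of 6 rolls is the single-roll
# CDF raised to the 6th power. The simulation size is arbitrary.
qs = np.arange(1, 7)
cdf_one = qs / 6.0                  # CDF of a single fair die
cdf_max6 = cdf_one ** 6             # CDF of the max of six rolls

rolls = np.random.randint(1, 7, size=(100000, 6)).max(axis=1)
empirical = np.array([(rolls <= q).mean() for q in qs])

print(np.round(cdf_max6, 3))
print(np.round(empirical, 3))       # should be close to the line above
```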
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
1995parham/Learning
ml/housing/Housing.ipynb
gpl-2.0
[ "Here we are using the California Housing dataset to learn more about Machine Learning.", "import pandas as pd\n\nhousing = pd.read_csv('housing.csv')\nhousing.head()\n\nhousing.info()\n\nhousing.describe()", "In the meanwhile we are trying to have more information about pandas. In the following sections we are using the value_counts method to have more information about each feature values. This method specify number of different values for given feature.", "housing['total_rooms'].value_counts()\n\nhousing['ocean_proximity'].value_counts()", "See the difference between loc and iloc methods in a simple pandas DataFrame.", "pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}]).iloc[1]\n\npd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}]).loc[1]\n\npd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}]).loc[1, ['b']]\n\npd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}]).loc[[True, True, False]]", "Here we want to see the apply function of pandas for an specific feature.", "pd.DataFrame([{'a': 1, 'b': '1'}, {'a': 2, 'b': 1}, {'a': 3, 'b': 1}])['a'].apply(lambda a: a > 10)", "The following function helps to split the given dataset into test and train sets.", "from zlib import crc32\nimport numpy as np\n\ndef test_set_check(identifier, test_ratio):\n return crc32(np.int64(identifier)) & 0xffffffff < test_ratio * 2**32\n\ndef split_train_test_by_id(data, test_ratio, id_column):\n ids = data[id_column]\n in_test_set = ids.apply(lambda _id: test_set_check(_id, test_ratio))\n return data.loc[~in_test_set], data.loc[in_test_set]\n\nhousing_with_id = housing.reset_index() # adds an \"index\" column\ntrain_set, test_set = split_train_test_by_id(housing_with_id, 0.2, 'index')\n\nhousing = train_set.copy()\nhousing.plot(kind=\"scatter\", x=\"longitude\", y=\"latitude\", alpha=0.1)\n\nimport matplotlib.pyplot as plt\n\nhousing.plot(kind='scatter', x='longitude', y='latitude',\n alpha=0.4, s=housing['population']/100, label='population',\n c='median_house_value', cmap=plt.get_cmap('jet'), colorbar=True,\n )" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bradhowes/keystrokecountdown
src/articles/PDMPlayground/index.ipynb
mit
[ "A camera I'm trying to support contains four microphones arranged 90° apart on the horizontal plane of the device. The microphones are MEMS components (PDF spec sheet SPK0415HM4H-B) that generate a pulse density modulated (PDM) signal. These components are common in digital devices where audio capture is desired in a small form factor, such as in cellular phones and small video cameras.\nIn order to capture high-quality audio with PDM devices, one must perform the following:\n\nDrive the PDM device with a high-frequency clock\nRecord the pulse train with a decimating low-pass filter\n\nAccording to the paper \"Understanding PDM Digital Audio\", a typical configuration for CD-quality audio is to drive the PDM device with a 3.072 MHz clock, then decimating the resulting pulse train by 64x to get a resulting sampling frequency of 48 kHz. This would then leave a final bandwidth of 24 kHz. While decimating, one must also perform low-pass filtering in order to keep noise in the +48 kHz bands from falling into the lower 0-24 kHz band due to aliasing.\nThe NumPy Python code below is my attempt to understand how all of this works. The approach I took is to first define a signal to be sampled (2 sin waves of differing frequencies). Next, I submit the signal to a pseudo-PDM processor that generates a pulse train of -1 and +1 values. These samples then flow into a decimation and low-pass filter, and I compare the resulting signal to see how well it matches with the original.\nSignal Creation\nFor simplicity, I am working in the low-frequency band. This should be immaterial since I believe it is just a scaling issue. Instead of 24 kHz bandwidth, I am working with 512 Hz. The signal that I created is made up of two sinusoids, one at 51 Hz and the other at 247 Hz.", "figWidth = 16 # Width of many of the figures\n\nsampleFrequency = 1024 # Hz\nbandwidth = sampleFrequency / 2 # 0-512 Hz (also Nyquist freq)\nsampleDuration = 1.0 / sampleFrequency # time duration per cycle\n\nsignalTime = np.arange(0, 1, sampleDuration)\nsignal1Freq = 51 # Hz\nsignal1Samples = np.sin(2.0 * np.pi * signal1Freq * signalTime)\nsignal2Freq = 247 # Hz\nsignal2Samples = np.sin(2.0 * np.pi * signal2Freq * signalTime)\nsignalSamples = (signal1Samples + signal2Samples) / 2", "Below is a plot of the signal.", "plt.figure(figsize=(figWidth, 4))\nplt.plot(signalTime, signalSamples)\nplt.xlabel(\"t\")\nplt.ylabel(\"Amplitude\")\nplt.suptitle('Source Signal')\nplt.show()", "To verify that the signal really has only two frequency components, here is the output of the FFT for it.", "fftFreqs = np.arange(bandwidth)\nfftValues = (np.fft.fft(signalSamples) / sampleFrequency)[:int(bandwidth)]\nplt.plot(fftFreqs, np.absolute(fftValues))\nplt.xlim(0, bandwidth)\nplt.ylim(0, 0.3)\nplt.xlabel(\"Frequency\")\nplt.ylabel(\"Magnitude\")\nplt.suptitle(\"Source Signal Frequency Components\")\nplt.show()", "PDM Modulation\nNow that we have a signal to work with, next step is to generate a pulse train from it. The code below is a simple hack that generates 64 samples for every one in the original signal. Normally, this would involve interpolation so that the 63 additional samples vary linearly from the previous sample to the current one. 
This lack will introduce some noise due to discontinuities.\nThe setting pdmFreq is the number of samples to create for each element in signalSamples.", "pdmFreq = 64\npdmPulses = np.empty(sampleFrequency * pdmFreq)\npdmTime = np.arange(0, pdmPulses.size)\n\npdmIndex = 0\nsignalIndex = 0\nquantizationError = 0\nwhile pdmIndex < pdmPulses.size:\n sample = signalSamples[signalIndex]\n signalIndex += 1\n for tmp in range(pdmFreq):\n if sample >= quantizationError:\n bit = 1\n else:\n bit = -1\n quantizationError = bit - sample + quantizationError\n pdmPulses[pdmIndex] = bit\n pdmIndex += 1\n\nprint(pdmIndex, signalIndex, pdmPulses.size, signalSamples.size)", "Visualize the first 4K PDM samples. We should be able to clearly see the pulsing.", "span = 1024\nplt.figure(figsize=(16, 6))\ncounter = 1\nfor pos in range(0, pdmIndex, span):\n from matplotlib.ticker import MultipleLocator\n plt.subplot(4, 1, counter)\n counter += 1\n \n # Generate a set of time values that correspond to pulses with +1 values. Remove the rest\n # and plot.\n plt.vlines(np.delete(pdmTime[pos:pos + span], np.nonzero(pdmPulses[pos:pos + span] > 0.0)[0]), 0, 1, 'g')\n plt.ylim(0, 1)\n plt.xlim(pos, pos + span)\n plt.tick_params(axis='both', which='major', labelsize=8)\n ca = plt.gca()\n axes = ca.axes\n axes.yaxis.set_visible(False)\n # axes.yaxis.set_ticklabels([])\n axes.xaxis.set_ticks_position('bottom')\n # axes.xaxis.set_ticks(np.arange(pos, pos + span, 64))\n axes.xaxis.set_major_locator(MultipleLocator(64))\n spines = axes.spines\n for tag in ('top', 'bottom'):\n spines[tag].set_visible(False)\n if counter == 5:\n break\nplt.show()", "Low-pass Filter\nA fundamental nature of high-frequency sampling for PCM is that the noise from the quantization resulting from the PCM modulator is also of high-frequency (in a real system, there is also low-freq noise from clock jitter, heat, etc). When we decimate the signal, we do not want to bring the noise into the lower frequencies so we need to filter the samples before incorporating them into the new, lower-frequency signal. Our low-pass filter is a finite impulse response (FIR) type, with tap values\ntaken from the TFilter web application.\nOur filter is designed to operate at 2 x sampleFrequency so that it will cover our original bandwidth (512 Hz) in the pass-band and heavily attenuate everything else above.\nLowPassFilter.py source", "import LowPassFilter\nlpf = LowPassFilter.LowPassFilter()", "PDM Decimation\nOur PDM signal has a sampling frequency of 64 &times; sampleFrequency or 65.536 kHz. To get to our original sampleFrequency we need to ultimately use one sample out of every 64 we see in the PDM pulse train.\nSince we want to filter out high-frequency noise, and our filter is tuned for 2 &times; sampleFrequency (2.048 kHz), will take every 32nd sample and send each to our filter, but with will only use every other filtered sample.\nNOTE: the reconstruction here of a sample value from PDM values is not what would really take place. 
In particular, the code below obtains an average of the +/- unity values in the chain, where a real implementation would count bits and convert into a sample value.", "derivedSamples = []\npdmIndex = 0\nwhile pdmIndex < pdmPulses.size:\n lpf(pdmPulses[int(pdmIndex)])\n pdmIndex += pdmFreq / 2\n filtered = lpf(pdmPulses[int(pdmIndex)])\n pdmIndex += pdmFreq / 2\n derivedSamples.append(filtered)\nderivedSamples = np.array(derivedSamples)\n\nsignalSamples.size, derivedSamples.size", "Now plots of the resulting signal in both time and frequency domains", "plt.figure(figsize=(figWidth, 4))\nplt.plot(signalTime, derivedSamples)\nplt.xlabel(\"t\")\nplt.ylabel(\"Amplitude\")\nplt.suptitle('Derived Signal')\nplt.show()\n\nfftFreqs = np.arange(bandwidth)\nfftValues = (np.fft.fft(derivedSamples) / sampleFrequency)[:int(bandwidth)]\nplt.plot(fftFreqs, np.absolute(fftValues))\nplt.xlim(0, bandwidth)\nplt.ylim(0, 0.3)\nplt.xlabel(\"Frequency\")\nplt.ylabel(\"Magnitude\")\nplt.suptitle(\"Derived Signal Frequency Components\")\nplt.show()", "Filtering Test\nLet's redo the PCM modulation / decimation steps but this time while injecting a high-frequency (32.767 kHz) signal with 30% intensity during the modulation. Hopefully, we will not see this noise appear in the final result.", "pdmFreq = 64\npdmPulses = np.empty(sampleFrequency * pdmFreq)\npdmTime = np.arange(0, pdmPulses.size)\n\npdmIndex = 0\nsignalIndex = 0\nquantizationError = 0\n\nnoiseFreq = 32767 # Hz\nnoiseAmplitude = .30\nnoiseSampleDuration = 1.0 / (sampleFrequency * pdmFreq)\nnoiseTime = np.arange(0, 1, noiseSampleDuration)\nnoiseSamples = np.sin(2.0 * np.pi * noiseFreq * noiseTime) * noiseAmplitude\n\nwhile pdmIndex < pdmPulses.size:\n sample = signalSamples[signalIndex] + noiseSamples[pdmIndex]\n signalIndex += 1\n for tmp in range(pdmFreq):\n if sample >= quantizationError:\n bit = 1\n else:\n bit = -1\n quantizationError = bit - sample + quantizationError\n pdmPulses[pdmIndex] = bit\n pdmIndex += 1\n\nprint(pdmIndex, signalIndex, pdmPulses.size, signalSamples.size, noiseSamples.size)\n\nderivedSamples = []\npdmIndex = 0\nlpf.reset()\nwhile pdmIndex < pdmPulses.size:\n lpf(pdmPulses[int(pdmIndex)])\n pdmIndex += pdmFreq / 2\n filtered = lpf(pdmPulses[int(pdmIndex)])\n pdmIndex += pdmFreq / 2\n derivedSamples.append(filtered)\nderivedSamples = np.array(derivedSamples)\n\nplt.figure(figsize=(figWidth, 4))\nplt.plot(signalTime, derivedSamples)\nplt.xlabel(\"t\")\nplt.ylabel(\"Amplitude\")\nplt.suptitle('Derived Signal')\nplt.show()\n\nfftFreqs = np.arange(bandwidth)\nfftValues = (np.fft.fft(derivedSamples) / sampleFrequency)[:int(bandwidth)]\nplt.plot(fftFreqs, np.absolute(fftValues))\nplt.xlim(0, bandwidth)\nplt.ylim(0, 0.3)\nplt.xlabel(\"Frequency\")\nplt.ylabel(\"Magnitude\")\nplt.suptitle(\"Derived Signal Frequency Components\")\nplt.show()", "Not bad. There is some noticable attenuation of the 256 Hz signal, but the noise floor does not look much different than the plot without the injected noise.\nFurther Work\nThe filtering above is expensive, requiring 32 multiplications per sample. The main alternative is a cascaded-integrator comb filter which only uses additions. \nThere is a interesting work published by Cheshire Engineering that describes in detail how to integrate a PCM device with small, low-powered devices. 
They make available C code that performs CIC filtering.\nAdditional Reading\nDesign and Implementation of Efficient CIC Filter Structure for Decimation\nUnderstanding Cascaded Integrator Comb Filters\nExample of Cascaded Integrator Comb Filter in Matlab" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
NLP-Deeplearning-Club/Classic-ML-Methods-Algo
ipynbs/unsupervised/Kmeans.ipynb
mit
[ "简介\n我们的第一讲贡献给K-means方法,这是一种聚类方法,用于讲数据点进行划分.我们常会把它和Lloyd算法,也就是K-means方法的一种实现算法混淆.\n历史和背后思想\nK-means方法是1957年由Hugo Steinhaus提出,而\"K-means\"这个术语是James MacQueen在1967年第一次使用.它的思想是将数据点分到K个聚类(Clusters),使得每个点和所在聚类的中心的距离的平方和最小,也就是最小化intra-class variance,这里我们把这个intra-class variance叫做成本函数(cost function)。数学上就是\n$$ \\underset{S}{\\arg\\min} \\sum\\limits_{k}\\sum\\limits_{x_i \\in S(k)} |x_i - \\mu_k| ^2 $$\n其中 $ \\mu_k $是各个聚类的中心,$ S(k) $是每个聚类的点的集合。\nLloyd算法\n我们以最常用的一种启发式算法(Heuristics)--Lloyd算法为例,介绍K-means方法\nK-means的一般步骤:\n\n初始化K个聚类的中心,一般是在n个数据点中随机选择,n为数据集的基数\n根据每个数据点到每个聚类中心的距离,将它分配到最近的聚类,然后更新聚类的中心,迭代直到收敛,也就是每个点的聚类不再改变.\n\n复杂度和收敛性\n因为聚类一共有 $ n^K $ 种情况,每次迭代都会降低成本函数(聚类内所有点到x点的距离平方和是个二次函数,这个函数在x为聚点中心是取到最小值),所以我们总可以在有限时间内收敛.但是现实操作中,我们往往将迭代次数或者成本函数的改善用于终止函数.简单来讲,就是迭代i次终止,或者当某次迭代的结果对上次迭代的结果改善度小于某个阈值时终止.所以Lloyd的复杂度在固定迭代次数的情况下复杂度为$ O(nKd*i) $,其中d为数据点的维度.\nK-means方法一定会收敛,但不一定收敛到全局最优点(Lloyd算法就是一种启发式算法).初始化的K个聚点中心起着决定性作用,所以人们试着改进在选取初始聚点中心的方法.比如K-means++算法,就是想让初始K个聚类中心相互尽量离得远.\n它的具体步骤是:\n\n随机选择第一个聚点中心\n对数据集中剩下的每个点x,计算它和最近的聚点中心的距离$ d(x)$,将所有的$ d^2(x) $归一化求得概率$ g(x) $,这时所以剩下的点就对应$ (0,1) $上不重复的线段\n随机得在$ (0,1) $上取值,该值落在的x点就成为新的聚点中心\n重复步骤2和3,直到找到K个聚点中心\nK-means一般步骤2\n\n超参数\nK-means方法中聚类的数量K,作为超参数(hyper parameters),可以是提前给定的,也可以是以输入形式得到的.我们必须在训练前有一个K,一个坏的K会带来不好的结果,所以一般都会多训练几次来确定一个合适的K.\nK-means方法处理球面或者超球面的数据集时表现很好,也就是数据呈现比较明显的围绕几个中心分布的情况.但面对其他分布的数据集时表现一般,并且每次运行(run)时结果不一定相同.\n类似方法\n类似的方法有K-medoid和GMM(高斯混合模型).K-medoid和K-means的区别在于一般步骤2时,我们选择聚类的中心点,也就是离中心最近的那个数据点,而不是中心.这样做的好处是减少了极端值对聚类的影响,但加大了计算复杂度,因为每次更新都要计算聚类内每个点到聚类中心的距离,不适合于大规模的数据集.至于GMM,留待第三节讲.\n代码演示", "import matplotlib.pyplot as plt\n\n%matplotlib inline\n\nimport sys\nfrom pathlib import Path\n\np = Path(\".\")\n\np = p.absolute().parent\n\nsys.path.insert(0,str(p))\n\nimport codes\n\ndef draw_2d(dataset,k):\n #colors = cm.rainbow(np.linspace(0, 1, k))\n Color = ['b', 'g', 'r', 'c', 'm', 'y', 'k', 'w']\n index = {j:n for n,j in enumerate(list(set([i[\"label\"] for i in dataset])))}\n for i in dataset:\n plt.scatter(i[\"data\"][0], i[\"data\"][1],color=Color[index.get(i[\"label\"])])\n plt.show()\n \n\ndef main():\n a = [2, 2]\n b = [1, 2]\n c = [1, 1]\n d = [0, 0]\n f = [3, 2]\n dataset = [a, b, c, d, f]\n dataset.append([1.5, 0])\n dataset.append([3, 4])\n res = codes.k_means(dataset, k=2)\n return res\n\nx = main()\n\nx\n\ndraw_2d(x,2)\n\na = [1,2,3]\n\nb = [1.0,2.0,3.0]\n\nall([float(i[0]) == float(i[1]) for i in zip(a,b)])", "使用sklearn实现k-means聚类\nsklearn.cluster.KMeans提供了一个用于做k-means聚类的接口.", "from sklearn.cluster import KMeans\nimport numpy as np\n\nX = np.array([[1, 2], [1, 4], [1, 0],[4, 2], [4, 4], [4, 0]])\nkmeans = KMeans(n_clusters=2, random_state=0).fit(X)", "查看模型训练结束后各个向量的标签", "kmeans.labels_", "模型训练结束后用于预测向量的标签", "kmeans.predict([[0, 0], [4, 4]])", "模型训练结束后的各个簇的中心点", "kmeans.cluster_centers_" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.14/_downloads/plot_mne_dspm_source_localization.ipynb
bsd-3-clause
[ "%matplotlib inline", "Source localization with MNE/dSPM/sLORETA\nThe aim of this tutorials is to teach you how to compute and apply a linear\ninverse method such as MNE/dSPM/sLORETA on evoked/raw/epochs data.", "import numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.minimum_norm import (make_inverse_operator, apply_inverse,\n write_inverse_operator)", "Process MEG data", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\n\nraw = mne.io.read_raw_fif(raw_fname)\nraw.set_eeg_reference() # set EEG average reference\nevents = mne.find_events(raw, stim_channel='STI 014')\n\nevent_id = dict(aud_r=1) # event trigger and conditions\ntmin = -0.2 # start of each epoch (200ms before the trigger)\ntmax = 0.5 # end of each epoch (500ms after the trigger)\nraw.info['bads'] = ['MEG 2443', 'EEG 053']\npicks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,\n exclude='bads')\nbaseline = (None, 0) # means from the first instant to t = 0\nreject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)\n\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,\n baseline=baseline, reject=reject)", "Compute regularized noise covariance\nFor more details see tut_compute_covariance.", "noise_cov = mne.compute_covariance(\n epochs, tmax=0., method=['shrunk', 'empirical'])\n\nfig_cov, fig_spectra = mne.viz.plot_cov(noise_cov, raw.info)", "Compute the evoked response", "evoked = epochs.average()\nevoked.plot()\nevoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag')\n\n# Show whitening\nevoked.plot_white(noise_cov)", "Inverse modeling: MNE/dSPM on evoked and raw data", "# Read the forward solution and compute the inverse operator\n\nfname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'\nfwd = mne.read_forward_solution(fname_fwd, surf_ori=True)\n\n# Restrict forward solution as necessary for MEG\nfwd = mne.pick_types_forward(fwd, meg=True, eeg=False)\n\n# make an MEG inverse operator\ninfo = evoked.info\ninverse_operator = make_inverse_operator(info, fwd, noise_cov,\n loose=0.2, depth=0.8)\n\nwrite_inverse_operator('sample_audvis-meg-oct-6-inv.fif',\n inverse_operator)", "Compute inverse solution", "method = \"dSPM\"\nsnr = 3.\nlambda2 = 1. 
/ snr ** 2\nstc = apply_inverse(evoked, inverse_operator, lambda2,\n method=method, pick_ori=None)\n\ndel fwd, inverse_operator, epochs # to save memory", "Visualization\nView activation time-series", "plt.plot(1e3 * stc.times, stc.data[::100, :].T)\nplt.xlabel('time (ms)')\nplt.ylabel('%s value' % method)\nplt.show()", "Here we use peak getter to move visualization to the time point of the peak\nand draw a marker at the maximum peak vertex.", "vertno_max, time_max = stc.get_peak(hemi='rh')\n\nsubjects_dir = data_path + '/subjects'\nbrain = stc.plot(surface='inflated', hemi='rh', subjects_dir=subjects_dir,\n clim=dict(kind='value', lims=[8, 12, 15]),\n initial_time=time_max, time_unit='s')\nbrain.add_foci(vertno_max, coords_as_verts=True, hemi='rh', color='blue',\n scale_factor=0.6)\nbrain.show_view('lateral')", "Morph data to average brain", "fs_vertices = [np.arange(10242)] * 2\nmorph_mat = mne.compute_morph_matrix('sample', 'fsaverage', stc.vertices,\n fs_vertices, smooth=None,\n subjects_dir=subjects_dir)\nstc_fsaverage = stc.morph_precomputed('fsaverage', fs_vertices, morph_mat)\nbrain_fsaverage = stc_fsaverage.plot(surface='inflated', hemi='rh',\n subjects_dir=subjects_dir,\n clim=dict(kind='value', lims=[8, 12, 15]),\n initial_time=time_max, time_unit='s')\nbrain_fsaverage.show_view('lateral')", "Exercise\n\nBy changing the method parameter to 'sloreta' recompute the source\n estimates using the sLORETA method." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kit-cel/wt
SC468/BIAWGN_Capacity.ipynb
gpl-2.0
[ "Capacity of the Binary-Input AWGN (BI-AWGN) Channel\nThis code is provided as supplementary material of the OFC short course SC468\nThis code illustrates\n* Calculating the capacity of the binary input AWGN channel using numerical integration\n* Capacity as a function of $E_s/N_0$ and $E_b/N_0$", "import numpy as np\nimport scipy.integrate as integrate\nimport matplotlib.pyplot as plt", "Conditional pdf $f_{Y|X}(y|x)$ for a channel with noise variance (per dimension) $\\sigma_n^2$. This is merely the Gaussian pdf with mean $x$ and variance $\\sigma_n^2$", "def f_YgivenX(y,x,sigman):\n return np.exp(-((y-x)**2)/(2*sigman**2))/np.sqrt(2*np.pi)/sigman", "Output pdf $f_Y(y) = \\frac12[f_{Y|X}(y|X=+1)+f_{Y|X}(y|X=-1)]$", "def f_Y(y,sigman):\n return 0.5*(f_YgivenX(y,+1,sigman)+f_YgivenX(y,-1,sigman))", "This is the function we like to integrate, $f_Y(y)\\cdot\\log_2(f_Y(y))$. We need to take special care of the case when the input is 0, as we defined $0\\cdot\\log_2(0)=0$, which is usually treated as \"nan\"", "def integrand(y, sigman):\n value = f_Y(y,sigman)\n if value < 1e-20:\n return_value = 0\n else:\n return_value = value * np.log2(value)\n \n return return_value", "Compute the capacity using numerical integration. We have\n\\begin{equation}\nC_{\\text{BI-AWGN}} = -\\int_{-\\infty}^\\infty f_Y(y)\\log_2(f_Y(y))\\mathrm{d}y - \\frac12\\log_2(2\\pi e\\sigma_n^2)\n\\end{equation}", "def C_BIAWGN(sigman):\n # numerical integration of the h(Y) part\n integral = integrate.quad(integrand, -np.inf, np.inf, args=(sigman))[0]\n # take into account h(Y|X)\n return -integral - 0.5*np.log2(2*np.pi*np.exp(1)*sigman**2)", "This is an alternative way of calculating the capacity by approximating the integral using the Gauss-Hermite Quadrature (https://en.wikipedia.org/wiki/Gauss%E2%80%93Hermite_quadrature). 
The Gauss-Hermite quadrature states that\n\\begin{equation}\n\\int_{-\\infty}^\\infty e^{-x^2}f(x)\\mathrm{d}x \\approx \\sum_{i=1}^nw_if(x_i)\n\\end{equation}\nwhere $w_i$ and $x_i$ are the respective weights and roots that are given by the Hermite polynomials.\nWe have to rearrange the integral $I = \\int_{-\\infty}^\\infty f_Y(y)\\log_2(f_Y(y))\\mathrm{d}y$ a little bit to put it into a form suitable for the Gauss-Hermite quadrature\n\\begin{align}\nI &= \\frac{1}{2}\\sum_{x\\in{\\pm 1}}\\int_{-\\infty}^\\infty f_{Y|X}(y|X=x)\\log_2(f_Y(y))\\mathrm{d}y \\\n&= \\frac{1}{2}\\sum_{x\\in{\\pm 1}}\\int_{-\\infty}^\\infty \\frac{1}{\\sqrt{2\\pi}\\sigma_n}e^{-\\frac{(y-x)^2}{2\\sigma_n^2}}\\log_2(f_Y(y))\\mathrm{d}y \\\n&\\stackrel{(a)}{=} \\frac{1}{2}\\sum_{x\\in{\\pm 1}}\\int_{-\\infty}^\\infty \\frac{1}{\\sqrt{\\pi}}e^{-z^2}\\log_2(f_Y(\\sqrt{2}\\sigma_n z + x))\\mathrm{d}z \\\n&\\approx \\frac{1}{2\\sqrt{\\pi}}\\sum_{x\\in{\\pm 1}} \\sum_{i=1}^nw_i \\log_2(f_Y(\\sqrt{2}\\sigma_n x_i + x))\n\\end{align}\nwhere in $(a)$, we substitute $z = \\frac{y-x}{\\sqrt{2}\\sigma}$", "# alternative method using Gauss-Hermite Quadrature (see https://en.wikipedia.org/wiki/Gauss%E2%80%93Hermite_quadrature) \n# use 40 components to approximate the integral, should be sufficiently exact\nx_GH, w_GH = np.polynomial.hermite.hermgauss(40)\nprint(w_GH)\ndef C_BIAWGN_GH(sigman):\n integral_xplus1 = np.sum(w_GH * [np.log2(f_Y(np.sqrt(2)*sigman*xi + 1, sigman)) for xi in x_GH])\n integral_xminus1 = np.sum(w_GH * [np.log2(f_Y(np.sqrt(2)*sigman*xi - 1, sigman)) for xi in x_GH])\n\n integral = (integral_xplus1 + integral_xminus1)/2/np.sqrt(np.pi)\n return -integral - 0.5*np.log2(2*np.pi*np.exp(1)*sigman**2)\n ", "Compute the capacity for a range of of $E_s/N_0$ values (given in dB)", "esno_dB_range = np.linspace(-16,10,100)\n\n# convert dB to linear\nesno_lin_range = [10**(esno_db/10) for esno_db in esno_dB_range]\n\n# compute sigma_n\nsigman_range = [np.sqrt(1/2/esno_lin) for esno_lin in esno_lin_range]\n\ncapacity_BIAWGN = [C_BIAWGN(sigman) for sigman in sigman_range]\n\n\n# capacity of the AWGN channel\ncapacity_AWGN = [0.5*np.log2(1+1/(sigman**2)) for sigman in sigman_range]", "Plot the capacity curves as a function of $E_s/N_0$ (in dB) and $E_b/N_0$ (in dB). In order to calculate $E_b/N_0$, we recall from the lecture that\n\\begin{equation}\n\\frac{E_s}{N_0} = r\\cdot \\frac{E_b}{N_0}\\qquad\\Rightarrow\\qquad\\frac{E_b}{N_0} = \\frac{1}{r}\\cdot \\frac{E_s}{N_0}\n\\end{equation}\nNext, we know that the best rate that can be achieved is the capacity, i.e., $r=C$. Hence, we get $\\frac{E_b}{N_0}=\\frac{1}{C}\\cdot\\frac{E_s}{N_0}$. Converting to decibels yields\n\\begin{align}\n\\frac{E_b}{N_0}\\bigg|{\\textrm{dB}} &= 10\\cdot\\log{10}\\left(\\frac{1}{C}\\cdot\\frac{E_s}{N_0}\\right) \\\n&= 10\\cdot\\log_{10}\\left(\\frac{1}{C}\\right) + 10\\cdot\\log_{10}\\left(\\frac{E_s}{N_0}\\right) \\\n&= \\frac{E_s}{N_0}\\bigg|{\\textrm{dB}} - 10\\cdot\\log{10}(C)\n\\end{align}", "fig = plt.figure(1,figsize=(15,7))\nplt.subplot(121)\nplt.plot(esno_dB_range, capacity_AWGN)\nplt.plot(esno_dB_range, capacity_BIAWGN)\nplt.xlim((-10,10))\nplt.ylim((0,2))\nplt.xlabel('$E_s/N_0$ (dB)',fontsize=16)\nplt.ylabel('Capacity (bit/channel use)',fontsize=16)\nplt.grid(True)\nplt.legend(['AWGN','BI-AWGN'],fontsize=14)\n\n\n# plot Eb/N0 . 
Note that in this case, the rate that is used for calculating Eb/N0 is the capcity\n# Eb/N0 = 1/r (Es/N0)\nplt.subplot(122)\nplt.plot(esno_dB_range - 10*np.log10(capacity_AWGN), capacity_AWGN)\nplt.plot(esno_dB_range - 10*np.log10(capacity_BIAWGN), capacity_BIAWGN)\nplt.xlim((-2,10))\nplt.ylim((0,2))\nplt.xlabel('$E_b/N_0$ (dB)',fontsize=16)\nplt.ylabel('Capacity (bit/channel use)',fontsize=16)\nplt.grid(True)\n\nfrom scipy.stats import norm\n# first compute the BSC error probability\n# the Q function (1-CDF) is also often called survival function (sf)\ndelta_range = [norm.sf(1/sigman) for sigman in sigman_range]\n\ncapacity_BIAWGN_hard = [1+delta*np.log2(delta)+(1-delta)*np.log2(1-delta) for delta in delta_range]\n\n\nfig = plt.figure(1,figsize=(15,7))\nplt.subplot(121)\nplt.plot(esno_dB_range, capacity_AWGN)\nplt.plot(esno_dB_range, capacity_BIAWGN)\nplt.plot(esno_dB_range, capacity_BIAWGN_hard)\nplt.xlim((-10,10))\nplt.ylim((0,2))\nplt.xlabel('$E_s/N_0$ (dB)',fontsize=16)\nplt.ylabel('Capacity (bit/channel use)',fontsize=16)\nplt.grid(True)\nplt.legend(['AWGN','BI-AWGN', 'Hard BI-AWGN'],fontsize=14)\n\n# plot Eb/N0 . Note that in this case, the rate that is used for calculating Eb/N0 is the capcity\n# Eb/N0 = 1/r (Es/N0)\nplt.subplot(122)\nplt.plot(esno_dB_range - 10*np.log10(capacity_AWGN), capacity_AWGN)\nplt.plot(esno_dB_range - 10*np.log10(capacity_BIAWGN), capacity_BIAWGN)\nplt.plot(esno_dB_range - 10*np.log10(capacity_BIAWGN_hard), capacity_BIAWGN_hard)\n\nplt.xlim((-2,10))\nplt.ylim((0,2))\nplt.xlabel('$E_b/N_0$ (dB)',fontsize=16)\nplt.ylabel('Capacity (bit/channel use)',fontsize=16)\nplt.grid(True)\n\nW = 4\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
qutip/qutip-notebooks
examples/piqs-spin-squeezing-noise.ipynb
lgpl-3.0
[ "Spin Squeezing in presence of local and collective noise\nNotebook author: Nathan Shammah (nathan.shammah at gmail.com)\nHere we study the effect of collective and local processes on a spin squeezing Hamiltonian. \nWe consider a system of $N$ two-level systems (TLSs) with identical frequency $\\omega_{0}$, which can de-excite incoherently or collectively at the rates $\\gamma_\\text{E}$ and $\\gamma_\\text{CE}$,\n\\begin{eqnarray}\n\\dot{\\rho} &=&-i\\lbrack -i\\Lambda\\left(J_{+}^2-J_{-}^2\\right),\\rho \\rbrack\n+\\frac{\\gamma_\\text {CE}}{2}\\mathcal{L}{J{-}}[\\rho]\n+\\frac{\\gamma_\\text{E}}{2}\\sum_{n=1}^{N}\\mathcal{L}{J{-,n}}[\\rho]\n\\end{eqnarray}\nWe study the time evolution of the spin squeezing parameter [1-4]\n\\begin{eqnarray}\n\\xi^2 &=& N\\langle\\Delta J_y^2\\rangle/\\left(\\langle J_z\\rangle^2+\\langle J_x\\rangle^2\\right)\n\\end{eqnarray}\nWe assess how different dynamical conditions and initial states can be explored to optimize the spin squeezing of a given Dicke state [5-7]. This study can be generalized to other types of local and collective incoherent processes. A table grouping this processes is given below, \n<table>\n <tr>\n<td> Keyword</td>\n<td> Rate $\\gamma_j$</td>\n<td> Lindbladian $\\mathcal{L}[\\rho]$</td>\n</tr>\n\n<tr>\n<td> $\\texttt{emission}$ </td>\n<td> $\\gamma_\\text{E}$</td>\n<td> \\begin{eqnarray}\\mathcal{L}[\\rho]&=&\\sum_n^N \\left(J_{-,n}\\rho J_{+,n} - \\frac{1}{2}J_{+,n}J_{-,n}\\rho - \\frac{1}{2}\\rho J_{+,n}J_{-,n} \\right)\\end{eqnarray}</td>\n</tr>\n\n<tr>\n<td> $\\texttt{pumping}$ </td>\n<td> $\\gamma_\\text{P}$</td>\n<td> \\begin{eqnarray}\\mathcal{L}[\\rho]&=&\\sum_n^N \\left(J_{+,n}\\rho J_{-,n} - \\frac{1}{2}J_{-,n}J_{+,n}\\rho - \\frac{1}{2}\\rho J_{-,n}J_{+,n} \\right)\\end{eqnarray}</td>\n</tr>\n\n<tr>\n<td> $\\texttt{dephasing}$ </td>\n<td> $\\gamma_\\text{D}$</td>\n<td> \\begin{eqnarray}\\mathcal{L}[\\rho]&=&\\sum_n^N \\left(J_{z,n}\\rho J_{z,n} - \\frac{1}{2}J_{z,n}J_{z,n}\\rho - \\frac{1}{2}\\rho J_{z,n}J_{z,n} \\right)\\end{eqnarray}</td>\n</tr>\n\n<tr>\n<td> $\\texttt{collective}\\_\\texttt{emission}$ </td>\n<td> $\\gamma_\\text{CE}$</td>\n<td> \\begin{eqnarray}\\mathcal{L}[\\rho]&=& J_{-}\\rho J_{+} - \\frac{1}{2}J_{+}J_{-}\\rho - \\frac{1}{2}\\rho J_{+}J_{-} \\end{eqnarray}</td>\n</tr>\n\n<tr>\n<td> $\\texttt{collective}\\_\\texttt{pumping}$ </td>\n<td> $\\gamma_\\text{CP}$</td>\n<td> \\begin{eqnarray}\\mathcal{L}[\\rho]&=& J_{+}\\rho J_{-} - \\frac{1}{2}J_{-}J_{+}\\rho - \\frac{1}{2}\\rho J_{-}J_{+} \\end{eqnarray}</td>\n</tr>\n\n<tr>\n<td> $\\texttt{collective}\\_\\texttt{dephasing}$ </td>\n<td> $\\gamma_\\text{CD}$</td>\n<td> \\begin{eqnarray}\\mathcal{L}[\\rho]&=& J_{z}\\rho J_{z} - \\frac{1}{2}J_{z}^2\\rho - \\frac{1}{2}\\rho J_{z}^2 \\end{eqnarray}</td>\n</tr>\n\n</table>\n\nNote that in the table above and in $\\texttt{qutip.piqs}$ functions, the Lindbladian $\\mathcal{L}[\\rho]$ is written with a factor 1/2 with respect to $\\mathcal{L}{A}[\\rho]$ reported in the LaTeX math equations, in order to have the Lindbladian and full Liouvillian matrix consistently defined by the rates $\\gamma\\alpha$. 
\nNote also that the local depolarizing channel can be written in terms of this Lindbladians as\n\\begin{eqnarray}\n\\gamma_{Dep}\\sum_n^N\\left(\\mathcal{L}{J{x,n}}+\\mathcal{L}{J{y,n}}+\\mathcal{L}{J{z,n}}\\right)=\\gamma_{Dep}\\sum_n^N\\left(\\frac{1}{2}\\mathcal{L}{J{+,n}}+\\frac{1}{2}\\mathcal{L}{J{-,n}}+ \\mathcal{L}{J{z,n}}\\right).\n\\end{eqnarray}\nSimilarly, the collective depolarizing channel reads\n\\begin{eqnarray}\n\\gamma_\\text{CDep}\\left(\\mathcal{L}{J{x}}+\\mathcal{L}{J{y}}+\\mathcal{L}{J{z}}\\right)=\\gamma_\\text{CDep}\\left(\n\\frac{1}{2}\\mathcal{L}{J{+}}+\\frac{1}{2}\\mathcal{L}{J{-}}+ \\mathcal{L}{J{z}}\\right).\n\\end{eqnarray}", "from time import clock\nfrom scipy.io import mmwrite\nimport matplotlib.pyplot as plt\nfrom qutip import *\nfrom qutip.piqs import *\nfrom scipy.sparse import load_npz, save_npz\n\ndef isdicke(N, j, m):\n \"\"\"\n Check if an element in a matrix is a valid element in the Dicke space.\n Dicke row: j value index. Dicke column: m value index. \n The function returns True if the element exists in the Dicke space and\n False otherwise.\n\n Parameters\n ----------\n N : int\n The number of two-level systems. \n j: float\n \"j\" index of the element in Dicke space which needs to be checked.\n m: float\n \"m\" index of the element in Dicke space which needs to be checked.\n \"\"\"\n dicke_row = j\n dicke_col = m\n \n rows = N + 1\n cols = 0\n\n if (N % 2) == 0:\n cols = int(N/2 + 1)\n else:\n cols = int(N/2 + 1/2)\n\n if (dicke_row > rows) or (dicke_row < 0):\n return (False)\n\n if (dicke_col > cols) or (dicke_col < 0):\n return (False)\n\n if (dicke_row < int(rows/2)) and (dicke_col > dicke_row):\n return False\n\n if (dicke_row >= int(rows/2)) and (rows - dicke_row <= dicke_col):\n return False\n\n else:\n return True\n\ndef dicke_space(N):\n \"\"\"\n Generate a matrix to visualize the Dicke space.\n j is on the horizontal axis, increasing right to left.\n m is on the vertical axis, increasing bottom to top.\n It puts 1 in all allowed (j,m) values.\n It puts 0 in all not-allowed (j,m) values.\n Parameters\n ----------\n N : int\n The number of two-level systems.\n Returns\n ----------\n dicke_space : ndarray\n The matrix of all allowed (j,m) pairs.\n\n \"\"\" \n rows = N + 1\n cols = 0\n\n if (rows % 2) == 0:\n cols = int((rows/2))\n\n else:\n cols = int((rows + 1)/2)\n\n dicke_space = np.zeros((rows, cols), dtype = int)\n\n for (i, j) in np.ndindex(rows, cols):\n dicke_space[i, j] = isdicke(N, i, j)\n\n return (dicke_space)\n\n## general parameters\nN = 20\nntls = N\nnds = num_dicke_states(N)\n[jx, jy, jz] = jspin(N)\njp = jspin(N, \"+\")\njm = jspin(N, \"-\")\njpjm = jp*jm\n\nLambda = 1\nfactor_l = 5\n\n#spin hamiltonian\nh = -1j*Lambda * (jp**2-jm**2)\ngCE = Lambda/factor_l\ngE = Lambda/factor_l\n\n# system with collective emission only\nsystem = Dicke(N=N)\n# system2 with local emission only\nsystem2 = Dicke(N=N)\nsystem.collective_emission = gCE\nsystem2.emission = gE\nsystem.hamiltonian = h\nsystem2.hamiltonian = h\nliouv = system.liouvillian() \nliouv2 = system2.liouvillian()\n\nprint(system)\nprint(system2)", "Time evolution of Spin Squuezing Parameter $\\xi^2= \\frac{N \\langle\\Delta J_y^2\\rangle}{\\langle J_z\\rangle^2}$", "#set initial state for spins (Dicke basis)\nnt = 1001\ntd0 = 1/(N*Lambda)\ntmax = 10 * td0\nt = np.linspace(0, tmax, nt)\nexcited = dicke(N, N/2, N/2)\nload_file = False\nif load_file == False:\n # cycle over all states in Dicke space\n xi2_1_list = []\n xi2_2_list = []\n xi2_1_min_list = []\n xi2_2_min_list = 
[]\n\n for j in j_vals(N):\n #for m in m_vals(j):\n m = j\n rho0 = dicke(N, j, m)\n #solve using qutip (Dicke basis)\n # Dissipative dynamics: Only collective emission \n result = mesolve(liouv, rho0, t, [], \n e_ops = [jz, jy, jy**2,jz**2, jx],\n options = Options(store_states=True))\n rhot = result.states\n jz_t = result.expect[0]\n jy_t = result.expect[1]\n jy2_t = result.expect[2]\n jz2_t = result.expect[3]\n jx_t = result.expect[4]\n Delta_jy = jy2_t - jy_t**2\n xi2_1 = N * Delta_jy / (jz_t**2+jx_t**2)\n # Dissipative dynamics: Only local emission \n result2 = mesolve(liouv2, rho0, t, [], \n e_ops = [jz, jy, jy**2,jz**2, jx],\n options = Options(store_states=True))\n rhot2 = result2.states\n jz_t2 = result2.expect[0]\n jy_t2 = result2.expect[1]\n jy2_t2 = result2.expect[2]\n jz2_t2 = result2.expect[3]\n jx_t2 = result2.expect[4]\n Delta_jy2 = jy2_t2 - jy_t2**2\n xi2_2 = N * Delta_jy2 / (jz_t2**2+jx_t2**2)\n\n xi2_1_min = np.min(xi2_1)\n xi2_2_min = np.min(xi2_2) \n xi2_1_list.append(xi2_1)\n xi2_2_list.append(xi2_2)\n xi2_1_min_list.append(xi2_1_min)\n xi2_2_min_list.append(xi2_2_min) \n\n print(\"|j, m> = \",j,m)", "Visualization", "label_size2 = 20\nlw = 3\ntexplot = False\n# if texplot == True:\n# plt.rc('text', usetex = True)\n# plt.rc('xtick', labelsize=label_size) \n# plt.rc('ytick', labelsize=label_size)\n\nfig1 = plt.figure(figsize = (10,6))\nfor xi2_1 in xi2_1_list:\n plt.plot(t*(N*Lambda), xi2_1, '-', label = r' $\\gamma_\\Downarrow=0.2$', linewidth = lw)\nfor xi2_2 in xi2_2_list:\n plt.plot(t*(N*Lambda), xi2_2, '-.', label = r'$\\gamma_\\downarrow=0.2$')\nplt.plot(t*(N*Lambda), 1+0*t, '--k')\nplt.xlim([0,3])\nplt.ylim([0,8000.5])\nplt.ylim([0,2.5])\nplt.xlabel(r'$ N \\Lambda t$', fontsize = label_size2)\nplt.ylabel(r'$\\xi^2$', fontsize = label_size2)\n#plt.legend(fontsize = label_size2*0.8)\nplt.title(r'Spin Squeezing Parameter, $N={}$'.format(N), fontsize = label_size2)\nplt.show()\nplt.close()\n\n## Here we find for how long the spin-squeezing parameter, xi2, \n## is less than 1 (non-classical or \"quantum\" condition), in the two dynamics\n\ndt_quantum_xi1_list = []\ndt_quantum_xi2_list = []\n\ndt1_jm =[]\ndt2_jm =[]\nds = dicke_space(N)\ni = 0\nfor j in j_vals(N):\n #for m in m_vals(j):\n m = j\n rho0 = dicke(N, j, m)\n quantum_xi1 = xi2_1_list[i][xi2_1_list[i] < 1.0] \n quantum_xi2 = xi2_2_list[i][xi2_2_list[i] < 1.0]\n\n # first ensemble\n if len(quantum_xi1)>0:\n dt_quantum_xi1 = len(quantum_xi1)\n dt1_jm.append((dt_quantum_xi1, j, m))\n\n else:\n dt_quantum_xi1 = 0.0\n\n # second ensemble\n if len(quantum_xi2)>0:\n dt_quantum_xi2 = len(quantum_xi2)\n dt2_jm.append((dt_quantum_xi2, j, m))\n else:\n dt_quantum_xi2 = 0.0\n\n dt_quantum_xi1_list.append(dt_quantum_xi1)\n dt_quantum_xi2_list.append(dt_quantum_xi2)\n\n i = i+1\n\nprint(\"collective emission: (squeezing time, j, m)\")\nprint(dt1_jm)\nprint(\"local emission: (squeezing time, j, m)\")\nprint(dt2_jm)", "Visualization", "plt.rc('text', usetex = True)\nlabel_size = 20\nlabel_size2 = 20\nlabel_size3 = 20\nplt.rc('xtick', labelsize=label_size) \nplt.rc('ytick', labelsize=label_size)\n\nlw = 3\ni0 = -3\ni0s=2\nfig1 = plt.figure(figsize = (8,5))\n# excited state spin squeezing\nplt.plot(t*(N*Lambda), xi2_1_list[-1], 'k-', \n label = r'$|\\frac{N}{2},\\frac{N}{2}\\rangle$, $\\gamma_\\Downarrow=0.2\\Lambda$', \n linewidth = 0.8)\nplt.plot(t*(N*Lambda), xi2_2_list[-1], 'r--',\n label = r'$|\\frac{N}{2},\\frac{N}{2}\\rangle$, $\\gamma_\\downarrow=0.2\\Lambda$',\n linewidth = 0.8)\n# state with max time of spin 
squeezing\n\nplt.plot(t*(N*Lambda), xi2_1_list[i0], 'k-', \n label = r'$|j,j\\rangle$, $\\gamma_\\Downarrow=0.2\\Lambda$', \n linewidth = 0.8+0.4*i0s*lw)\nplt.plot(t*(N*Lambda), xi2_2_list[i0], 'r--',\n label = r'$|j,j\\rangle$, $\\gamma_\\downarrow=0.2\\Lambda$',\n linewidth = 0.8+0.4*i0s*lw)\nplt.plot(t*(N*Lambda), 1+0*t, '--k')\n\nplt.xlim([0,2.5])\nplt.yticks([0,1,2])\nplt.ylim([-1,2.])\n\nplt.xlabel(r'$ N \\Lambda t$', fontsize = label_size3)\nplt.ylabel(r'$\\xi^2$', fontsize = label_size3)\nplt.legend(fontsize = label_size2*0.8, ncol=2)\nfname = 'figures/spin_squeezing_N_{}_states.pdf'.format(N)\nplt.title(r'Spin Squeezing Parameter, $N={}$'.format(N), fontsize = label_size2)\nplt.show()\nplt.close()", "The plot shows the spin squeezing parameter for two different dynamics -- only collective de-excitation, black curves; only local de-excitation, red curves -- and for two different inital states, the maximally excited state (thin curves) and another Dicke state with longer squeezing time (thick curves). This study, performed in Refs. [5,6] for the maximally excited state has been extended to any Dicke state in Ref. [7].", "# plot the dt matrix in the Dicke space\nplt.rc('text', usetex = True)\nlabel_size = 20\nlabel_size2 = 20\nlabel_size3 = 20\nplt.rc('xtick', labelsize=label_size) \nplt.rc('ytick', labelsize=label_size)\n\nlw = 3\ni0 = 7\ni0s=2\nratio_squeezing_local = 3\nfig1 = plt.figure(figsize = (6,8))\nds = dicke_space(N)\nvalue_excited = 3\nds[0,0]=value_excited\nds[int(N/2-i0),int(N/2-i0)]=value_excited * ratio_squeezing_local\nplt.imshow(ds, cmap=\"inferno_r\")\nplt.xticks([])\nplt.yticks([])\nplt.xlabel(r\"$j$\", fontsize = label_size3)\nplt.ylabel(r\"$m$\", fontsize = label_size3)\nplt.title(r\"Dicke space $(j,m)$ for $N={}$\".format(N), fontsize = label_size3)\nplt.show()\nplt.close()", "The Plot above shows the two initial states (darker dots) $|\\frac{N}{2},\\frac{N}{2}\\rangle$ (top edge of the Dicke triangle, red dot) and $|j,j\\rangle$, with $j=\\frac{N}{2}-3=7$ (black dot). A study of the Dicke triangle (dark yellow space) and state engineering is performed in Ref. [8] for different initial state. \nReferences\n[1] D. J. Wineland, J. J. Bollinger, W. M. Itano, F. L. Moore, and D. J. Heinzen, Spin squeezing and reduced quantum noise in spectroscopy, Phys. Rev. A 46, R6797 (1992)\n[2] M. Kitagawa and M. Ueda, Squeezed spin states, Phys. Rev. A 47, 5138 (1993)\n[3] J. Ma, X. Wang, C.-P. Sun, and F. Nori, Quantum spin squeezing, Physics Reports 509, 89 (2011)\n[4] L. Pezzè, A. Smerzi, M. K. Oberthaler, R. Schmied, and P. Treutlein, Quantum metrology with nonclassical states of atomic ensembles, Reviews of Modern Physics, in press (2018)\n[5] B. A. Chase and J. Geremia, Collective processes of an ensemble of spin-1 particles, Phys. Rev. A 78,0521012 (2008)\n[6] B. Q. Baragiola, B. A. Chase, and J. Geremia, Collective uncertainty in partially polarized and partially deco- hered spin-1 systems, Phys. Rev. A 81, 032104 (2010)\n[7] N. Shammah, S. Ahmed, N. Lambert, S. De Liberato, and F. Nori, \nOpen quantum systems with local and collective incoherent processes: Efficient numerical simulation using permutational invariance https://arxiv.org/abs/1805.05129\n[8] N. Shammah, N. Lambert, F. Nori, and S. De Liberato, Superradiance with local phase-breaking effects, Phys. Rev. A 96, 023863 (2017).", "qutip.about()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hershaw/data-science-101
course/class2/01-clean/examples/00-kill.ipynb
mit
[ "01-kill\nThis notebook presents how to eliminate the diagnosed outliers.\nSome inital imports:", "import pandas as pd\nimport numpy as np\n% matplotlib inline\nfrom matplotlib import pyplot as plt ", "Load the dataset that will be used", "data = pd.read_csv('../../data/all_data.csv', index_col=0)\nprint('Our dataset has %d columns (features) and %d rows (people).' % (data.shape[1], data.shape[0]))\ndata.head(15)", "Let us drop the missing and duplicated values since they don't matter for now (already covered in other notebooks):", "data = data.drop_duplicates()\ndata = data.dropna()\nprint('Our dataset has %d columns (features) and %d rows (people).' % (data.shape[1], data.shape[0]))", "Dealing with outliers\nTime to deal with the issues previously found.\n1) Delete observations - use feature bounds\nThe easiest way is to delete the observations (for when you know the bounds of your features). Let's use Age, since know the limits. Set the limits:", "min_age = 0\nmax_age = 117 # oldest person currently alive", "Create the mask:", "mask_age = (data['age'] >= min_age) & (data['age'] <= max_age)\nmask_age.head(10)", "Check if some outliers were caught:", "data[~mask_age]", "Yes! Two were found! The mask_age variable contains the rows we want to keep, i.e., the rows that meet the bounds above. So, lets drop the above 2 rows:", "data = data[mask_age]\nprint('Our dataset has %d columns (features) and %d rows (people).' % (data.shape[1], data.shape[0]))", "2) Create classes/bins\nInstead of having a range of values you can discretize in classes/bins. Make use of pandas' qcut: Discretize variable into equal-sized buckets.", "data['height'].hist(bins=100)\nplt.title('Height population distribution')\nplt.xlabel('cm')\nplt.ylabel('freq')", "Discretize!", "height_bins = pd.qcut(data['height'], \n 5, \n labels=['very short', 'short', 'average', 'tall', 'very tall'], \n retbins=True)\n\nheight_bins[0].head(10)", "The limits of the defined classes/bins are:", "height_bins[1]", "We could replace the height values by the new five categories. Nevertheless, looks like a person with 252 cm is actually an outlier and the best would be to evaluate this value against two-standard deviations or percentile (e.g., 99%). \nLets define the height bounds according to two-standard deviations from the mean.\n3) Delete observations - use the standard deviation", "# Calculate the mean and standard deviation\nstd_height = data['height'].std()\nmean_height = data['height'].mean()\n# The mask!\nmask_height = (data['height'] > mean_height-2*std_height) & (data['height'] < mean_height+2*std_height)\nprint('Height bounds:')\nprint('> Minimum accepted height: %3.1f' % (mean_height-2*std_height))\nprint('> Maximum accepted height: %3.1f' % (mean_height+2*std_height))", "Which ones are out of the bounds?", "data.loc[~mask_height]", "Let's delete these rows (mask_height contains the rows we want to keep)", "data = data[mask_height]\nprint('Our dataset has %d columns (features) and %d rows (people).' % (data.shape[1], data.shape[0]))", "Done! So, our initial dataset with 200 rows (173 rows after dropping duplicates and missing values), ended up with 166 rows after this data handling." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Yu-Group/scikit-learn-sandbox
jupyter/backup_deprecated_nbs/06_explore_binary_decision_tree.ipynb
mit
[ "Key Requirements for the iRF scikit-learn implementation\n\nThe following is a documentation of the main requirements for the iRF implementation\n\nPseudocode iRF implementation\nStep 0: Setup\n\nImport required libraries and set up the seed value for reproducibility\nKeep all custom functions in utils/utils.py\n\nInputs:\n* D = {($X_{i}$, $Y_{i}$), $X_{i} \\in \\mathbb{R}$, $Y_{i} \\in \\left {0, 1 \\right }$\np\n, Y i ∈ {0, 1}},C ∈ {0, 1}, B, K", "# Setup\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom sklearn.datasets import load_iris\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.datasets import load_iris\nfrom sklearn import tree\nimport numpy as np\n\n# Define a function to draw the decision trees in IPython\n# Adapted from: http://scikit-learn.org/stable/modules/tree.html\nfrom IPython.display import display, Image\nimport pydotplus\n\n# Custom util functions\nfrom utils import utils\n\n# Set seed for reproducibility\nnp.random.seed(1015)", "Step 1: Fit the Initial Random Forest\n\nJust fit every feature with equal weights per the usual random forest code e.g. DecisionForestClassifier in scikit-learn", "# Load the iris data\niris = load_iris()\n\n# Create the train-test datasets\nX_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target)\n\nnp.random.seed(1039)\n\n# Just fit a simple random forest classifier with 2 decision trees\nrf = RandomForestClassifier(n_estimators = 2)\nrf.fit(X = X_train, y = y_train)\n\n# Now plot the trees individually\nfor idx, dtree in enumerate(rf.estimators_):\n print(idx)\n utils.draw_tree(inp_tree = dtree)\n #utils.draw_tree(inp_tree = rf.estimators_[1])", "Step 2: Get the Gini Importance of Weights\n\nFor the first random forest we just need to get the Gini Importance of Weights\n\nStep 2.1 Get them numerically - most important", "importances = rf.feature_importances_\nstd = np.std([dtree.feature_importances_ for dtree in rf.estimators_]\n , axis=0)\nindices = np.argsort(importances)[::-1]\n\n# Check that the feature importances are standardized to 1\nprint(sum(importances))", "Step 2.2 Display Feature Importances Graphically (just for interest)", "# Print the feature ranking\nprint(\"Feature ranking:\")\n\nfor f in range(X_train.shape[1]):\n print(\"%d. feature %d (%f)\" % (f + 1, indices[f], importances[indices[f]]))\n\n# Plot the feature importances of the forest\nplt.figure()\nplt.title(\"Feature importances\")\nplt.bar(range(X_train.shape[1]), importances[indices],\n color=\"r\", yerr=std[indices], align=\"center\")\nplt.xticks(range(X_train.shape[1]), indices)\nplt.xlim([-1, X_train.shape[1]])\nplt.show()", "Step 3: For each Tree get core leaf node features\n\nFor each decision tree in the classifier, get:\nThe list of leaf nodes\nDepth of the leaf node \nLeaf node predicted class i.e. {0, 1}\nProbability of predicting class in leaf node\nNumber of observations in the leaf node i.e. 
weight of node\n\n\n\nName the Features", "feature_names = [\"X\" + str(i) for i in range(X_train.shape[1])]\ntarget_vals = list(np.sort(np.unique(y_train)))\ntarget_names = [\"y\" + str(i) for i in target_vals]\nprint(feature_names)\nprint(target_names)", "Get the second Decision tree to use for testing", "estimator = rf.estimators_[1]\n\nfrom sklearn.tree import _tree\nestimator.tree_.node_count\nestimator.tree_.children_left[0]\nestimator.tree_.children_right[0]\n_tree.TREE_LEAF", "Write down an efficient Binary Tree Traversal Function", "# Now plot the trees individually\nutils.draw_tree(inp_tree = estimator)\n\ndef binaryTreePaths(dtree, root_node_id = 0):\n\n # Use these lists to parse the tree structure\n children_left = dtree.tree_.children_left\n children_right = dtree.tree_.children_right\n \n if root_node_id is None: \n paths = []\n \n if root_node_id == _tree.TREE_LEAF:\n raise ValueError(\"Invalid node_id %s\" % _tree.TREE_LEAF)\n \n # if left/right is None we'll get empty list anyway\n if children_left[root_node_id] != _tree.TREE_LEAF:\n paths = [str(root_node_id) + '->' + str(l)\n for l in binaryTreePaths(dtree, children_left[root_node_id]) + \n binaryTreePaths(dtree, children_right[root_node_id])]\n else:\n paths = [root_node_id]\n \n return paths\n\nx1 = binaryTreePaths(rf.estimators_[1], root_node_id = 0)\nx1\n\ndef binaryTreePaths2(dtree, root_node_id = 0):\n\n # Use these lists to parse the tree structure\n children_left = dtree.tree_.children_left\n children_right = dtree.tree_.children_right\n\n if root_node_id is None: \n paths = []\n \n if root_node_id == _tree.TREE_LEAF:\n raise ValueError(\"Invalid node_id %s\" % _tree.TREE_LEAF)\n \n # if left/right is None we'll get empty list anyway\n if children_left[root_node_id] != _tree.TREE_LEAF:\n paths = [np.append(root_node_id, l)\n for l in binaryTreePaths2(dtree, children_left[root_node_id]) + \n binaryTreePaths2(dtree, children_right[root_node_id])]\n\n else:\n paths = [root_node_id]\n \n return paths\n\nx = binaryTreePaths2(rf.estimators_[1], root_node_id = 0)\nx\n\nleaf_nodes = [y[-1] for y in x]\nleaf_nodes\n\nn_node_samples = estimator.tree_.n_node_samples\nnum_samples = [n_node_samples[y].astype(int) for y in leaf_nodes]\nprint(n_node_samples)\nprint(len(n_node_samples))\nnum_samples\nprint(num_samples)\nprint(sum(num_samples))\nprint(sum(n_node_samples))\n\nX_train.shape\n\nvalue = estimator.tree_.value\nvalues = [value[y].astype(int) for y in leaf_nodes]\nprint(values)\n# This should match the number of rows in the training feature set\nprint(sum(values).sum())\nvalues\n\nfeature_names = [\"X\" + str(i) for i in range(X_train.shape[1])]\nnp.asarray(feature_names)\nprint(type(feature_names))\n\nprint(feature_names[0])\nprint(feature_names[-2])\n\n\nfeature = estimator.tree_.feature\nz = [feature[y].astype(int) for y in x]\nz\n#[feature_names[i] for i in z]\n\nmax_dpth = estimator.tree_.max_depth\nmax_dpth\n\nmax_n_class = estimator.tree_.max_n_classes\nmax_n_class\n\nprint(\"nodes\", np.asarray(a = nodes, dtype = \"int64\"), sep = \":\\n\")\nprint(\"node_depth\", node_depth, sep = \":\\n\")\nprint(\"leaf_node\", is_leaves, sep = \":\\n\")\nprint(\"feature_names\", used_feature_names, sep = \":\\n\")\nprint(\"feature\", feature, sep = \":\\n\")", "Step 4: For each tree get the paths to the leaf node from root node\n\nFor each decision tree in the classifier, get:\nFull path sequence to all leaf nodes i.e. SEQUENCE of all features that led to a leaf node\nPath to all leaf nodes i.e. 
SET of all features that led to a leaf node i.e. remove duplicate features\nGet the node_ids and the feature_ids at each node_id\nGet the feature SET associated with each node along a path\n\n\n\nGet the tree paths\n\nThe following code is taken from:\nhttps://github.com/andosa/treeinterpreter/blob/master/treeinterpreter/treeinterpreter.py#L12-L33\n\nStep 5: Refit the Random Forest but change the feature weighting procedure\n\nWe need to be able to refit the Random Forests (after the first one by re-weighting the weights)\nThe key parts of the code that need to be modified are:\nhttps://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tree/_splitter.pyx#L401-L402\nhttps://github.com/scikit-learn/scikit-learn/blob/master/sklearn/tree/_utils.pyx#L72-L75\n\n\n\nStep 6: Use Export Graphviz code to obtain the node paths\n\nhttps://github.com/scikit-learn/scikit-learn/blob/14031f6/sklearn/tree/export.py#L165-L167\n\nDiscussion with Karl\n\nCheck the RIT procedure\nCheck any additional data values required\nOOB is tricky - the warm start aspect means that we can't easily paralellize this. Also tracking is difficult\nGo through the general structure of the code" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
karlbenedict/karlbenedict.github.io
presentations/2014-04-CI-day/examples/notebook_02-Copy1.ipynb
mit
[ "Network Data Access - USGS NWIS Service-based Data Access\nKarl Benedict\nDirector, Earth Data Analysis Center\nAssociate Professor, University Libraries\nUniversity of New Mexico\nkbene@unm.edu\nAn Analysis\nThis analysis demonstrates searching for datasets that meet a set of specified conditions, accessing via advertised services, processing and plotting the data from the service.\nService Documentation: http://waterservices.usgs.gov/rest/IV-Service.html\nEnable the needed python libraries", "import urllib\nimport zipfile\nimport StringIO\nimport string\nimport pandas\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom IPython.display import HTML\nimport json", "Set some initial variables", "county_name = \"\"\nstart_date = \"20140101\"\nend_date = \"20150101\"\ndiag = False", "Options", "## Retrieve the bounding box of the specified county - if no county is specified, the bounding boxes for all NM counties will be requested\ncountyBBOXlink = \"http://gstore.unm.edu/apps/epscor/search/nm_counties.json?limit=100&query=\" + county_name ## define the request URL\nprint countyBBOXlink ## print the request URL for verification\nprint\nbboxFile = urllib.urlopen(countyBBOXlink) ## request the bounding box information from the server\nbboxData = json.load(bboxFile)\n# print bboxData\n\n# Get data for BBOX defined by specified county(ies)\nmyCounties = []\nfor countyBBOX in bboxData[\"results\"]:\n minx,miny,maxx,maxy = countyBBOX[u'box']\n myDownloadLink = \"http://waterservices.usgs.gov/nwis/iv/?bBox=%f,%f,%f,%f&format=json&period=P7D&parameterCd=00060\" % (minx,miny,maxx,maxy) # retrieve data for the specified BBOX for the last 7 days as JSON\n print myDownloadLink\n myCounty = {u'name':countyBBOX[u'text'],u'minx':minx,u'miny':miny,u'maxx':maxx,u'maxy':maxy,u'downloadLink':myDownloadLink}\n myCounties.append(myCounty)\n\n\n#countySubset = [myCounties[0]]\n#print countySubset\n\nvalueList = []\n\nfor county in myCounties:\n print \"processing: %s\" % county[\"downloadLink\"]\n try:\n datafile = urllib.urlopen(county[\"downloadLink\"])\n data = json.load(datafile)\n values = data[\"value\"][\"timeSeries\"][0][\"values\"]\n for item in values:\n for valueItem in item[\"value\"]:\n #print json.dumps(item[\"value\"], sort_keys=True, indent=4)\n myValue = {\"dateTime\":valueItem[\"dateTime\"].replace(\"T\",\" \").replace(\".000-06:00\",\"\"),\"value\":valueItem[\"value\"], \"county\":county[\"name\"]}\n #print myValue\n valueList.append(myValue)\n #print valueList\n except:\n print \"\\tfailed for this one ...\"\n \n #print json.dumps(values, sort_keys=True, indent=4)\n\ndf = pandas.DataFrame(valueList)\n\ndf['dateTime'] = pandas.to_datetime(df[\"dateTime\"])\ndf['value'] = df['value'].astype(float).fillna(-1)\n\nprint df.shape\nprint df.dtypes\nprint \"column names\"\nprint \"------------\"\nfor colName in df.columns:\n print colName\nprint\nprint df.head()\n\n%matplotlib inline\nfig,ax = plt.subplots(figsize=(10,8))\nax.width = 1\nax.height = .5\nplt.xkcd()\n#plt.ylim(-25,30)\nax.plot_date(df['dateTime'], df['value'], '.', label=\"Discharge (cf/sec)\", color=\"0.2\")\nfig.autofmt_xdate()\nplt.legend(loc=2, bbox_to_anchor=(1.0,1))\nplt.title(\"15-minute Discharge - cubic feet per second\")\nplt.ylabel(\"Discharge\")\nplt.xlabel(\"Date\")\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
PythonFreeCourse/Notebooks
week02/6_Documentation.ipynb
mit
[ "<img src=\"images/logo.jpg\" style=\"display: block; margin-left: auto; margin-right: auto;\" alt=\"לוגו של מיזם לימוד הפייתון. נחש מצויר בצבעי צהוב וכחול, הנע בין האותיות של שם הקורס: לומדים פייתון. הסלוגן המופיע מעל לשם הקורס הוא מיזם חינמי ללימוד תכנות בעברית.\">\n<p style=\"text-align: right; direction: rtl; float: right;\">תיעוד</p>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">הגדרה</p>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nבעולם התכנות, בניגוד לשיעורי היסטוריה בתיכון, אנחנו שמים דגש על היכולות השונות של המתכנת ופחות על הזיכרון שלו בנוגע לפרטים קטנים.<br>\nאומנם מרשים לראות מתכנת ששולט היטב בכל רזי שפת תכנות מסוימת, אבל בעולם האמיתי כולנו משתמשים באתרים שמטרתם לסייע לנו, בין אם להבין איך לכתוב את הקוד בצורה יעילה ובין אם כדי להיזכר בפונקציה או בפעולה ששכחנו.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nלרוב שפות התכנות או המערכות המוכרות שבהן תתעסקו, יש מעין \"מדריך למשתמש\" שנקרא <dfn>תיעוד</dfn>, או <dfn>דוקומנטציה</dfn>.<br>\nככל שהמוצר שאתם משתמשים בו בשל יותר ויש לו משתמשים רבים, כך תוכלו למצוא עבורו תיעוד מפורט וברור יותר.<br>\nתיעוד טוב הוא לעיתים רבות שיקול מכריע בהחלטה אם להשתמש בטכנולוגיה מסוימת או לא.<br>\nלא אחת אפילו מתכנתים מעולים ומנוסים נעזרים בתיעוד ובאמצעים מקבילים במשך עבודתם.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">תיעוד בפייתון</p>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nלפייתון יש <a href=\"https://docs.python.org/3/\">אתר תיעוד</a> מרשים ומלא במידע. רוב המידע המתועד באתר כתוב בצורה טובה ונהירה.<br>\nבאתר ישנה תיבת חיפוש, ואפילו <a href=\"https://docs.python.org/3/tutorial/index.html\">מדריך כניסה עבור מתחילים</a>, שמתבסס על ידע מסוים בתכנות.<br>\nרוב מתכנתי הפייתון משתמשים באתר התיעוד הזה כמדריך עזר, ופונים אליו כשהם צריכים פרטים נוספים בקשר לרעיון שכבר קיים אצלם בראש.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">רשימת פעולות</p>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nבשיעור הקודם למדנו שלכל סוג נתונים (כמו <i>str</i> או <i>int</i>) יש פעולות ששייכות לו.<br>\nעבור מחרוזות, לדוגמה, אנחנו מכירים פעולות כמו <code>str.count</code> ו־<code>str.replace</code>.<br>\nכדי לקבל את רשימת הפעולות עבור סוג נתונים מסוים נשתמש בפונקציה <code dir=\"ltr\" style=\"direction: ltr;\">dir()</code>:\n</p>", "dir(str)", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nהתוצאה היא רשימה של כל הפעולות שאפשר להפעיל על <i>str</i>.<br>\nבשלב הזה, אמליץ לכם להתעלם מפעולות ברשימה ששמן מתחיל בקו תחתון.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nטריק נוסף שכנראה נוח יותר, זמין בסביבות פיתוח רבות שיצא לכם לעבוד בהן.<br>\nהטריק הוא ציון סוג הנתון או המשתנה שאתם עובדים עליו, הסימן \"נקודה\" ואז לחיצה על <kbd dir=\"ltr\" style=\"direction: ltr\">↹ TAB</kbd>.\n</p>", "# מקמו את הסמן אחרי הנקודה, ואז לחצו על המקש \"טאב\" במקלדת\nstr.\n# ניתן גם כך:\n\"Hello\".\n# או כך:\ns = \"Hello\"\ns.", "<span style=\"text-align: right; direction: rtl; float: right; clear: both;\">תיעוד על פעולה או על פונקציה</span>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nבמקרה שנרצה לחפש פרטים נוספים על אחת הפונקציות או הפעולות (נניח <code>len</code>, או <code dir=\"ltr\" style=\"direction: ltr\">str.upper()</code>), התיעוד של פייתון הוא מקור מידע נהדר לכך.<br>\nאם אנחנו נמצאים בתוך המחברת, יש טריק נחמד לקבל חלק מהתיעוד הזה בצורה מהירה – פשוט נרשום 
בתא קוד את שם הפונקציה, ואחריו סימן שאלה:\n</p>", "len?", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nברגע שנריץ את התא, תקפוץ לנו חלונית עם מידע נוסף על הפונקציה.<br>\nאם אנחנו רוצים לקבל מידע על פעולה, נכתוב את סוג הערך שעליו אנחנו רוצים לבצע אותה (נניח, str):\n</p>", "# str - השם של טיפוס הנתונים (הסוג של הערך)\n# . - הנקודה היא סימון שהפעולה שכתבנו אחריה שייכת לסוג שכתבנו לפניה\n# upper - השם של הפעולה שעליה רוצים לקבל עזרה\n# ? - מבקש את המידע על הפעולה\nstr.upper?", "<div class=\"align-center\" style=\"display: flex; text-align: right; direction: rtl;\">\n <div style=\"display: flex; width: 10%; float: right; \">\n <img src=\"images/warning.png\" style=\"height: 50px !important;\" alt=\"אזהרה!\"> \n </div>\n <div style=\"width: 90%\">\n <p style=\"text-align: right; direction: rtl;\">\n קריאה לפונקציה, קרי הוספת התווים <code>()</code> לפני סימן השאלה, תפעיל את הפונקציה או הפעולה במקום לתת לכם עזרה.\n </p>\n </div>\n</div>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nבתוך חלונית העזרה שתיפתח במחברת נראה שורות המכילות פרטים מעניינים:\n</p>\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li><dfn>Signature</dfn> – חתימת הפעולה או הפונקציה, הכוללת את השם שלה ואת הפרמטרים שלה.</li>\n <li><dfn>Docstring</dfn> – כמה מילים שמתארות היטב מה הפונקציה עושה, ולעיתים נותנות מידע נוסף על הפרמטרים.</li>\n</ul>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nלעת עתה, נתעלם מהרכיבים <em>self</em>, <em>*</em> או <em>/</em> שיופיעו מדי פעם בשדה Signature.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">משאבי עזרה נוספים</p>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nעולם התכנות הוא אדיר בממדיו, וקיימים משאבים נהדרים שמטרתם לעזור למתכנת.<br>\nלפניכם כמה מהפופולריים שבהם:\n</p>\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li><a href=\"https://google.com\">Google</a> – חפשו היטב את השאלה שלכם ב־Google. מתכנת טוב עושה את זה פעמים רבות ביום. קרוב לוודאי שמישהו בעולם כבר נתקל בעבר בבעיה שלכם.</li>\n <li><a href=\"https://docs.python.org/3\">התיעוד של פייתון</a> – כולל הרבה מידע, ולעיתים דוגמאות מועילות.</li>\n <li><a href=\"https://stackoverflow.com\">Stack Overflow</a> – אחד האתרים הכי מוכרים בעולם הפיתוח, המכיל מערכת שאלות ותשובות עם דירוג בנוגע לכל מה שקשור בתכנות.</li>\n <li><a href=\"https://github.com\">GitHub</a> – אתר שבו אנשים מנהלים את הקוד שלהם ומשתפים אותו עם אחרים. 
יש בו שורת חיפוש, והוא מעולה לצורך מציאת דוגמאות לשימוש בקוד.</li>\n</ul>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">תרגול</p>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nבתרגול זה השתמשו במידת הצורך בתיעוד של פייתון כדי לגלות פעולות שלא למדנו עליהן.\n</p>\n\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">סידור רשימה</p>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nלפניכם רשימת המספרים הטבעיים מ־1 עד 10 בסדר מבולבל.<br>\nהאם תוכלו לסדר אותה בשורה אחת של קוד, ולהדפיס אותה בשורה אחת נוספת?<br>\nהפלט שיודפס על המסך צריך להיות: <samp dir=\"ltr\" style=\"direction: ltr;\">[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]</samp>.\n</p>", "numbers = [2, 9, 10, 8, 7, 4, 3, 5, 6, 1]", "<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">הספרייה של דיואי</p>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <a href=\"https://he.wikipedia.org/wiki/שיטת_דיואי\">שיטת דיואי</a> משמשת לחלוקת ספרים לתחומי תוכן.<br>\n כך, קטגוריה 000 המכילה ספרות כללית הנוגעת למדעי המחשב, מידע ועבודות כלליות, 500 היא ספרות הנוגעת למדע טהור ו־700 היא ספרות הנוגעת לאומנות.<br>\n בתוך כל קטגוריה יש תתי־קטגוריות נוספות, כמו 004 שמתעסקת בעיבוד מידע, 005 שמתעסקת בתכנות, או 755 שמתעסקת באומנות בדת.<br>\n קבלו מהמשתמש שם ספר, ואת הקטגוריה שאליה הוא משתייך.<br>\n אם משתמש הקליד מספר שאינו בעל 3 ספרות, כמו \"4\", הניחו שהמשתמש התכוון להקליד 004 והשלימו עבורו את המלאכה.<br>\n הדפיסו למשתמש את מספר הקטגוריה אחרי התיקון, או \"קטגוריה שגויה\" אם הקלט שהוזן לא היה מספרי.\n</p>\n<p style=\"text-align: right; direction: rtl; float: right; clear: both;\">\nלדוגמה:\n</p>\n<ul style=\"text-align: right; direction: rtl; float: right; clear: both;\">\n <li>אם משתמש הקליד 5, הדפיסו לו <samp>005</samp>.</li>\n <li>אם משתמש הקליד 007, הדפיסו לו <samp>007</samp>.</li>\n <li>אם משתמש הקליד 70, הדפיסו לו <samp>070</samp>.</li>\n <li>אם משתמש הקליד 700, הדפיסו לו <samp>700</samp>.</li>\n <li>אם משתמש הקליד <span dir=\"ltr\" style=\"direction: ltr\">-1</span>, הדפיסו לו <samp>\"Wrong Category\"</samp>.</li>\n <li>אם משתמש הקליד Art, הדפיסו לו <samp>\"Wrong Category\"</samp>.</li>\n</ul>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
briennakh/BIOF509
Wk11/Wk11-regression-classification.ipynb
mit
[ "Week 11 - Regression and Classification\nIn previous weeks we have looked at the steps needed in preparing different types of data for use by machine learning algorithms.", "import matplotlib.pyplot as plt\nimport numpy as np\n\n%matplotlib inline\n\nfrom sklearn import datasets\n\ndiabetes = datasets.load_diabetes()\n\n# Description at http://www4.stat.ncsu.edu/~boos/var.select/diabetes.html\n\nX = diabetes.data\ny = diabetes.target\n\nprint(X.shape, y.shape)\n\nfrom sklearn import linear_model\n\nclf = linear_model.LinearRegression()\nclf.fit(X, y)\n\nplt.plot(y, clf.predict(X), 'k.')\nplt.show()", "All the different models in scikit-learn follow a consistent structure. \n\nThe class is passed any parameters needed at initialization. In this case none are needed.\nThe fit method takes the features and the target as the parameters X and y.\nThe predict method takes an array of features and returns the predicted values\n\nThese are the basic components with additional methods added when needed. For example, classifiers also have \n\nA predict_proba method that gives the probability that a sample belongs to each of the classes.\nA predict_log_proba method that gives the log of the probability that a sample belongs to each of the classes.\n\nEvaluating models\nBefore we consider whether we have a good model, or which model to choose, we must first decide on how we will evaluate our models.\nMetrics\nAs part of our evaluation having a single number with which to compare models can be very useful. Choosing a metric that is as close a representation of our goal as possible enables many models to be automatically compared. This can be important when choosing model parameters or comparing different types of algorithm. \nEven if we have a metric we feel is reasonable it can be worthwhile considering in detail the predictions made by any model. Some questions to ask:\n\nIs the model sufficiently sensitive for our use case?\nIs the model sufficiently specific for our use case?\nIs there any systemic bias?\nDoes the model perform equally well over the distribution of features?\nHow does the model perform outside the range of the training data?\nIs the model overly dependent on one or two samples in the training dataset?\n\nThe metric we decide to use will depend on the type of problem we have (regression or classification) and what aspects of the prediction are most important to us. For example, a decision we might have to make is between:\n\nA model with intermediate errors for all samples\nA model with low errors for the majority of samples but with a small number of samples that have large errors.\n\nFor these two situations in a regression task we might choose mean_squared_error and mean_absolute_error.\nThere are lists for regression metrics and classification metrics.\nWe can apply the mean_squared_error metric to the linear regression model on the diabetes dataset:", "diabetes = datasets.load_diabetes()\nX = diabetes.data\ny = diabetes.target\n\nclf = linear_model.LinearRegression()\nclf.fit(X, y)\n\nplt.plot(y, clf.predict(X), 'k.')\nplt.show()\n\nfrom sklearn import metrics\n\nmetrics.mean_squared_error(y, clf.predict(X))", "Although this single number might seem unimpressive, metrics are a key component for model evaluation. 
As a simple example, we can perform a permutation test to determine whether we might see this performance by chance.", "diabetes = datasets.load_diabetes()\nX = diabetes.data\ny = diabetes.target\n\nclf = linear_model.LinearRegression()\nclf.fit(X, y)\n\nerror = metrics.mean_squared_error(y, clf.predict(X))\n\nrounds = 1000\nnp.random.seed(0)\nerrors = []\n\nfor i in range(rounds):\n y_shuffle = y.copy()\n np.random.shuffle(y_shuffle)\n clf_shuffle = linear_model.LinearRegression()\n clf_shuffle.fit(X, y_shuffle)\n errors.append(metrics.mean_squared_error(y_shuffle, clf_shuffle.predict(X)))\n\nbetter_models_by_chance = len([i for i in errors if i <= error])\n\nif better_models_by_chance > 0:\n print('Probability of observing a mean_squared_error of {0} by chance is {1}'.format(error, \n better_models_by_chance / rounds))\nelse:\n print('Probability of observing a mean_squared_error of {0} by chance is <{1}'.format(error, \n 1 / rounds))", "Training, validation, and test datasets\nWhen evaluating different models the approach taken above is not going to work. Particularly for models with high variance, that overfit the training data, we will get very good performance on the training data but perform no better than chance on new data.", "from sklearn import tree\n\ndiabetes = datasets.load_diabetes()\nX = diabetes.data\ny = diabetes.target\n\nclf = tree.DecisionTreeRegressor()\nclf.fit(X, y)\n\nplt.plot(y, clf.predict(X), 'k.')\nplt.show()\n\n\nmetrics.mean_squared_error(y, clf.predict(X))\n\nfrom sklearn import neighbors\n\ndiabetes = datasets.load_diabetes()\nX = diabetes.data\ny = diabetes.target\n\nclf = neighbors.KNeighborsRegressor(n_neighbors=1)\nclf.fit(X, y)\n\nplt.plot(y, clf.predict(X), 'k.')\nplt.show()\n\n\nmetrics.mean_squared_error(y, clf.predict(X))", "Both these models appear to give perfect solutions but all they do is map our test samples back to the training samples and return the associated value.\nTo understand how our model truly performs we need to evaluate the performance on previously unseen samples. The general approach is to divide a dataset into training, validation and test datasets. Each model is trained on the training dataset. Multiple models can then be compared by evaluating the model against the validation dataset. There is still the potential of choosing a model that performs well on the validation dataset by chance so a final check is made against a test dataset.\nThis unfortunately means that part of our, often expensively gathered, data can't be used to train our model. Although it is important to leave out a test dataset an alternative approach can be used for the validation dataset. Rather than just building one model we can build multiple models, each time leaving out a different validation dataset. Our validation score is then the average across each of the models. This is known as cross-validation.\nScikit-learn provides classes to support cross-validation but a simple solution can also be implemented directly. 
Below we will separate out a test dataset to evaluate the nearest neighbor model.", "from sklearn import neighbors\n\ndiabetes = datasets.load_diabetes()\nX = diabetes.data\ny = diabetes.target\n\nnp.random.seed(0)\n\nsplit = np.random.random(y.shape) > 0.3\n\nX_train = X[split]\ny_train = y[split]\nX_test = X[np.logical_not(split)]\ny_test = y[np.logical_not(split)]\n\nprint(X_train.shape, X_test.shape)\n\nclf = neighbors.KNeighborsRegressor(1)\nclf.fit(X_train, y_train)\n\nplt.plot(y_test, clf.predict(X_test), 'k.')\nplt.show()\n\n\nmetrics.mean_squared_error(y_test, clf.predict(X_test))\n\ndiabetes = datasets.load_diabetes()\nX = diabetes.data\ny = diabetes.target\n\nnp.random.seed(0)\n\nsplit = np.random.random(y.shape) > 0.3\n\nX_train = X[split]\ny_train = y[split]\nX_test = X[np.logical_not(split)]\ny_test = y[np.logical_not(split)]\n\nprint(X_train.shape, X_test.shape)\n\nclf = linear_model.LinearRegression()\nclf.fit(X_train, y_train)\n\nplt.plot(y_test, clf.predict(X_test), 'k.')\nplt.show()\n\n\nmetrics.mean_squared_error(y_test, clf.predict(X_test))", "Model types\nScikit-learn includes a variety of different models. The most commonly used algorithms probably include the following:\n\nRegression\nSupport Vector Machines\nNearest neighbors\nDecision trees\nEnsembles & boosting\n\nRegression\nWe have already seen several examples of regression. The basic form is: \n$$f(X) = \\beta_{0} + \\sum_{j=1}^p X_j\\beta_j$$\nEach feature is multipled by a coefficient and then the sum returned. This value is then transformed for classification to limit the value to the range 0 to 1.\nSupport Vector Machines\nSupport vector machines attempt to project samples into a higher dimensional space such that they can be divided by a hyperplane. A good explanation can be found in this article.\nNearest neighbors\nNearest neighbor methods identify a number of samples from the training set that are close to the new sample and then return the average or most common value depending on the task. \nDecision trees\nDecision trees attempt to predict the value of a new sample by learning simple rules from the training samples.\nEnsembles & boosting\nEnsembles are combinations of other models. Combining different models together can improve performance by boosting generalizability. An average or most common value from the models is returned.\nBoosting builds one model and then attempts to reduce the errors with the next model. At each stage the bias in the model is reduced. In this way many weak predictors can be combined into one much more powerful predictor.\nI often begin with an ensemble or boosting approach as they typically give very good performance without needing to be carefully optimized. Many of the other algorithms are sensitive to their parameters.\nParameter selection\nMany of the models require several different parameters to be specified. 
Their performance is typically heavily influenced by these parameters and choosing the best values is vital in developing the best model.\nSome models have alternative implementations that handle parameter selection in an efficient way.", "from sklearn import datasets\n\ndiabetes = datasets.load_diabetes()\n\n# Description at http://www4.stat.ncsu.edu/~boos/var.select/diabetes.html\n\nX = diabetes.data\ny = diabetes.target\n\nprint(X.shape, y.shape)\n\nfrom sklearn import linear_model\n\nclf = linear_model.LassoCV(cv=20)\nclf.fit(X, y)\n\nprint('Alpha chosen was ', clf.alpha_)\n\nplt.plot(y, clf.predict(X), 'k.')", "There is an expanded example in the documentation.\nThere are also general classes to handle parameter selection for situations when dedicated classes are not available. As we will often have parameters in preprocessing steps these general classes will be used much more often.", "from sklearn import grid_search\n\nfrom sklearn import neighbors\n\ndiabetes = datasets.load_diabetes()\nX = diabetes.data\ny = diabetes.target\n\nnp.random.seed(0)\n\nsplit = np.random.random(y.shape) > 0.3\n\nX_train = X[split]\ny_train = y[split]\nX_test = X[np.logical_not(split)]\ny_test = y[np.logical_not(split)]\n\nprint(X_train.shape, X_test.shape)\n\nknn = neighbors.KNeighborsRegressor()\n\nparameters = {'n_neighbors':[1,2,3,4,5,6,7,8,9,10]}\nclf = grid_search.GridSearchCV(knn, parameters)\n\nclf.fit(X_train, y_train)\n\nplt.plot(y_test, clf.predict(X_test), 'k.')\nplt.show()\n\n\nprint(metrics.mean_squared_error(y_test, clf.predict(X_test)))\n\nclf.get_params()", "Exercises\n\nLoad the handwritten digits dataset and choose an appropriate metric\nDivide the data into a training and test dataset\nBuild a RandomForestClassifier on the training dataset, using cross-validation to evaluate performance\nChoose another classification algorithm and apply it to the digits dataset. \nUse grid search to find the optimal parameters for the chosen algorithm.\nComparing the true values with the predictions from the best model identify the numbers that are most commonly confused." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
marburg-open-courseware/gmoc
docs/mpg-if_error_continue/examples/e-02-2_conditionals.ipynb
mit
[ "Conditionals\nYou have already encountered conditional (or comparison) operators in this course <a href=\"https://oer.uni-marburg.de/goto.php?target=pg_2624_720&client_id=mriliasmooc\">before</a> (not least during our session on while loops), performed simple arithmetic comparisons and probably came across them in other programming languages as well &ndash; so what comes next should be 'old hat'.\nBriefly, conditional statements in the form of simple if statements tell Python to perform a certain action depending on whether the stated condition is True or False (in this context, please note that these two so-called Boolean values are always written without quotes!). The syntax of a simple if statement is \n\nif condition : <br>\n&ensp;&ensp;&ensp;indentedStatement\n\nwhich translates into \"If the stated condition evaluates to True, execute the indented statement.\" Or, to express this in a more visual form:\n<br>\n<center>\n\n<figure>\n <img src=\"https://www.tutorialspoint.com/python/images/if_else_statement.jpg\" alt=\"nino-3.4\" width=\"275\"> \n <figcaption>Source: https://www.tutorialspoint.com</figcaption>\n </figure>\n</center>\n<br>\nJust like loop constructs in Python, indentation is mandatory for any kind of conditional statement. Accordingly, in its most basic form, a simple if statement would look as follows:", "m = 5\n\nif m > 0:\n print(\"Larger than zero.\")", "In this example, it is True that our variable m is larger than zero, and therefore, the print call ('if code' in the above figure) is executed. Now, what if the condition were not True? Well...", "n = -5\n\nif n > 0:\n print(\"Larger than zero.\")", "In such a case, the condition evaluates to False and the print call included in the indented statement is simply skipped. However, showing (or rather creating) no output at all is not always desirable, and in the majority of use cases, there will definitely be a pool of two or more possibilities that need to be taken into consideration. \nIn order to overcome such limitations of simple if statements, we could insert an else clause followed by a code body that should be executed once the initial condition evaluates to False ('else code' in the above figure) in order for our code to behave more informative.", "if n > 0:\n print(\"Larger than zero.\")\nelse:\n print(\"Smaller than or equal to zero.\")", "At this point, be aware that the lines including if and else are not indented, whereas the related code bodies are. Due to the black-and-white nature of such an if-else statement, exactly one out of two possible blocks is executed. Starting from the top, \n\nthe if statement is evaluated and returns False (because n is not larger than zero), \nhence the indented 'if code' underneath is skipped. \nOur code now jumps directly to the next statement that is not indented \nand evaluates the else statement included therein. \nSince else means: \"Perform the following operation if all previous conditional statements (plural!) failed\", which evaluates to True in our case, \nthe subsequent print operation is executed.\n\nNow, what if we wanted to know if a value is larger than, smaller than, or equal to zero, i.e. add another layer of information to our initial condition \"Is a value larger than zero or not?\". 
In order to solve this, elif (short for 'else if' in other languages) is the right way to go as it lets you insert an arbitrary number of additional conditions between if and else that go beyond the rather basic capabilities of else.", "if n > 0:\n print(\"Larger than zero.\")\nelif n < 0:\n print(\"Smaller than zero.\")\nelse:\n print(\"Exactly zero.\")", "And similarly,", "p = 0\n\nif p > 0:\n print(\"Larger than zero.\")\nelif p < 0:\n print(\"Smaller than zero.\")\nelse:\n print(\"Exactly zero.\")", "Of course, indented blocks can have more than one statement, i.e. consist of multiple indented lines of code. In addition, they can embrace, or be embraced by, for or while loops. For example, if we wanted to count all the non-negative entries in a list, the following code snippet would be a proper solution that relies on both of the aforementioned features.", "x = [0, 3, -6, -2, 7, 1, -4]\n\n## set a counter\nn = 0\n\nfor i in range(len(x)):\n # if a non-negative integer is found, increment the counter by 1\n if x[i] >= 0:\n print(\"The value at position\", i, \"is larger than or equal to zero.\")\n n += 1\n # else do not increment the counter \n else:\n print(\"The value at position\", i, \"is smaller than zero.\")\n \n if i == (len(x)-1):\n print(\"\\n\")\n \nprint(n, \"out of\", len(x), \"elements are larger than or equal to zero.\") ", "<hr>\n\nBrief digression: continue and break\nThere are (at least) two key words that allow for an even finer control of what happens inside a for loop, viz.\n\ncontinue and\nbreak.\n\nAs the name implies, continue moves directly on to the next iteration of a loop without executing the remaining code body.", "for i in range(5):\n if i in [1, 3]:\n continue\n print(i) ", "break, on the other hand, breaks out of the innermost loop. Here, (i) the remaining code body following the break statement in the current iteration, but also (ii) any outstanding iterations are not executed anymore.", "for i in range(5):\n if i == 2:\n break\n print(i) ", "Bear in mind that break jumps out of the innermost loop only. This still means that other for (or while) loops embracing the one broken out from will continue running until the last item has been processed (or the condition is no longer True). See also the official <a href=\"https://docs.python.org/2/tutorial/controlflow.html\">Python Docs</a> for a full list of control flow tools.\n<hr>\n\nAlright, now that you know your way around with conditional if-elif-else constructs, its time to move on to some more sophisticated use-case scenarios. Therefore, head over to <a href=\"https://oer.uni-marburg.de/goto.php?target=pg_5104_720&client_id=mriliasmooc\">W02-2: Conditionals</a> and tackle the tasks waiting for you there." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rasbt/algorithms_in_ipython_notebooks
ipython_nbs/search/breadth-first-search.ipynb
gpl-3.0
[ "%load_ext watermark\n%watermark -a 'Sebastian Raschka' -u -d -v", "Breadth-First Search\nBreadt-first search (BFS) is algorithm that can find the closest members in a graph that match a certain search criterion.\nBFS requires that we model our problem as a graph (nodes connected through edges). BFS can be applied to directed and undirected graph, where it can be applied to answer to types of question:\n\nIs there are connection between a particular pair of nodes?\nWhich is the closest node to a given node that satisfies a certain criterion?\n\nTo answer these questions, BFS starts by checking all direct neighbors of a given node -- neighbors are nodes that have a direct connection to a particular node. Then, if none of those neighbors satisfies the criterion that we are looking for, the search is expanded to the neighbors of the nodes we just checked, and so on, until a match is found or all nodes in the graph were checked.\nTo keep track of the nodes that we have already checked and that we are going to check, we need two additional data structures: \n1) A hash table to keep track of nodes we have already checked. If we don't check for previously checked nodes, we may end up in cycles depending on the structure of the graph.\n2) A queue that stores the items to be checked.\nRepresenting the graph\nTo represent the graph, its nodes and edges, we can simply use a hash table such as Python's built-in dictionaries. Imagine we have an undirected, social network graph that lists our direct friends (Elijah, Marissa, Nikolai) and friends of friends:\n<img src=\"images/breadth-first-search/friend-graph-1.jpg\" alt=\"\" style=\"width: 400px;\"/>\nSay we are going to move to a new apartment next weekend, and we want to ask our friends if they have a pick-up truck that can be helpful in this endeavor. First, we would reach out to our directed friends (or 1st degree connections). If none of these have a pick-up truck, we ask them to ask their 1st degree connections (which are our 2nd degree connections), and so forth:\n<img src=\"images/breadth-first-search/friend-graph-2.jpg\" alt=\"\" style=\"width: 600px;\"/>\nWe can represent such a graph using a simple hash table (here: Python dictionary) as follows:", "graph = {}\n\ngraph['You'] = ['Elijah', 'Marissa', 'Nikolai', 'Cassidy']\ngraph['Elijah'] = ['You']\ngraph['Marissa'] = ['You']\ngraph['Nikolai'] = ['John', 'Thomas', 'You']\ngraph['Cassidy'] = ['John', 'You']\ngraph['John'] = ['Cassidy', 'Nikolai']\ngraph['Thomas'] = ['Nikolai', 'Mario']\ngraph['Mario'] = ['Thomas']", "The Queue data structure\nNext, let's setup a simple queue data structure. Of course, we can also use a regular Python list like a queue (using .insert(0, x) and .pop(), but this way, our breadth-first search implementation is maybe more illustrative. 
For more information about queues, please see the Queues and Deques notebook.", "class QueueItem():\n def __init__(self, value, pointer=None):\n self.value = value\n self.pointer = pointer\n\nclass Queue():\n def __init__(self):\n self.last = None\n self.first = None\n self.length = 0\n \n def enqueue(self, value):\n item = QueueItem(value, None)\n if self.last:\n self.last.pointer = item\n if not self.first:\n self.first = item\n self.last = item\n self.length += 1\n \n def dequeue(self):\n if self.first is not None:\n value = self.first.value\n self.first = self.first.pointer\n self.length -= 1\n else:\n value = None\n return value\n\nqe = Queue()\nqe.enqueue('a')\n\nprint('First element:', qe.first.value)\nprint('Last element:', qe.last.value)\nprint('Queue length:', qe.length)\n\nqe.enqueue('b')\n\nprint('First element:', qe.first.value)\nprint('Last element:', qe.last.value)\nprint('Queue length:', qe.length)\n\nqe.enqueue('c')\n\nprint('First element:', qe.first.value)\nprint('Last element:', qe.last.value)\nprint('Queue length:', qe.length)\n\nval = qe.dequeue()\n\nprint('Dequeued value:', val)\nprint('Queue length:', qe.length)\n\nval = qe.dequeue()\n\nprint('Dequeued value:', val)\nprint('Queue length:', qe.length)\n\nval = qe.dequeue()\n\nprint('Dequeued value:', val)\nprint('Queue length:', qe.length)\n\nval = qe.dequeue()\n\nprint('Dequeued value:', val)\nprint('Queue length:', qe.length)\n\nqe.enqueue('c')\n\nprint('First element:', qe.first.value)\nprint('Last element:', qe.last.value)\nprint('Queue length:', qe.length)", "Implementing breadth-first search to find the shortest path\nNow, back to the graph, where we want to identify the closest connection that owns a truck, which can be helpful for moving (if we are allowed to borrow it, that is):\n<img src=\"images/breadth-first-search/friend-graph-2.jpg\" alt=\"\" style=\"width: 600px;\"/>", "graph = {}\n\ngraph['You'] = ['Elijah', 'Marissa', 'Nikolai', 'Cassidy']\ngraph['Elijah'] = ['You']\ngraph['Marissa'] = ['You']\ngraph['Nikolai'] = ['John', 'Thomas', 'You']\ngraph['Cassidy'] = ['John', 'You']\ngraph['John'] = ['Cassidy', 'Nikolai']\ngraph['Thomas'] = ['Nikolai', 'Mario']\ngraph['Mario'] = ['Thomas']", "For simplicity, let's assume we have a function that checks if a person owns a pick-up truck. (Say, Mario owns a pick-up truck, the check function knows it but we don't know it.)", "def has_truck(person):\n if person == 'Mario':\n return True\n else:\n return False", "Now, the breadth_first_search implementation below will check our closest neighbors first, then it will check the neighbors of our neighbors and so forth. We will make use of both the graph we constructed and the Queue data structure that we implemented. Also, note that we are keeping track of people we already checked to prevent cycles in our search:", "def breadth_first_search(graph):\n\n # initialize queue\n queue = Queue()\n for person in graph['You']:\n queue.enqueue(person)\n\n people_checked = set()\n degree = 0\n \n while queue.length:\n \n person = queue.dequeue()\n\n if has_truck(person):\n return person\n else:\n degree += 1\n people_checked.add(person)\n for next_person in graph[person]:\n # check to prevent endless cycles\n if next_person not in people_checked:\n queue.enqueue(next_person)\n\nbreadth_first_search(graph)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
biof-309-python/BIOF309-2016-Fall
Week_06/Week06 - 01 - Homework Solutions.ipynb
mit
[ "Week 6 Homework\n\nSource: Python for Biologists\nIn this folder you’ll find a text file called data.csv, containing some made-up data for a number of genes. Each line contains the following fields for a single gene in this order: species name, sequence, gene name, expression level. The fields are separated by commas (hence the name of the file – csv stands for Comma Separated Values). Think of it as a representation of a table in a spreadsheet – each line is a row, and each field in a line is a column. All the exercises for this section use the data read from this file.\nSeveral species\nPrint out the gene names for all genes belonging to Drosophila melanogaster or Drosophila simulans.", "# %load data.csv\nDrosophila melanogaster,atatatatatcgcgtatatatacgactatatgcattaattatagcatatcgatatatatatcgatattatatcgcattatacgcgcgtaattatatcgcgtaattacga,kdy647,264\nDrosophila melanogaster,actgtgacgtgtactgtacgactatcgatacgtagtactgatcgctactgtaatgcatccatgctgacgtatctaagt,jdg766,185\nDrosophila simulans,atcgatcatgtcgatcgatgatgcatccgactatcgtcgatcgtgatcgatcgatcgatcatcgatcgatgtcgatcatgtcgatatcgt,kdy533,485\nDrosophila yakuba,cgcgcgctcgcgcatacggcctaatgcgcgcgctagcgatgc,hdt739,85\nDrosophila ananassae,ttacgatcgatcgatcgatcgatcgtcgatcgtcgatgctacatcgatcatcatcggattagtcacatcgatcgatcatcgactgatcgtcgatcgtagatgctgacatcgatagca,hdu045,356\nDrosophila ananassae,gcatcgatcgatcgcggcgcatcgatcgcgatcatcgatcatacgcgtcatatctatacgtcactgccgcgcgtatctacgcgatgactagctagact,teg436,222\n\n\n# Look at csv module\n\nimport csv\nwith open('data.csv') as csvfile:\n raw_data = csv.reader(csvfile, delimiter=' ', quotechar='|')\n for row in raw_data:\n print(', '.join(row))\n\n# Look at csv module\n\nimport csv\nwith open('data.csv') as csvfile:\n raw_data = csv.reader(csvfile)\n for row in raw_data:\n print(row)\n\n# Look at csv module\n\nimport csv\nwith open('data.csv') as csvfile:\n raw_data = csv.reader(csvfile)\n for row in raw_data:\n if row[0] == 'Drosophila melanogaster' or row[0] == 'Drosophila simulans':\n print(row[2])\n\nimport csv\nwith open('data.csv') as csvfile:\n raw_data = csv.reader(csvfile)\n for row in raw_data:\n if row[0] in ['Drosophila melanogaster', 'Drosophila simulans']:\n print(row[2])", "Length range\nPrint out the gene names for all genes between 90 and 110 bases long.", "import csv\nwith open('data.csv') as csvfile:\n raw_data = csv.reader(csvfile)\n for row in raw_data:\n if len(row[1]) >= 90 or len(row[1]) <= 110:\n print(row[2])", "AT content\nPrint out the gene names for all genes whose AT content is less than 0.5 and whose expression level is greater than 200.", "def is_at_rich(dna):\n length = len(dna)\n a_count = dna.upper().count('A')\n t_count = dna.upper().count('T')\n at_content = (a_count + t_count) / length\n return at_content < 0.5\n\nimport csv\nwith open('data.csv') as csvfile:\n raw_data = csv.reader(csvfile)\n for row in raw_data:\n if is_at_rich(row[1]) and int(row[3]) > 200:\n print(row[2])", "Complex condition\nPrint out the gene names for all genes whose name begins with “k” or “h” except those belonging to Drosophila melanogaster.", "import csv\nwith open('data.csv') as csvfile:\n raw_data = csv.reader(csvfile)\n for row in raw_data:\n if (row[2].startswith('k') or row[2].startswith('h')) and row[0] != 'Drosophila melanogaster':\n print(row[2])", "High low medium\nFor each gene, print out a message giving the gene name and saying whether its AT content is high (greater than 0.65), low (less than 0.45) or medium (between 0.45 and 0.65).", "def at_percentage(dna):\n length = len(dna)\n a_count = 
dna.upper().count('A')\n t_count = dna.upper().count('T')\n at_content = (a_count + t_count) / length\n return at_content\n\nimport csv\nwith open('data.csv') as csvfile:\n raw_data = csv.reader(csvfile)\n for row in raw_data:\n at_percent = at_percentage(row[1])\n if at_percent > 0.65:\n print(row[2], 'AT content is high')\n elif at_percent < 0.45:\n print(row[2], 'AT content is low')\n else:\n print(row[2], 'AT content is medium')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/zh-cn/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb
apache-2.0
[ "High-performance Simulation with Kubernetes\nThis tutorial will describe how to set up high-performance simulation using a\nTFF runtime running on Kubernetes. The model is the same as in the previous\ntutorial, High-performance simulations with TFF. The only difference is that\nhere we use a worker pool instead of a local executor.\nThis tutorial refers to Google Cloud's GKE to create the Kubernetes cluster,\nbut all the steps after the cluster is created can be used with any Kubernetes\ninstallation.\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://tensorflow.google.cn/federated/tutorials/high_performance_simulation_with_kubernetes\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/federated/tutorials/high_performance_simulation_with_kubernetes.ipynb\"><img src=\"https://tensorflow.google.cn/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\n在 GKE 上启动 TFF 工作进程\n\n注:本教程假定用户目前拥有 GCP 项目。\n\n创建一个 Kubernetes 集群\n以下步骤只需执行一次。可以将该集群重用于将来的工作负载。\n按照 GKE 说明来创建容器集群。本教程的其余部分假定集群的名称为 tff-cluster,但实际名称并不重要。当您到达“第 5 步:部署应用”时,请停止按照说明操作。\n部署 TFF 工作进程应用\n与 GCP 交互的命令可以在本地运行,也可以在 Google Cloud Shell 中运行。我们建议使用 Google Cloud Shell,因为它不需要其他设置。\n\n运行以下命令来启动 Kubernetes 应用。\n\n$ kubectl create deployment tff-workers --image=gcr.io/tensorflow-federated/remote-executor-service:latest\n\n为应用添加一个负载均衡器。\n\n$ kubectl expose deployment tff-workers --type=LoadBalancer --port 80 --target-port 8000\n\n注:这会将您的部署公开到互联网,并且仅用于演示目的。对于生产用途,强烈建议使用防火墙和身份验证。\n\n在 Google Cloud Console 上查找负载均衡器的 IP 地址。您稍后会需要它来将训练循环连接到工作进程应用。\n(或者)在本地启动 Docker 容器\n$ docker run --rm -p 8000:8000 gcr.io/tensorflow-federated/remote-executor-service:latest\n设置 TFF 环境", "#@test {\"skip\": true}\n!pip install --quiet --upgrade tensorflow-federated-nightly\n!pip install --quiet --upgrade nest-asyncio\n\nimport nest_asyncio\nnest_asyncio.apply()", "定义要训练的模型", "import collections\nimport time\n\nimport tensorflow as tf\nimport tensorflow_federated as tff\n\nsource, _ = tff.simulation.datasets.emnist.load_data()\n\n\ndef map_fn(example):\n return collections.OrderedDict(\n x=tf.reshape(example['pixels'], [-1, 784]), y=example['label'])\n\n\ndef client_data(n):\n ds = source.create_tf_dataset_for_client(source.client_ids[n])\n return ds.repeat(10).batch(20).map(map_fn)\n\n\ntrain_data = [client_data(n) for n in range(10)]\ninput_spec = train_data[0].element_spec\n\n\ndef model_fn():\n model = tf.keras.models.Sequential([\n tf.keras.layers.InputLayer(input_shape=(784,)),\n tf.keras.layers.Dense(units=10, kernel_initializer='zeros'),\n tf.keras.layers.Softmax(),\n ])\n return tff.learning.from_keras_model(\n model,\n input_spec=input_spec,\n loss=tf.keras.losses.SparseCategoricalCrossentropy(),\n metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])\n\n\ntrainer = 
tff.learning.build_federated_averaging_process(\n model_fn, client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.02))\n\n\ndef evaluate(num_rounds=10):\n state = trainer.initialize()\n for round in range(num_rounds):\n t1 = time.time()\n state, metrics = trainer.next(state, train_data)\n t2 = time.time()\n print('Round {}: loss {}, round time {}'.format(round, metrics.loss, t2 - t1))", "设置远程执行器\n默认情况下,TFF 在本地执行所有计算。在此步骤中,我们指示 TFF 连接到我们在上面设置的 Kubernetes 服务。确保在此处复制服务的 IP 地址。", "import grpc\n\nip_address = '0.0.0.0' #@param {type:\"string\"}\nport = 80 #@param {type:\"integer\"}\n\nchannels = [grpc.insecure_channel(f'{ip_address}:{port}') for _ in range(10)]\n\ntff.backends.native.set_remote_execution_context(channels)", "运行训练", "evaluate()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/miroc/cmip6/models/miroc-es2h/seaice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: MIROC\nSource ID: MIROC-ES2H\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:40\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'miroc', 'miroc-es2h', 'seaice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties --&gt; Model\n2. Key Properties --&gt; Variables\n3. Key Properties --&gt; Seawater Properties\n4. Key Properties --&gt; Resolution\n5. Key Properties --&gt; Tuning Applied\n6. Key Properties --&gt; Key Parameter Values\n7. Key Properties --&gt; Assumptions\n8. Key Properties --&gt; Conservation\n9. Grid --&gt; Discretisation --&gt; Horizontal\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Seaice Categories\n12. Grid --&gt; Snow On Seaice\n13. Dynamics\n14. Thermodynamics --&gt; Energy\n15. Thermodynamics --&gt; Mass\n16. Thermodynamics --&gt; Salt\n17. Thermodynamics --&gt; Salt --&gt; Mass Transport\n18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\n19. Thermodynamics --&gt; Ice Thickness Distribution\n20. Thermodynamics --&gt; Ice Floe Size Distribution\n21. Thermodynamics --&gt; Melt Ponds\n22. Thermodynamics --&gt; Snow Processes\n23. Radiative Processes \n1. Key Properties --&gt; Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of sea ice model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the sea ice component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Ocean Freezing Point Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. 
In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Target\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Simulations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Metrics Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any observed metrics used in tuning model/parameters", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.5. Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhich variables were changed during the tuning process?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nWhat values were specificed for the following parameters if used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.2. Additional Parameters\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. On Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Missing Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nProvide a general description of conservation methodology.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Properties\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Was Flux Correction Used\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes conservation involved flux correction?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Grid --&gt; Discretisation --&gt; Horizontal\nSea ice discretisation in the horizontal\n9.1. 
Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGrid on which sea ice is horizontal discretised?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.2. Grid Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the type of sea ice grid?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the advection scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.4. Thermodynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.5. Dynamics Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "9.6. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional horizontal discretisation details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Number Of Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using multi-layers specify how many.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "10.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional vertical grid details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Grid --&gt; Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11.2. Number Of Categories\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify how many.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Category Limits\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Other\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Grid --&gt; Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow on ice represented in this model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Number Of Snow Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels of snow on ice?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.3. Snow Fraction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.4. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any additional details related to snow on ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Transport In Thickness Space\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Ice Strength Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich method of sea ice strength formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.4. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.5. Rheology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRheology, what is the ice deformation formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Thermodynamics --&gt; Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. 
Enthalpy Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the energy formulation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Thermal Conductivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of thermal conductivity is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of heat diffusion?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.4. Basal Heat Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.5. Fixed Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.6. Heat Content Of Precipitation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.7. Precipitation Effects On Salinity\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. 
Thermodynamics --&gt; Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Ice Vertical Growth And Melt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Ice Lateral Melting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the method of sea ice lateral melting?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.4. Ice Surface Sublimation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.5. Frazil Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of frazil ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Thermodynamics --&gt; Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17. Thermodynamics --&gt; Salt --&gt; Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Thermodynamics --&gt; Salt --&gt; Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. Constant Salinity Value\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.3. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the salinity profile used.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Thermodynamics --&gt; Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice thickness distribution represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20. Thermodynamics --&gt; Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is the sea ice floe-size represented?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Thermodynamics --&gt; Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre melt ponds included in the sea ice model?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21.2. Formulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat method of melt pond formulation is used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.3. Impacts\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhat do melt ponds have an impact on?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22. Thermodynamics --&gt; Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.2. Snow Aging Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow aging scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.3. Has Snow Ice Formation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.4. 
Snow Ice Formation Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow ice formation scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.5. Redistribution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the impact of ridging on snow cover?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.6. Heat Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod used to handle surface albedo.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Ice Radiation Transmission\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
geektoni/shogun
doc/ipython-notebooks/clustering/KMeans.ipynb
bsd-3-clause
[ "Clustering with KMeans in Shogun Machine Learning Toolbox\nNotebook by Parijat Mazumdar (GitHub ID: <a href='https://github.com/mazumdarparijat'>mazumdarparijat</a>)\nThis notebook demonstrates <a href=\"http://en.wikipedia.org/wiki/K-means_clustering\">clustering with KMeans</a> in Shogun along with its initialization and training. The initialization of cluster centres is shown manually, randomly and using the <a href=\"http://en.wikipedia.org/wiki/K-means%2B%2B\">KMeans++</a> algorithm. Training is done via the classical <a href=\"http://en.wikipedia.org/wiki/Lloyd%27s_algorithm\">Lloyds</a> and mini-batch KMeans method.\nIt is then applied to a real world data set. Furthermore, the effect of dimensionality reduction using <a href=\"http://en.wikipedia.org/wiki/Principal_component_analysis\">PCA</a> is analysed on the KMeans algorithm.\nKMeans - An Overview\nThe <a href=\"http://en.wikipedia.org/wiki/K-means_clustering\">KMeans clustering algorithm</a> is used to partition a space of n observations into k partitions (or clusters). Each of these clusters is denoted by the mean of the observation vectors belonging to it and a unique label which is attached to all the observations belonging to it. Thus, in general, the algorithm takes parameter k and an observation matrix (along with the notion of distance between points ie <i>distance metric</i>) as input and returns mean of each of the k clusters along with labels indicating belongingness of each observations. Let us construct a simple example to understand how it is done in Shogun using the <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1KMeans.html\">KMeans</a> class. \nLet us start by creating a toy dataset.", "import numpy as np\nimport shogun as sg\nimport os\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')\n\nnum = 200\nd1 = np.concatenate((np.random.randn(1,num),10.*np.random.randn(1,num)),0)\nd2 = np.concatenate((np.random.randn(1,num),10.*np.random.randn(1,num)),0)+np.array([[10.],[0.]])\nd3 = np.concatenate((np.random.randn(1,num),10.*np.random.randn(1,num)),0)+np.array([[0.],[100.]])\nd4 = np.concatenate((np.random.randn(1,num),10.*np.random.randn(1,num)),0)+np.array([[10.],[100.]])\n\nrectangle = np.concatenate((d1,d2,d3,d4),1)\ntotalPoints = 800 ", "The toy data created above consists of 4 gaussian blobs, having 200 points each, centered around the vertices of a rectancle. Let's plot it for convenience.", "import matplotlib.pyplot as plt\n%matplotlib inline\n\nfigure,axis = plt.subplots(1,1)\naxis.plot(rectangle[0], rectangle[1], 'o', color='r', markersize=5)\naxis.set_xlim(-5,15)\naxis.set_ylim(-50,150)\naxis.set_title('Toy data : Rectangle')\nplt.show()", "With data at our disposal, it is time to apply KMeans to it using the KMeans class in Shogun. 
First we construct Shogun features from our data:", "train_features = sg.create_features(rectangle)", "Next we specify the number of clusters we want and create a distance object specifying the distance metric to be used over our data for our KMeans training:", "# number of clusters\nk = 2\n\n# distance metric over feature matrix - Euclidean distance\ndistance = sg.create_distance('EuclideanDistance')\ndistance.init(train_features, train_features)", "Next, we create a KMeans object with our desired inputs/parameters and train:", "# KMeans object created\nkmeans = sg.create_machine(\"KMeans\", k=k, distance=distance)\n\n# KMeans training \nkmeans.train()", "Now that training has been done, let's get the cluster centers and label for each data point", "# cluster centers\ncenters = kmeans.get(\"cluster_centers\")\n\n# Labels for data points\nresult = kmeans.apply()", "Finally let us plot the centers and the data points (in different colours for different clusters):", "def plotResult(title = 'KMeans Plot'):\n figure,axis = plt.subplots(1,1)\n for i in range(totalPoints):\n if result.get(\"labels\")[i]==0.0:\n axis.plot(rectangle[0,i], rectangle[1,i], 'go', markersize=3)\n else:\n axis.plot(rectangle[0,i], rectangle[1,i], 'yo', markersize=3)\n axis.plot(centers[0,0], centers[1,0], 'go', markersize=10)\n axis.plot(centers[0,1], centers[1,1], 'yo', markersize=10)\n axis.set_xlim(-5,15)\n axis.set_ylim(-50,150)\n axis.set_title(title)\n plt.show()\n \nplotResult('KMeans Results')", "<b>Note:</b> You might not get the perfect result always. That is an inherent flaw of KMeans algorithm. In subsequent sections, we will discuss techniques which allow us to counter this.<br>\nNow that we have already worked out a simple KMeans implementation, it's time to understand certain specifics of KMeans implementaion and the options provided by Shogun to its users.\nInitialization of cluster centers\nThe KMeans algorithm requires that the cluster centers are initialized with some values. Shogun offers 3 ways to initialize the clusters. <ul><li>Random initialization (default)</li><li>Initialization by hand</li><li>Initialization using <a href=\"http://en.wikipedia.org/wiki/K-means%2B%2B\">KMeans++ algorithm</a></li></ul>Unless the user supplies initial centers or tells Shogun to use KMeans++, Random initialization is the default method used for cluster center initialization. This was precisely the case in the example discussed above.\nInitialization by hand\nThere are 2 ways to initialize centers by hand. 
One way is to pass on the centers during KMeans object creation, as follows:", "initial_centers = np.array([[0.,10.],[50.,50.]])\n\n# initial centers passed\nkmeans = sg.create_machine(\"KMeans\", k=k, distance=distance, initial_centers=initial_centers)", "Now, let's first get results by repeating the rest of the steps:", "# KMeans training \nkmeans.train(train_features)\n\n# cluster centers\ncenters = kmeans.get(\"cluster_centers\")\n\n# Labels for data points\nresult = kmeans.apply()\n\n# plot the results\nplotResult('Hand initialized KMeans Results 1')", "The other way to initialize centers by hand is as follows:", "new_initial_centers = np.array([[5.,5.],[0.,100.]])\n\n# set new initial centers\nkmeans.put(\"initial_centers\", new_initial_centers)", "Let's complete the rest of the code to get results.", "# KMeans training \nkmeans.train(train_features)\n\n# cluster centers\ncenters = kmeans.get(\"cluster_centers\")\n\n# Labels for data points\nresult = kmeans.apply()\n\n# plot the results\nplotResult('Hand initialized KMeans Results 2')", "Note the difference that inititial cluster centers can have on final result. \nInitializing using KMeans++ algorithm\nIn Shogun, a user can also use <a href=\"http://en.wikipedia.org/wiki/K-means%2B%2B\">KMeans++ algorithm</a> for center initialization. Using KMeans++ for center initialization is beneficial because it reduces total iterations used by KMeans and also the final centers mostly correspond to the global minima, which is often not the case with KMeans with random initialization. One of the ways to use KMeans++ is to set flag as <i>true</i> during KMeans object creation, as follows:", "# set flag for using KMeans++\nkmeans = sg.create_machine(\"KMeans\", k=k, distance=distance, kmeanspp=True)", "Completing rest of the steps to get result:", "# KMeans training \nkmeans.train(train_features)\n\n# cluster centers\ncenters = kmeans.get(\"cluster_centers\")\n\n# Labels for data points\nresult = kmeans.apply()\n\n# plot the results\nplotResult('KMeans with KMeans++ Results')", "Training Methods\nShogun offers 2 training methods for KMeans clustering:<ul><li><a href='http://en.wikipedia.org/wiki/K-means_clustering#Standard_algorithm'>Classical Lloyd's training</a> (default)</li><li><a href='http://www.eecs.tufts.edu/~dsculley/papers/fastkmeans.pdf'>mini-batch KMeans training</a></li></ul>Lloyd's training method is used by Shogun by default unless user switches to mini-batch training method.\nMini-Batch KMeans\nMini-batch KMeans is very useful in case of extremely large datasets and/or very high dimensional data which is often the case in text mining. One can switch to Mini-batch KMeans training while creating KMeans object as follows:", "# set training method to mini-batch\nkmeans = sg.create_machine(\"KMeansMiniBatch\", k=k, distance=distance)", "Completing the code to get results:", "# KMeans training \nkmeans.train(train_features)\n\n# cluster centers\ncenters = kmeans.get(\"cluster_centers\")\n\n# Labels for data points\nresult = kmeans.apply()\n\n# plot the results\nplotResult('Mini-batch KMeans Results')", "Applying KMeans on Real Data\nIn this section we see how useful KMeans can be in classifying the different varieties of Iris plant. For this purpose, we make use of Fisher's Iris dataset borrowed from the <a href='http://archive.ics.uci.edu/ml/datasets/Iris'>UCI Machine Learning Repository</a>. 
There are 3 varieties of Iris plants\n<ul><li>Iris Sensosa</li><li>Iris Versicolour</li><li>Iris Virginica</li></ul>\nThe Iris dataset enlists 4 features that can be used to segregate these varieties, namely\n<ul><li>sepal length</li><li>sepal width</li><li>petal length</li><li>petal width</li></ul>\nIt is additionally acknowledged that petal length and petal width are the 2 most important features (ie. features with very high class correlations)[refer to <a href='http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.names'>summary statistics</a>]. Since the entire feature vector is impossible to plot, we only plot these two most important features in order to understand the dataset (at least partially). Note that we could have extracted the 2 most important features by applying PCA (or any one of the many dimensionality reduction methods available in Shogun) as well.", "with open(os.path.join(SHOGUN_DATA_DIR, 'uci/iris/iris.data')) as f:\n feats = []\n # read data from file\n for line in f:\n words = line.rstrip().split(',')\n feats.append([float(i) for i in words[0:4]])\n\n# create observation matrix\nobsmatrix = np.array(feats).T\n\n# plot the data\nfigure,axis = plt.subplots(1,1)\n# First 50 data belong to Iris Sentosa, plotted in green\naxis.plot(obsmatrix[2,0:50], obsmatrix[3,0:50], 'o', color='green', markersize=5)\n# Next 50 data belong to Iris Versicolour, plotted in red\naxis.plot(obsmatrix[2,50:100], obsmatrix[3,50:100], 'o', color='red', markersize=5)\n# Last 50 data belong to Iris Virginica, plotted in blue\naxis.plot(obsmatrix[2,100:150], obsmatrix[3,100:150], 'o', color='blue', markersize=5)\naxis.set_xlim(-1,8)\naxis.set_ylim(-1,3)\naxis.set_title('3 varieties of Iris plants')\nplt.show()", "In the above plot we see that the data points labelled Iris Sentosa form a nice separate cluster of their own. But in case of other 2 varieties, while the data points of same label do form clusters of their own, there is some mixing between the clusters at the boundary. 
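As a quick, illustrative side check (not part of the original walkthrough), the claim above that petal length and petal width have the highest class correlations can be verified directly from `obsmatrix`; the `class_labels` array built here is introduced only for this check, while the notebook constructs an equivalent `labels` array later for the accuracy evaluation.

```python
# correlate each feature with an integer class label (0, 1, 2; 50 samples per class in file order)
class_labels = np.repeat(np.arange(3.), 50)
feature_names = ['sepal length', 'sepal width', 'petal length', 'petal width']
for idx, name in enumerate(feature_names):
    corr = np.corrcoef(obsmatrix[idx, :], class_labels)[0, 1]
    print('%-12s : %+.2f' % (name, corr))
```

The two petal features should come out with correlations close to +0.95, noticeably higher in magnitude than the sepal features, which supports plotting only those two dimensions.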
Now let us apply KMeans algorithm and see how well we can extract these clusters.", "def apply_kmeans_iris(data):\n # wrap to Shogun features\n train_features = sg.create_features(data)\n\n # number of cluster centers = 3\n k = 3\n\n # distance function features - euclidean\n distance = sg.create_distance('EuclideanDistance')\n distance.init(train_features, train_features)\n\n # initialize KMeans object, use kmeans++ to initialize centers [play around: change it to False and compare results]\n kmeans = sg.create_machine(\"KMeans\", k=k, distance=distance, kmeanspp=True)\n\n # training method is Lloyd by default [play around: change it to mini-batch by uncommenting the following lines]\n #kmeans = sg.create_machine(\"KMeansMiniBatch\", k=k, distance=distance)\n\n # training kmeans\n kmeans.train(train_features)\n\n # labels for data points\n result = kmeans.apply()\n return result\n\nresult = apply_kmeans_iris(obsmatrix)", "Now let us create a 2-D plot of the clusters formed making use of the two most important features (petal length and petal width) and compare it with the earlier plot depicting the actual labels of data points.", "# plot the clusters over the original points in 2 dimensions\nfigure,axis = plt.subplots(1,1)\nfor i in range(150):\n if result.get(\"labels\")[i]==0.0:\n axis.plot(obsmatrix[2,i],obsmatrix[3,i],'ro', markersize=5)\n elif result.get(\"labels\")[i]==1.0:\n axis.plot(obsmatrix[2,i],obsmatrix[3,i],'go', markersize=5)\n else:\n axis.plot(obsmatrix[2,i],obsmatrix[3,i],'bo', markersize=5)\n\naxis.set_xlim(-1,8)\naxis.set_ylim(-1,3)\naxis.set_title('Iris plants clustered based on attributes')\nplt.show()", "From the above plot, it can be inferred that the accuracy of KMeans algorithm is very high for Iris dataset. Don't believe me? Alright, then let us make use of one of Shogun's clustering evaluation techniques to formally validate the claim. But before that, we have to label each sample in the dataset with a label corresponding to the class to which it belongs.", "# first 50 are iris sensosa labelled 0, next 50 are iris versicolour labelled 1 and so on\nlabels = np.concatenate((np.zeros(50),np.ones(50),2.*np.ones(50)),0)\n\n# bind labels assigned to Shogun multiclass labels\nground_truth = sg.create_labels(np.array(labels,dtype='float64'))", "Now we can compute clustering accuracy making use of the ClusteringAccuracy class in Shogun", "def analyzeResult(result): \n # shogun object for clustering accuracy\n AccuracyEval = sg.create_evaluation(\"ClusteringAccuracy\")\n\n # evaluates clustering accuracy\n accuracy = AccuracyEval.evaluate(result, ground_truth)\n\n # find out which sample points differ from actual labels (or ground truth)\n compare = result.get(\"labels\")-labels\n diff = np.nonzero(compare)\n return (diff,accuracy)\n\n(diff,accuracy_4d) = analyzeResult(result)\nprint('Accuracy : ' + str(accuracy_4d))\n\n# plot the difference between ground truth and predicted clusters\nfigure,axis = plt.subplots(1,1)\naxis.plot(obsmatrix[2,:],obsmatrix[3,:],'x',color='black', markersize=5)\naxis.plot(obsmatrix[2,diff],obsmatrix[3,diff],'x',color='r', markersize=7)\naxis.set_xlim(-1,8)\naxis.set_ylim(-1,3)\naxis.set_title('Difference')\nplt.show()", "In the above plot, wrongly clustered data points are marked in red. We see that the Iris Sentosa plants are perfectly clustered without error. 
The Iris Versicolour plants and Iris Virginica plants are also clustered with high accuracy, but there are some plant samples of either class that have been clustered with the wrong class. This happens near the boundary of the 2 classes in the plot and was well expected. Having mastered KMeans, it's time to move on to next interesting topic. \nPCA as a preprocessor to KMeans\nKMeans is highly affected by the <i>curse of dimensionality</i>. So, dimension reduction becomes an important preprocessing step. Shogun offers a variety of dimension reduction techniques to choose from. Since our data is not very high dimensional, PCA is a good choice for dimension reduction. We have already seen the accuracy of KMeans when all four dimensions are used. In the following exercise we shall see how the accuracy varies as one chooses lower dimensions to represent data. \n1-Dimensional representation\nLet us first apply PCA to reduce training features to 1 dimension", "def apply_pca_to_data(target_dims):\n train_features = sg.create_features(obsmatrix)\n submean = sg.create_transformer(\"PruneVarSubMean\", divide_by_std=False)\n submean.fit(train_features)\n submean.transform(train_features)\n preprocessor = sg.create_transformer(\"PCA\", target_dim=target_dims)\n preprocessor.fit(train_features)\n pca_transform = preprocessor.get(\"transformation_matrix\")\n new_features = np.dot(pca_transform.T, train_features.get(\"feature_matrix\"))\n return new_features\n\noneD_matrix = apply_pca_to_data(1)", "Next, let us get an idea of the data in 1-D by plotting it.", "figure,axis = plt.subplots(1,1)\n# First 50 data belong to Iris Sentosa, plotted in green\naxis.plot(oneD_matrix[0,0:50], np.zeros(50), 'go', markersize=5)\n# Next 50 data belong to Iris Versicolour, plotted in red\naxis.plot(oneD_matrix[0,50:100], np.zeros(50), 'ro', markersize=5)\n# Last 50 data belong to Iris Virginica, plotted in blue\naxis.plot(oneD_matrix[0,100:150], np.zeros(50), 'bo', markersize=5)\naxis.set_xlim(-5,5)\naxis.set_ylim(-1,1)\naxis.set_title('3 varieties of Iris plants')\nplt.show()", "Let us now apply KMeans to the 1-D data to get clusters.", "result = apply_kmeans_iris(oneD_matrix)", "Now that we have the results, the inevitable step is to check how good these results are.", "(diff,accuracy_1d) = analyzeResult(result)\nprint('Accuracy : ' + str(accuracy_1d))\n\n# plot the difference between ground truth and predicted clusters\nfigure,axis = plt.subplots(1,1)\naxis.plot(oneD_matrix[0,:],np.zeros(150),'x',color='black', markersize=5)\naxis.plot(oneD_matrix[0,diff],np.zeros(len(diff)),'x',color='r', markersize=7)\naxis.set_xlim(-5,5)\naxis.set_ylim(-1,1)\naxis.set_title('Difference')\nplt.show()", "2-Dimensional Representation\nWe follow the same steps as above and get the clustering accuracy.\nSTEP 1 : Apply PCA and plot the data (plotting is optional)", "twoD_matrix = apply_pca_to_data(2)\n\nfigure,axis = plt.subplots(1,1)\n# First 50 data belong to Iris Sentosa, plotted in green\naxis.plot(twoD_matrix[0,0:50], twoD_matrix[1,0:50], 'go', markersize=5)\n# Next 50 data belong to Iris Versicolour, plotted in red\naxis.plot(twoD_matrix[0,50:100], twoD_matrix[1,50:100], 'ro', markersize=5)\n# Last 50 data belong to Iris Virginica, plotted in blue\naxis.plot(twoD_matrix[0,100:150], twoD_matrix[1,100:150], 'bo', markersize=5)\naxis.set_title('3 varieties of Iris plants')\nplt.show()", "STEP 2 : Apply KMeans to obtain clusters", "result = apply_kmeans_iris(twoD_matrix)", "STEP 3: Get the accuracy of the results", "(diff,accuracy_2d) = 
analyzeResult(result)\nprint('Accuracy : ' + str(accuracy_2d))\n\n# plot the difference between ground truth and predicted clusters\nfigure,axis = plt.subplots(1,1)\naxis.plot(twoD_matrix[0,:],twoD_matrix[1,:],'x',color='black', markersize=5)\naxis.plot(twoD_matrix[0,diff],twoD_matrix[1,diff],'x',color='r', markersize=7)\naxis.set_title('Difference')\nplt.show()", "3-Dimensional Representation\nAgain, we follow the same steps, but skip plotting data.\nSTEP 1: Apply PCA to data", "threeD_matrix = apply_pca_to_data(3)", "STEP 2: Apply KMeans to 3-D representation of data", "result = apply_kmeans_iris(threeD_matrix)", "STEP 3: Get accuracy of results. In this step, the 'difference' plot positions data points based on petal length \n and petal width in the original data. This will enable us to visually compare these results with those of KMeans applied\n to 4-Dimensional data (i.e. our first result on the Iris dataset)", "(diff,accuracy_3d) = analyzeResult(result)\nprint('Accuracy : ' + str(accuracy_3d))\n\n# plot the difference between ground truth and predicted clusters\nfigure,axis = plt.subplots(1,1)\naxis.plot(obsmatrix[2,:],obsmatrix[3,:],'x',color='black', markersize=5)\naxis.plot(obsmatrix[2,diff],obsmatrix[3,diff],'x',color='r', markersize=7)\naxis.set_title('Difference')\naxis.set_xlim(-1,8)\naxis.set_ylim(-1,3)\nplt.show()", "Finally, let us plot clustering accuracy vs. number of dimensions to consolidate our results.", "from scipy.interpolate import interp1d\n\nx = np.array([1, 2, 3, 4])\ny = np.array([accuracy_1d, accuracy_2d, accuracy_3d, accuracy_4d])\nf = interp1d(x, y)\nxnew = np.linspace(1,4,10)\nplt.plot(x,y,'o',xnew,f(xnew),'-')\nplt.xlim([0,5])\nplt.xlabel('no. of dims')\nplt.ylabel('Clustering Accuracy')\nplt.title('PCA Results')\nplt.show()", "The above plot is not very intuitive theoretically. The accuracy obtained by using just one latent dimension is much higher than that obtained by taking all four features. A plausible explanation could be that the mixing of data points from Iris Versicolour and Iris Virginica is least along the single principal dimension chosen by PCA. Additional dimensions only aggravate this inter-mixing, thus resulting in poorer clustering accuracy. While there could be other explanations for the observed results, our small experiment has successfully highlighted the importance of PCA. Not only does it reduce the complexity of running KMeans, it also enhances results at times.\nReferences\n[1] D. Sculley. Web-scale k-means clustering. In Proceedings of the 19th international conference on World wide web, pages 1177–1178. ACM, 2010\n[2] Bishop, C. M., & others. (2006). Pattern recognition and machine learning. Springer New York.\n[3] Bache, K. & Lichman, M. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
chapman-cs510-2016f/cw-12-redyellow
Stack.ipynb
mit
[ "CS510_CW12_Nengyin & Kaiqin \n Examine the source code in src/stack carefully. \ni. How the Stack type is defined and how it works in detail. Explain in particular the differences between this implementation and the C implementation that you have already coded.\nNengyin:\nIt use a class Stack to define. In Stack there are public and private namespaces. In private part, it defines static varibles, like struct, depth and head which is the properties of the class. In public part, it defines all methods, like stack(), push(), pop(), and so on, which will use the private variables. However, C does not have class and unique_ptr, which free memories automatically when it is not used. It uses struct as the Node structure to store data and next pointer, and use functions.\nKaiqin:\nIn C++, Stack is defined as a class. Node is a struct in this class. Node consists of an integer value and an address pointing to the next Node. In class Stack, there are two namespaces, private and public. Private namespace contains struct Node, depth (how many Nodes are there in class Stack), and head (which points to the first Node). Public namespace defines size() const, functions void push(), SValue pop(), void print() const, and bool full().\nii. Explain the difference between a class and a struct\nStructs have default public members and bases and classes have default private members and bases. However, when both private and public are explicitly declared, there is no difference between a class and a struct. Both classes and structs can have a mixture of public and private members, can use inheritance and can have member functions. Classes create a namespace that also encapsulates the functions for manipulating its data elements. Classes may not be used when interfacing with C, because C does not have a concept of classes.\n//From: http://stackoverflow.com/questions/54585/when-should-you-use-a-class-vs-a-struct-in-c", "//***Do like your comments in stack.h so copied here for future review and study:***\n\n// In C++ a class is just a fancy struct\n// Both struct and class have two internal namespaces:\n// private: only accessible by the struct/class itself\n// public: accessible by other code that is using the struct/class\n//\n// When both private and public are explicitly declared, there is no\n// difference between a class and a struct\n//\n// However, when neither private nor public are declared, then:\n// struct: defaults to public\n// class: defaults to private\n//\n// By convention (Google), use structs for static data, and use\n// classes for everything else\n//\n// Notably, classes can include internal function definitions (like in Python)\n// Internal functions of a class are called \"methods\"\n//\n// A class also always has a pointer to itself available, named \"this\"\n// The keyword \"this\" serves a similar purpose to \"self\" in Python\n// It allows you to access a specific \"instance\" of the class so that\n// you can manipulate it within the definitions of your methods\n//\n// Recall that C++ automatically typedefs structs/classes", "iii. Explain what private and public do\nThe private and public, also protected restrict the access to the class members.\nA private member variable or function cannot be accessed, or even viewed from outside the class. Only the class and friend functions can access private members.\nA public member is accessible from anywhere outside the class but within a program. 
You can set and get the value of public variables without any member function.\nA protected member variable or function is very similar to a private member, but it provides one additional benefit: it can also be accessed in child classes, which are called derived classes.\n//From: https://www.tutorialspoint.com/cplusplus/cpp_class_access_modifiers.htm", "//*Quote from comments:*\n // This structure type is private to the class, and used as a form of\n // linked list in order to contain the actual (static) data stored by the Stack class", "iv. Explain what size_t is used for\nIt is a type that can represent the size of any object in bytes: size_t is the type returned by the sizeof operator and is widely used in the standard library to represent sizes and counts.\n//From: http://www.cplusplus.com/reference/cstring/size_t/", "//*Quote from comments:*\n // Size method\n // Specifying const tells the compiler that the method will not change the\n // internal state of the instance of the class", "v. Explain why this code avoids the use of C pointers\nFirst, raw pointers must under no circumstances own memory. That means that if you allocate memory through a raw pointer, you have to remember to delete it yourself after use. \nSecond, most uses of pointers in C++ are unnecessary. C++ has very strong support for value semantics: you can use smart pointers, container classes, design patterns like RAII, etc., instead of raw pointers.\nIn computer science, a smart pointer is an abstract data type that simulates a pointer while providing additional features, such as automatic garbage collection or bounds checking. These additional features are intended to reduce bugs caused by the misuse of pointers while retaining efficiency. Smart pointers typically keep track of the objects they point to for the purpose of memory management.\nThe misuse of pointers is a major source of bugs: the constant allocation, deallocation and referencing that must be performed by a program written using pointers introduces the risk that memory leaks will occur. Smart pointers try to prevent memory leaks by making the resource deallocation automatic: when the pointer (or the last in a series of pointers) to an object is destroyed, for example because it goes out of scope, the pointed object is destroyed too.\n//From: http://softwareengineering.stackexchange.com/questions/56935/why-are-pointers-not-recommended-when-coding-with-c\nvi. Explain what new and delete do in C++, and how they relate to what you have done in C\n\"New\" creates a pointer to an allocated memory block. \"Delete\" deallocates the memory that is allocated by \"new\".\nIt works differently from the way in C:\nAllocate memory: <br>\nC++: Node *n = new Node(); <br>\nC : Node *n = (Node *)calloc(1, sizeof(Node));\nDeallocate memory: <br>\nC++: delete n; <br>\nC : free(n);\nvii. Explain what a memory leak is, and what you should do to avoid it\nA memory leak occurs when allocated memory is never freed, which can eventually exhaust system memory. When a program needs to store some temporary information during execution, it can dynamically request a chunk of memory from the system. However, the system has a fixed amount of total memory available. 
If one application uses up all of the system’s free memory, then other applications will not be able to obtain the memory that they require.\n//From: https://msdn.microsoft.com/en-us/library/ms859408.aspx\nThere are three ways to avoid memory leaks: <br>\n1. free (C) or delete (C++) the memory you allocated once you are done using it; <br>\n2. use smart pointers (C++) or another \"garbage collector\" to deallocate memory automatically when it is no longer used; <br>\n3. use fewer raw pointers where possible. \nviii. Explain what a unique_ptr is and how it relates to both new and C pointers\nstd::unique_ptr is a smart pointer that owns and manages another object through a pointer and disposes of that object when the unique_ptr goes out of scope.\nThe object is disposed of using the associated deleter when either of the following happens: <br>\nthe managing unique_ptr object is destroyed <br>\nthe managing unique_ptr object is assigned another pointer via operator= or reset().\nHere the code uses \"new Node()\" to allocate a new Node, owned by new_node_ptr, whose type is std::unique_ptr. It behaves like a pointer but deallocates the memory automatically when it is no longer needed.\n//From: http://en.cppreference.com/w/cpp/memory/unique_ptr", "//*Quote from comments:*\n // However, by using the \"unique_ptr\" type above, we carefully avoid any\n // explicit memory allocation by using the allocation pre-defined inside the\n // unique_ptr itself. By using memory-safe structures in this way, we are using\n // the \"Rule of Zero\" and simplifying our life by defining ZERO of them:\n // https://rmf.io/cxx11/rule-of-zero/\n // http://www.cplusplus.com/reference/memory/unique_ptr/", "ix. Explain what a list initializer does\nA constructor is a special non-static member function of a class that is used to initialize objects of its class type.\nIn the definition of a constructor of a class, the member initializer list specifies the initializers for direct and virtual base subobjects and non-static data members.\nThe order of member initializers in the list is irrelevant: the actual order of initialization is as follows:\n1) If the constructor is for the most-derived class, virtual base classes are initialized in the order in which they appear in depth-first left-to-right traversal of the base class declarations (left-to-right refers to the appearance in base-specifier lists). <br>\n2) Then, direct base classes are initialized in left-to-right order as they appear in this class's base-specifier list. <br>\n3) Then, non-static data members are initialized in order of declaration in the class definition. <br>\n4) Finally, the body of the constructor is executed.\n//From:http://en.cppreference.com/w/cpp/language/initializer_list", "//*Quote from comments*\n\n// Implementation of default constructor\nStack::Stack()\n : depth(0) // internal depth is 0\n , head(nullptr) // internal linked list is null to start\n{};\n// The construction \": var1(val1), var2(val2) {}\" is called a\n// \"list initializer\" for a constructor, and is the preferred\n// way of setting default field values for a class instance\n// Here 0 is the default value for Stack::depth\n// and nullptr is the default value for Stack::head", "x. Explain what the \"Rule of Zero\" is, and how it relates to the \"Rule of Three\"\nRule of Zero: Classes that have custom destructors, copy/move constructors or copy/move assignment operators should deal exclusively with ownership (which follows from the Single Responsibility Principle). 
Other classes should not have custom destructors, copy/move constructors or copy/move assignment operators.\nRule of Three: a class requires a user-defined destructor, a user-defined copy constructor, or a user-defined copy assignment operator. It almost certainly requires all three.\nRule of Zero does not need those three functions, but Rule of Three requires them.\n//From: http://en.cppreference.com/w/cpp/language/rule_of_three", "//*Quote from comments:*\n // Normally we would have to implement the following things in C++ here:\n // 1) Class Destructor : to deallocate memory when a Stack is deleted\n // ~Stack();\n //\n // 2) Copy Constructor : to define what Stack b(a) does when a is a Stack\n // This should create a copy b of the Stack a, but\n // should be defined appropriately to do that\n // Stack(const Stack&);\n //\n // 3) Copy Assignment : to define what b = a does when a is a Stack\n // This should create a shallow copy of the outer\n // structure of a, but leave the inner structure as\n // pointers to the memory contained in a, and should\n // be defined appropriately to do that\n // Stack& operator=(const Stack&);\n //\n // The need for defining ALL THREE of these things when managing memory for a\n // class explicitly is known as the \"Rule of Three\", and is standard\n // http://stackoverflow.com/questions/4172722/what-is-the-rule-of-three\n //\n // However, by using the \"unique_ptr\" type above, we carefully avoid any\n // explicit memory allocation by using the allocation pre-defined inside the\n // unique_ptr itself. By using memory-safe structures in this way, we are using\n // the \"Rule of Zero\" and simplifying our life by defining ZERO of them:\n // https://rmf.io/cxx11/rule-of-zero/\n // http://www.cplusplus.com/reference/memory/unique_ptr/", "xiii. Use valgrind to verify that you have no memory leaks in your working program. (You will have to edit the primary Makefile to change the CXXFLAGS to enable -g for debugging.)\nNo memory leaks.\n```\n$ valgrind --leak-check=yes test\n==13418== Memcheck, a memory error detector\n==13418== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.\n==13418== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info\n==13418== Command: test\n==13418==\n==13418==\n==13418== HEAP SUMMARY:\n==13418== in use at exit: 0 bytes in 0 blocks\n==13418== total heap usage: 523 allocs, 523 frees, 31,174 bytes allocated\n==13418==\n==13418== All heap blocks were freed -- no leaks are possible\n==13418==\n==13418== For counts of detected and suppressed errors, rerun with: -v\n==13418== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 1 from 1)\n```" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ernestyalumni/MLgrabbag
theano_ML.ipynb
mit
[ "%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport sklearn\nfrom sklearn import datasets\n\nimport pandas as pd\n\nimport theano", "I accomplished the above by running this command at the command prompt: \nTHEANO_FLAGS='mode=FAST_RUN,device=gpu,floatX=float32' jupyter notebook", "#import theano\nfrom theano import function, config, sandbox, shared \nimport theano.tensor as T\nimport numpy as np\nimport scipy\nimport time", "More theano setup in jupyter notebook boilerplate", "print( theano.config.device )\nprint( theano.config.lib.cnmem) # cf. http://deeplearning.net/software/theano/library/config.html\nprint( theano.config.print_active_device)# Print active device at when the GPU device is initialized.\n\nimport os, sys\nos.getcwd()\nos.listdir( os.getcwd() ) \n\n%run gpu_test.py THEANO_FLAGS='mode=FAST_RUN,device=gpu,floatX=float32,lib.cnmem=0.85' # note lib.cnmem option for CnMem", "sample data boilerplate", "# Load the diabetes dataset\ndiabetes = sklearn.datasets.load_diabetes()\n\ndiabetes_X = diabetes.data\ndiabetes_Y = diabetes.target\n\n#diabetes_X1 = diabetes_X[:,np.newaxis,2]\ndiabetes_X1 = diabetes_X[:,np.newaxis, 2].astype(theano.config.floatX)\n#diabetes_Y = diabetes_Y.reshape( diabetes_Y.shape[0], 1)\ndiabetes_Y = diabetes_Y.astype(theano.config.floatX)", "Linear regression\ncf. Linear Regression In Theano\n1_linear_regression.py from github Newmu/Theano-Tutorials\nTrain on $m$ number of input data points", "m_lin = diabetes_X1.shape[0]", "input, output variables $x$, $y$ for Theano", "#x1 = T.vector('x1') # X1, input data, with only 1 feature, i.e. X \\in \\mathbb{R}^N, d=1 \n#ylin = T.vector('ylin') # target variable for linear regression, so that Y \\in \\mathbb{R}\n\nx1 = T.scalar('x1') # X1, input data, with only 1 feature, i.e. X \\in \\mathbb{R}^N, d=1 \nylin = T.scalar('ylin') # target variable for linear regression, so that Y \\in \\mathbb{R}", "Parameters (for a linear slope)\n$$ \n(\\theta^0, \\theta^1) \\in \\mathbb{R}^2 \n$$", "thet0_init_val = np.random.randn()\nthet1_init_val = np.random.randn()\n\nthet0 = theano.shared( value=thet0_init_val, name='thet0', borrow=True) # \\theta^0\nthet1 = theano.shared( thet1_init_val, name='thet1', borrow=True) # \\theta^1\n", "hypothesis function $h_{\\theta}$\n$$ \nh_{\\theta}(x) = \\theta_1 x + \\theta_0\n$$", "#h_thet = T.dot( thet1, x1) + thet0\n# whereas, Newmu uses\nh_thet = thet1 * x1 + thet0", "Cost function $J(\\theta)$", "# roshansanthosh uses \n#Jthet = T.sum( T.pow(h_thet-ylin,2))/(2*m_lin)\n\n# whereas, Newmu uses\n# Jthet = T.mean( T.sqr( thet_1*x1 + thet_0 - ylin ))\n\nJthet = T.mean( T.pow( h_thet-ylin,2))/2\n#Jthet = sandbox.cuda.basic_ops.gpu_from_host( T.mean( \n# sandbox.cuda.basic_ops.gpu_from_host( T.pow( h_thet-ylin,2))))/2", "$$\n\\text{grad}{\\theta}J(\\theta) = ( \\text{grad}{\\theta^0} J , \\text{grad}_{\\theta^1} J ) \n$$", "grad_thet0 = T.grad(Jthet, thet0)\ngrad_thet1 = T.grad(Jthet, thet1)\n\n\n# so-called \"learning rate\"\ngamma = 0.01", "Note that \"updates (iterable over pairs (shared_variable, new_expression) List, tuple or dict.) – expressions for new SharedVariable values\" cf. 
Theano doc", "train_lin = theano.function(inputs = [x1,ylin], outputs=Jthet, \n updates=[[thet1,thet1-gamma*grad_thet1],[thet0,thet0-gamma*grad_thet0]])\n\n\n\ntest_lin = theano.function([x1],h_thet)\n\n#X1_lin_in = shared( diabetes_X1 ,'float32')\n#Y_lin_out = shared( diabetes_Y, 'float32')\n\ntraining_steps = 1000 # 10000\n\nsh_diabetes_X1 = shared( diabetes_X1 , borrow=True)\nsh_diabetes_Y = shared( diabetes_Y, borrow=True)\n\n\"\"\"\nfor i in range(training_steps):\n for x,y in zip( diabetes_X1, diabetes_Y):\n Jthet_val = train_lin( x, y )\n \"\"\"\n\nfor i in range(training_steps):\n# for x,y in zip( sh_diabetes_X1, sh_diabetes_Y) :\n# Jthet_val = train_lin( x,y)\n Jthet_val = train_lin( sh_diabetes_X1, sh_diabetes_Y)\n\nprint(Jthet_val)\n\nprint( thet0.get_value() ); print( thet1.get_value() )\n\n\n\ntest_lin_out = np.array( [ test_lin( x ) for x in diabetes_X1 ] ) \n\nplt.plot(diabetes_X1,diabetes_Y,'ro')\nplt.plot(diabetes_X1,test_lin_out)\n\nif any([x.op.__class__.__name__ in ['GpuGemm','GpuGemv'] for x in train_lin.maker.fgraph.toposort()]):\n print(\"Used the gpu\")\nelse:\n print(train_lin.maker.fgraph.toposort())\n\nif np.any([isinstance(x.op,T.Elemwise) for x in train_lin.maker.fgraph.toposort()]):\n print(\"Used the cpu\")", "Linear Algebra and theano\ncf. Week 1, Linear Algebra Review, Coursera, Machine Learning with Ng\nI'll take this opportunity to provide a dictionary between the syntax of linear algebra math and numpy. \nEssentially, what I did was take Coursera's Week 1, Linear Algebra Review and then translated the math into theano, and in particular, running theano on the GPU.\nOther reference that I used was \nhttps://simplyml.com/linear-algebra-shootout-numpy-vs-theano-vs-tensorflow-2/\nLinear Algebra Shootout: NumPy vs. Theano vs. TensorFlow by Charanpal Dhanjal - 14/07/16 \nMatrix addition\ncf. Coursera, Intro. to Machine Learning, Linear Algebra Review, Addition and Scalar Multiplication", "A = T.matrix('A')\nB = T.matrix('B')\n#matadd = function([A,B], A+B)\n#matadd = function([A,B],sandbox.cuda.basic_ops.gpu_from_host(A+B) )\n# Note: we are just defining the expressions, nothing is evaluated here! \nC = sandbox.cuda.basic_ops.gpu_from_host(A+B)\nmatadd = function([A,B], C)\n\n#A = T.dmatrix('A')\n#B = T.dmatrix('B')\n\nA = T.matrix('A')\nB = T.matrix('B')\n\nC_out = A + B\nmatadd_CPU = function([A,B], C_out)\n\nA_eg = shared( np.array([[8,6,9],[10,1,10]]), 'float32')\nB_eg = shared( np.array([[3,10,2],[6,1,-1]]), 'float32')\n\n\nA_eg_CPU = np.array([[8,6,9],[10,1,10]])\nB_eg_CPU = np.array([[3,10,2],[6,1,-1]])\n\nprint(A_eg_CPU)\nprint( type( A_eg_CPU ))\nprint( A_eg_CPU.shape)\nprint( B_eg_CPU.shape)\n\nprint( matadd.maker.fgraph.toposort() )\n\nprint( matadd_CPU.maker.fgraph.toposort() )\n\nmatadd( A_eg, B_eg)", "The way to do it, to \"force\" on the GPU, is like this (cf. 
Speeding up your Neural Network with Theano and the GPU - Wild ML):", "np.random.randn( *A_eg_CPU.shape )\n\nC_out = theano.shared( np.random.randn( *A_eg_CPU.shape).astype('float32') )\n\nC_out.type()\n\n#A_in = shared( A_eg_CPU, \"float32\")\n#A_in = shared( A_eg_CPU, \"float32\")\n\nA_in = shared( A_eg_CPU.astype(\"float32\"), \"float32\")\nB_in = shared( B_eg_CPU.astype(\"float32\"), \"float32\")\n#C_out_GPU = A_in + B_in\nC_out_GPU = sandbox.cuda.basic_ops.gpu_from_host(A_in+B_in)\n\n\nmatadd_GPU = theano.function( [], C_out_GPU)\n\nC_out_GPU_result = matadd_GPU()\n\nC_out_GPU_result", "Notice how DIFFERENT this setup or syntax is: we have to set up tensor or matrix shared variables A_n, B_in, which are then used to define the theano function, theano.function. \"By using shared variables we ensure that they are present in the GPU memory\". cf. Linear Algebra Shootout: NumPy vs. Theano vs. TensorFlow", "print( matadd_GPU.maker.fgraph.toposort() )\n\n#if np.any([isinstance(C_out_GPU.op, tensor.Elemwise ) and \nif np.any([isinstance( C_out_GPU.op, T.Elemwise ) and \n ('Gpu' not in type( C_out_GPU.op).__name__) for x in matadd_GPU.maker.fgraph.toposort()]) :\n print('Used the cpu')\nelse:\n print('Used the gpu')\n\nmatadd_CPU( A_eg_CPU.astype(\"float32\"), B_eg_CPU.astype(\"float32\") )\n\ntype(A_eg)\n\nprint( type( numpy.asarray(rng.rand(2000)) ) )\nnumpy.asarray(rng.rand(2000)).shape", "Bottom Line: there are 2 ways of doing linear algebra on the GPU\n\nsymbolic computation with the usual arguments \n\n$$\nA + B = C \\in \\text{Mat}_{\\mathbb{R}}(M,N) \n$$ \n$ \\forall \\, A, B \\in \\text{Mat}_{\\mathbb{R}}(M,N)$", "A = T.matrix('A')\nB = T.matrix('B')\n\nC = sandbox.cuda.basic_ops.gpu_from_host( A + B ) # vs. \n# C = A + B # this will result in an output array on the host, as opposed to CudaNdarray on device\nmatadd = function([A,B], C)\n\nprint( matadd.maker.fgraph.toposort() )\n\nmatadd( A_eg_CPU.astype(\"float32\"), B_eg_CPU.astype(\"float32\") )", "with shared variables", "A_in = shared( A_eg_CPU.astype(\"float32\"), \"float32\") # initialize with the input values, A_eg_CPU, anyway\nB_in = shared( B_eg_CPU.astype(\"float32\"), \"float32\") # initialize with the input values B_eg_CPU, anyway\n\n# C_out = A_in + B_in # this version will output to the host as a numpy.ndarray\n# indeed, reading the graph,\n\"\"\"\n[GpuElemwise{add,no_inplace}(float32, float32), HostFromGpu(GpuElemwise{add,no_inplace}.0)]\n\"\"\"\n# this version immediately below, in 1 line, will result in a CudaNdarray on device\nC_out = sandbox.cuda.basic_ops.gpu_from_host(A_in+B_in)\n\nmatadd_GPU = theano.function( [], C_out)\n\nprint( matadd_GPU.maker.fgraph.toposort() )\n\nC_out_result = matadd_GPU()\n\nC_out_result", "Scalar Multiplication (on the GPU)\ncf. 
Scalar Multiplication of Linear Algebra Review, coursera, Machine Learning Intro by Ng", "A_2 = np.array( [[4,5],[1,7] ])\n\na = T.scalar('a')\n\nF = sandbox.cuda.basic_ops.gpu_from_host( a*A )\nscalarmul = theano.function([a,A],F)\n\nprint( scalarmul.maker.fgraph.toposort() )\n\nscalarmul( np.float32( 2.), A_2.astype(\"float32\"))", "Composition; Confirming that you can do composition of scalar multiplication on a matrix (or ring) addition\nBeing able to do composition is very important in math", "scalarmul( np.float32(2.), matadd( A_eg_CPU.astype(\"float32\"), B_eg_CPU.astype(\"float32\") ) )\n\nu = T.vector('u')\nv = T.vector('v')\n\nw = sandbox.cuda.basic_ops.gpu_from_host( u + v)\nvecadd = theano.function( [u,v],w)\n\nt = sandbox.cuda.basic_ops.gpu_from_host( a * u)\nscalarmul_vec = theano.function([a,u], t)\n\n\nprint(vecadd.maker.fgraph.toposort()) \nprint(scalarmul_vec.maker.fgraph.toposort()) \n\n\nu_eg = np.array( [4,6,7], dtype=\"float32\")\nv_eg = np.array( [2,1,0], dtype=\"float32\")\n\nprint( u_eg.shape)\n\nscalarmul_vec( np.float32(0.5), u_eg )\n\nvecadd( scalarmul_vec( np.float32(0.5), u_eg ) , scalarmul_vec( np.float32(-3.), v_eg ) )", "This was the computer equivalent to mathematical expression: \n$$ \n\\left[ \\begin{matrix} 4 \\ 6 \\ 7 \\end{matrix} \\right] /2 - 3 * \\left[ \\begin{matrix} 2 \\ 1 \\ 0 \\end{matrix} \\right]\n$$ \nsAxy or A-V multiplication or so-called \"Gemv\", or Matrix Multiplication on a vector, or linear transformation on a R-module, or vector space\ni.e. \n$$\nAv = B \n$$", "B_out = sandbox.cuda.basic_ops.gpu_from_host( T.dot(A,v))\nAVmul = theano.function([A,v], B_out)\nprint(AVmul.maker.fgraph.toposort())\n\nAVmul( np.array([[1,0,3],[2,1,5],[3,1,2]]).astype(\"float32\"), np.array([1,6,2]).astype(\"float32\"))\n\nAVmul( np.array([[1,0,0],[0,1,0],[0,0,1]]).astype(\"float32\"), np.array([1,6,2]).astype(\"float32\"))", "AB or Gemm or Matrix Multiplication, i.e. Ring multiplication\ni.e. \n$$\nA*B = C\n$$", "C_f = sandbox.cuda.basic_ops.gpu_from_host( T.dot(A,B)) \nmatmul = theano.function([A,B], C_f)\nprint( matmul.maker.fgraph.toposort())\n\nmatmul( np.array( [[1,3],[2,4],[0,5]] ).astype(\"float32\"), np.array([[1,0],[2,3]]).astype(\"float32\") )", "Inverse and Transpose\ncf. 
Inverse and Transpose", "Ainverse = sandbox.cuda.basic_ops.gpu_from_host( T.inv(A))\nAinv = theano.function([A], Ainverse)\nprint(Ainv.maker.fgraph.toposort())\n\nAtranspose = sandbox.cuda.basic_ops.gpu_from_host( A.T)\nAT = theano.function([A],Atranspose)\nprint(AT.maker.fgraph.toposort())", "Summation, sum, mean, scan\nLinear Regression (again), via Coursera's Machine Learning Intro by Ng, Programming Exercise 1 for Week 2\nBoilerplate, load sample data", "linregdata = pd.read_csv('./coursera_Ng/machine-learning-ex1/ex1/ex1data1.txt', header=None)\n\nX_linreg_training = linregdata.as_matrix([0]) # pandas.DataFrame.as_matrix convert frame to its numpy-array representation\ny_linreg_training = linregdata.as_matrix([1])\nm_linreg_training = len(y_linreg_training) # number of training examples \nprint( X_linreg_training.shape, type(X_linreg_training)) \nprint( y_linreg_training.shape, type(y_linreg_training)) \nprint m_linreg_training", "Try representing $\\theta$, parameters or \"weights\", of size $|\\theta|$ which should be equal to the number of features $n$ (or $d$).", "# theta_linreg = T.vector('theta_linreg')\nd = X_linreg_training.shape[1] # d = features\n\n# Declare Theano symbolic variables\nX = T.matrix('x')\ny = T.vector('y')", "Preprocess training data (due to numpy's treatment of arrays) (note, this is not needed, if you use pandas to choose which column(s) you want to make into a numpy array)", "#X_linreg_training = X_linreg_training.reshape( m_linreg_training,1)\n#y_linreg_training = y_linreg_training.reshape( m_linreg_training,1)\n\n# Instead, the training data X and test data values y are going to be represented by Theano symbolic variable above\n#X_linreg = theano.shared(X_linreg_training.astype(\"float32\"),\"float32\")\n#y_linreg = theano.shared(y_linreg_training.astype(\"float32\"),\"float32\")\n\n#theta_0 = np.zeros( ( d+1,1)); print(theta_0)\ntheta_0 = np.zeros( d+1); print(theta_0)\n\ntheta = theano.shared( theta_0.astype(\"float32\"), \"theta\")\n\nalpha = np.float32(0.01) # learning rate gamma or alpha\n\n# Construct Theano \"expression graph\"\n\npredicted_vals = sandbox.cuda.basic_ops.gpu_from_host( T.dot(X,theta) ) # h_{\\theta}\nm = np.float32( y_linreg_training.shape[0] ) \nJ_theta = sandbox.cuda.basic_ops.gpu_from_host( \n T.dot( (T.dot(X,theta) - y).T, T.dot(X,theta) - y) * np.float32( 0.5 ) * np.float32( 1./ m ) \n ) # cost function\n\n\n\nupdate_theta = sandbox.cuda.basic_ops.gpu_from_host( \n theta - alpha * T.grad( J_theta, theta) )\n\n\ngradientDescent = theano.function( \n inputs=[X,y],\n outputs=[predicted_vals,J_theta], \n updates=[(theta, update_theta)], \n name = \"gradientDescent\")\n\n\nprint( gradientDescent.maker.fgraph.toposort() )\n\nnum_iters = 1500\nJ_History = []", "Preprocess X to include intercepts", "input_X_linreg = np.hstack( ( np.ones((m_linreg_training,1)), X_linreg_training ) ).astype(\"float32\")\n\ny_linreg_training_processed = y_linreg_training.reshape( m_linreg_training,).astype(\"float32\")\n\nJ_History = [0 for iter in range(num_iters)]\nfor iter in range(num_iters):\n predicted_vals_out, J_out = \\\n gradientDescent(input_X_linreg.astype(\"float32\"), y_linreg_training_processed.astype(\"float32\") ) \n J_History[iter] = J_out\n\nDeg = (np.random.randn(40,10).astype(\"float32\"), np.random.randint(size=40,low=0,high=2).astype(\"float32\") )\n\nDeg[0].shape\n\nDeg[1].shape\n\ntheta.get_value()\n\ndir( J_History[0] )\n\nJ_History[-5].gpudata\n\nplt.plot( [ele.gpudata for ele in J_History])", "Denny Britz's way: 
\nhttp://www.wildml.com/2015/09/speeding-up-your-neural-network-with-theano-and-the-gpu/\nSpeeding up your Neural Network with Theano and the GPU\nand his jupyter notebook\nhttps://github.com/dennybritz/nn-theano/blob/master/nn-theano-gpu.ipynb\n nn-theano/nn-theano-gpu.ipynb", "input_X_linreg.shape\n\n# GPU NOTE: Conversion to float32 to store them on the GPU!\nX = theano.shared( input_X_linreg.astype('float32'), name='X' )\ny = theano.shared( y_linreg_training.astype('float32'), name='y')\n\n# GPU NOTE: Conversion to float32 to store them on the GPU! \ntheta = theano.shared( np.vstack(theta_0).astype(\"float32\"), name='theta')\n\n# Construct Theano \"expression graph\"\n\npredicted_vals = sandbox.cuda.basic_ops.gpu_from_host( \n T.dot(X,theta) ) # h_{\\theta}\nm = np.float32( y_linreg_training.shape[0] )\n# cost function J_theta, J_{\\theta}\nJ_theta = sandbox.cuda.basic_ops.gpu_from_host( \n (\n T.dot( (T.dot(X,theta) - y).T, T.dot(X,theta) - y) * np.float32(0.5) * np.float32( 1./m) \n ).reshape([]) ) # cost function # reshape is to force \"broadcast\" into 0-dim. scalar for cost function\n \n\n\nupdate_theta = sandbox.cuda.basic_ops.gpu_from_host( \n theta - alpha * T.grad( J_theta, theta) )\n\n# Note that we removed the input values because we will always use the same shared variable\n# GPU Note: Removed the input values to avoid copying data to the GPU.\ngradientDescent = theano.function( \n inputs=[],\n# outputs=[predicted_vals,J_theta], \n updates=[(theta, update_theta)], \n name = \"gradientDescent\")\n\n\nprint( gradientDescent.maker.fgraph.toposort() )\n\n#J_History = [0 for iter in range(num_iters)]\nfor iter in range(num_iters):\n gradientDescent( ) \n \n\nprint( np.vstack( theta_0).shape )\nprint( y_linreg_training.shape )\n\ntheta.get_value()\n\n# Profiling\nprint( theano.config.profile ) # Do the vm/cvm linkers profile the execution time of Theano functions?\nprint( theano.config.profile_memory ) # Do the vm/cvm linkers profile the memory usage of Theano functions? 
It only works when profile=True.\n\ntheano.printing.debugprint(gradientDescent)\n\n#print( gradientDescent.profile.print_summary() )\ndir( gradientDescent.profile)", "Testing the Linear Regression with (Batch) Gradient Descent classes in ./ML/", "import sys\nimport os\n\n#sys.path.append( os.getcwd() + '/ML')\nsys.path.append( os.getcwd() + '/ML' )\n\nfrom linreg_gradDes import LinearReg, LinearReg_loaded\n#from ML import LinearReg, LinearReg_loaded", "Boilerplate for sample input data", "linregdata1 = pd.read_csv('./coursera_Ng/machine-learning-ex1/ex1/ex1data1.txt', header=None)\nlinregdata1.as_matrix([0]).shape\nlinregdata1.as_matrix([1]).shape\n\nfeatures = linregdata1.as_matrix([0]).shape[1]\nnumberoftraining = linregdata1.as_matrix([0]).shape[0]\nLinReg_housing = LinearReg( features, numberoftraining , 0.01)\n\nXin = LinReg_housing.preprocess_X( linregdata1.as_matrix([0]))\nytest = linregdata1.as_matrix([1]).flatten()\n\n%time LinReg_housing.build_model( Xin, ytest )\n\nLinRegloaded_housing = LinearReg_loaded( linregdata1.as_matrix([0]), linregdata1.as_matrix([1]), \n features, numberoftraining )\n\n%time LinRegloaded_housing.build_model()\n\nprint( LinReg_housing.gradientDescent.maker.fgraph.toposort() )\nprint( LinRegloaded_housing.gradientDescent.maker.fgraph.toposort() )\n", "Other (sample) datasets\nConsider feature normalization", "def featureNormalize(X):\n \"\"\"\n FEATURENORMALIZE Normalizes the features in X \n FEATURENORMALIZE(X) returns a normalized version of X where \n the mean value of each feature is 0 and the standard deviation \n is 1. This is often a good preprocessing step to do when \n working with learning algorithms.\n \n \"\"\"\n # You need to set these values correctly \n X_norm = (X-X.mean(axis=0))/X.std(axis=0)\n mu = X.mean(axis=0)\n sigma = X.std(axis=0)\n \n return [X_norm, mu, sigma]\n\nlinregdata2 = pd.read_csv('./coursera_Ng/machine-learning-ex1/ex1/ex1data2.txt', header=None)\n\n\nfeatures = linregdata2.as_matrix().shape[1] - 1\nnumberoftraining = linregdata2.as_matrix().shape[0]\nXdat = linregdata2.as_matrix( range(features) )\nytest = linregdata2.as_matrix( [features])\n\n[Xnorm, mus,sigmas] = featureNormalize(Xdat)\n\nLinReg_housing2 = LinearReg( features, numberoftraining, 0.01)\nprocessed_X = LinReg_housing2.preprocess_X( Xnorm )\n\n%time LinReg_housing2.build_model( processed_X, ytest.flatten(), 400)\n\nLinRegloaded_housing2 = LinearReg_loaded( Xnorm, ytest, \n features, numberoftraining )\n\n%time LinRegloaded_housing2.build_model( 400)", "Diabetes data from sklearn, sci-kit learn", "# Load the diabetes dataset\ndiabetes = sklearn.datasets.load_diabetes()\n\ndiabetes_X = diabetes.data\ndiabetes_Y = diabetes.target\n\n#diabetes_X1 = diabetes_X[:,np.newaxis,2]\ndiabetes_X1 = diabetes_X[:,np.newaxis, 2].astype(theano.config.floatX)\n#diabetes_Y = diabetes_Y.reshape( diabetes_Y.shape[0], 1)\ndiabetes_Y = np.vstack( diabetes_Y.astype(theano.config.floatX) )\n\nfeatures1 = 1 \nnumberoftraining = diabetes_Y.shape[0]\n\nLinReg_diabetes = LinearReg( features1, numberoftraining, 0.01)\n\n\nprocessed_X = LinReg_diabetes.preprocess_X( diabetes_X1 )\n\n%time LinReg_diabetes.build_model( processed_X, diabetes_Y.flatten(), 10000)\n\nLinRegloaded_diabetes = LinearReg_loaded( diabetes_X1, diabetes_Y, \n features1, numberoftraining )\n\n%time LinRegloaded_diabetes.build_model( 10000)", "Multiple number of features case:", "features = diabetes_X.shape[1]\n\n\nLinReg_diabetes = LinearReg( features, numberoftraining, 0.01)\nprocessed_X = 
LinReg_diabetes.preprocess_X( diabetes_X )\n\n%time LinReg_diabetes.build_model( processed_X, diabetes_Y.flatten(), 10000)\n\nLinRegloaded_diabetes = LinearReg_loaded( diabetes_X, diabetes_Y, \n features, numberoftraining )\n\n%time LinRegloaded_diabetes.build_model( 10000)", "ex2 Linear Regression, on d=2 features", "data_ex1data2 = pd.read_csv('./coursera_Ng/machine-learning-ex1/ex1/ex1data2.txt', header=None)\nX_ex1data2 = data_ex1data2.iloc[:,0:2]\ny_ex1data2 = data_ex1data2.iloc[:,2]\nm_ex1data2 = y_ex1data2.shape[0]\nX_ex1data2=X_ex1data2.values.astype(np.float32)\ny_ex1data2=y_ex1data2.values.reshape((m_ex1data2,1)).astype(np.float32)\nprint(type(X_ex1data2))\nprint(type(y_ex1data2))\nprint(X_ex1data2.shape)\nprint(y_ex1data2.shape)\nprint(m_ex1data2)\nprint(X_ex1data2[:5])\nprint(y_ex1data2[:5])\n\n((X_ex1data2[:,1] - X_ex1data2[:,1].mean())/( X_ex1data2[:,1].std()) ).std()\n\n# feature Normalize\n#X_ex1data2_norm = sklearn.preprocessing.Normalizer.transform(X_ex1data2 )\nX_ex1data2_norm = (X_ex1data2 - np.mean(X_ex1data2, axis=0)) / np.std(X_ex1data2, axis=0)\nprint(X_ex1data2_norm[:,0].mean())\nprint(X_ex1data2_norm[:,0].std())\nprint(X_ex1data2_norm[:,1].mean())\nprint(X_ex1data2_norm[:,1].std())\n\n# X_ex1data2_norm[:5];\n\nX=T.matrix(dtype=theano.config.floatX)\ny=T.matrix(dtype=theano.config.floatX)\n\nTheta=theano.shared(np.zeros((2,1)).astype(theano.config.floatX))\nb = theano.shared(np.zeros(1).astype(theano.config.floatX))\n\nprint(b.get_value().shape)\n\nyhat = T.dot( X, Theta) + b\n\n# L2 norm\nJ = np.cast[theano.config.floatX](0.5)*T.mean( T.sqr( yhat-y))\n\nalpha=0.01 # learning rate\n# sandbox.cuda.basic_ops.gpu_from_host\nupdateThetab = [ Theta-np.float32(alpha)*T.grad(J,Theta), b-np.float32(alpha)*T.grad(J,b)]\ngradientDescent_step = theano.function(inputs=[X,y], \n outputs=J,\n updates = zip([Theta,b],updateThetab) )\n\n\nnum_iters =400\nJList=[]\nfor iter in range(num_iters):\n err = gradientDescent_step(X_ex1data2_norm,y_ex1data2)\n JList.append(err)\n\n# Final mode:\nprint(Theta.get_value())\nprint(b.get_value())\n\n# JList[-10:]\nplt.plot(JList)\nplt.show()", "Multi-class Classification\ncf. ex3, Programming Exercise 3: Multi-class Classification and Neural Networks, Machine Learning\n1 Multi-class Classification", "os.getcwd()\n\nos.listdir( './coursera_Ng/machine-learning-ex3/' )\n\nos.listdir( './coursera_Ng/machine-learning-ex3/ex3' )\n\n# Load saved matrices from file \nmulticlscls_data = scipy.io.loadmat('./coursera_Ng/machine-learning-ex3/ex3/ex3data1.mat')", "import the classes from ML", "import sys\nimport os\n\nos.getcwd()\n\n#sys.path.append( os.getcwd() + '/ML')\nsys.path.append( os.getcwd() + '/ML' )\n\nfrom gradDes import LogReg\n\n# Test case for Cost function J_{\\theta} with regularization\n\ntheta_t = np.vstack( np.array( [-2, -1, 1, 2]) )\nX_t = np.array( [i/10. for i in range(1,16)]).reshape((3,5)).T\n#X_t = np.hstack( ( np.ones((5,1)), X_t) ) # no need to preprocess the input data X with column of 1's\ny_t = np.vstack( np.array( [1,0,1,0,1]))\n\n\nMulClsCls_digits = LogReg( X_t, y_t, 3,5,0.01, 3. 
)\n\nMulClsCls_digits.calculate_cost()\n\nMulClsCls_digits.z.get_value()\n\nprint( MulClsCls_digits.X.get_value() )\nMulClsCls_digits.y.get_value()\n\ncalc_z_test = theano.function([], MulClsCls_digits.z)\n\ncalc_z_test()\n\nMulClsCls_digits.theta.set_value( theta_t.astype('float32') )\n\ncalc_z_test()\n\nMulClsCls_digits.calculate_cost()\n\nprint( 1/(1+np.exp( np.dot( -np.hstack( ( np.ones((5,1)), X_t) ), theta_t) ) ) )\nh_test = 1/(1+np.exp( np.dot( -np.hstack( ( np.ones((5,1)), X_t) ), theta_t) ) ) \nprint( np.dot( (h_test - y_t).T, h_test- y_t) * 0.5/5 ) # non-regularized J_theta cost term\nnp.dot( theta_t[1:].T, theta_t[1:]) * 3 / (2.* 5)\n\n\nMulClsCls_digits.predict()\n\nMulClsCls_digit\n\ntheano.config.floatX", "Neural Networks\nModel representation\ncf. 2 Neural Networks, 2.1 Model representation, ex3.pdf", "os.getcwd()\n\nos.listdir( './coursera_Ng/machine-learning-ex3/' )\n\nos.listdir( './coursera_Ng/machine-learning-ex3/ex3/' )", "$ \\Theta_1, \\Theta_2 $", "# Load saved matrices from file \nnn3_data = scipy.io.loadmat('./coursera_Ng/machine-learning-ex3/ex3/ex3weights.mat')\n\nprint( nn3_data.keys() )\nprint( type( nn3_data['Theta1']) )\nprint( type( nn3_data['Theta2']) )\nprint( nn3_data['Theta1'].shape )\nprint( nn3_data['Theta2'].shape )\n\nTheta1[0]", "Feedforward", "%load_ext tikzmagic", "$$\n\\begin{tikzpicture}\n \\matrix (m) [matrix of math nodes, row sep=3em, column sep=4em, minimum width=2em]\n {\n \\mathbb{R}^{s_l} & \\mathbb{R}^{ s_l +1 } & \\mathbb{R}^{s_{l+1} } & \\mathbb{R}^{s_{l+1} } \\\na^{(l)} & (a_0^{(l)} = 1, a^{(l)} ) & z^{(l+1)} & g(z^{(l+1)}) = a^{(l+1)} \\\n };\n \\path[->]\n (m-1-1) edge node [above] {$a_0^{(l)}=1$} (m-1-2)\n (m-1-2) edge node [above] {$\\Theta^{(l)}$} (m-1-3)\n (m-1-3) edge node [above] {$g$} (m-1-4) \n ;\n \\path[|->]\n (m-2-1) edge node [above] {$a_0^{(l)}=1$} (m-2-2)\n (m-2-2) edge node [above] {$\\Theta^{(l)}$} (m-2-3)\n (m-2-3) edge node [above] {$g$} (m-2-4) \n ;\n\\end{tikzpicture}\n$$", "np.random.seed(0)\ns_l = 400 # (layer) size of layer l, i.e. number of nodes, units in layer l\ns_lp1 = 25\nal = theano.shared( np.random.randn(s_l+1,1).astype('float32'), name=\"al\")\n#alp1 = theano.shared( np.random.randn(s_lp1,1).astype('float32'), name=\"al\")\n#Thetal = theano.shared( np.random.randn( s_lp1,s_l+1).astype('float32') , name=\"Thetal\")\n\n# Feedforward, forward propagation\n#z = T.dot( Thetal, al)\n#g = T.nnet.sigmoid( z)\n\n\ns_l = 25\ns_lp1 = 10\n\n\n\n\nrng = np.random.RandomState(99)\nTheta_values = np.asarray( rng.uniform( \n low=-np.sqrt( 6. 
/ (s_l+ s_lp1)), \n high=np.sqrt( 6./(s_l + s_lp1)), size=(s_lp1,s_l+1)), dtype=theano.config.floatX )\nprint( Theta_values.shape )\nprint( Theta_values.dtype )\n#Theta_values *= np.float32(4)\nTheta_values *= 4.\n\nprint( Theta_values.dtype)\nTheta_values.shape\n\nnp.float32( 4)", "From Deep Learning Tutorials of LISA lab of University of Montreal; logistic_sgd.py, mlp.py", "%env\n\nos.getcwd()\n\nprint( sys.path )\n\n#sys.path.append( os.getcwd() + '/ML')\nsys.path.append( '../DeepLearningTutorials/code/' )\n\n#from logistic_sgd import LogisticRegression, load_data, sgd_optimization_mnist, predict\nimport logistic_sgd \n\nMNIST_MTLdat = logistic_sgd.load_data(\"../DeepLearningTutorials/data/mnist.pkl.gz\") # list of training data\n\nprint(len(MNIST_MTLdat))\nprint(type(MNIST_MTLdat))\nfor ele in MNIST_MTLdat: print type(ele), len(ele) # test_set_x, test_set_y, valid_set_x, valid_set_y, train_set_x, \n\nprint( MNIST_MTLdat[0][0].get_value().shape)\nprint( type(MNIST_MTLdat[0][1]))\nprint( MNIST_MTLdat[0][1].get_scalar_constant_value )\n\nprint( type( MNIST_MTLdat[1][1] ) )\nMNIST_MTLdat[1][1].shape\n\ndir(MNIST_MTLdat[0][1]) ;\n\nimport gzip\nimport six.moves.cPickle as pickle\nwith gzip.open(\"../DeepLearningTutorials/data/mnist.pkl.gz\", 'rb') as f:\n try:\n train_set, valid_set, test_set = pickle.load(f, encoding='latin1')\n except:\n train_set, valid_set, test_set = pickle.load(f)\n\nprint( type( train_set[0] ))\nprint( train_set[0].shape )\nprint( type( train_set[1]))\nprint( train_set[1].shape )\nprint( type( valid_set[0] ))\nprint( valid_set[0].shape )\nprint( type( valid_set[1]))\nprint( valid_set[1].shape )\nprint( type( test_set[0] ))\nprint( test_set[0].shape )\nprint( type( test_set[1]))\nprint( test_set[1].shape )\n\n\nX = train_set[0].T\n\npd.DataFrame(X.T).describe()\n\n28*28\n\nX_i = theano.shared( X.astype(\"float32\"))\n\nm = X_i.get_value().shape[1]\n\na1 = T.stack( [ theano.shared( np.ones((1,m)).astype(\"float32\") ) , X_i ] , axis=1 )\n\nprint( type(a1) )\n#print( a1.get_scalar_constant_value() )\ndir(a1)\na1.get_parents()\n\na1.ndim\n\na1_0 = theano.shared( np.ones((1,m)).astype(\"float32\"),name='a1_0')\n\n\na1 = T.stack( [a1_0,X_i], axis=0)\n\nd = X_i.get_value().shape[0]\ns_2 = d/2\nrng1 = np.random.RandomState(1234)\nTheta1_values = np.asarray( rng1.uniform( low=-np.sqrt(6./(d+s_2)),high=np.sqrt(6./(d+s_2)),size=(s_2,d+1)),\n dtype=theano.config.floatX)\nTheta1 = theano.shared(value=Theta1_values, name=\"Theta\",borrow=True)\n\n#rng1.uniform( low=-np.sqrt(6./(d+s_2)),high=np.sqrt(6./(d+s_2)),size=(s_2,d+1))\nz1 = T.dot( Theta1, a1)\na2 = T.tanh(z1)\n\npassthru1 = theano.function( [], a2)\n\nprint(d)\npassthru1()\n\nprint(X.shape)\nX_i = theano.shared( X.astype(\"float32\"))\n#m = X_i.get_value().shape[1]\nm = X.shape[1]\nprint(m)\na1_0 = theano.shared( np.ones((1,m)).astype(\"float32\"),name='a1_0')\nprint(a1_0.get_value().shape)\na1 = T.stack( [a1_0,X_i], axis=0)\naddintercept = theano.function([],a1)\n\naddintercept()\n\nd = X_i.get_value().shape[0]\nprint(d)\ns_2 = d/2\nprint(s_2)\nrng1 = np.random.RandomState(1234)\nTheta1_values = np.asarray( rng1.uniform( low=-np.sqrt(6./(d+s_2)),high=np.sqrt(6./(d+s_2)),size=(s_2,d)),\n dtype=theano.config.floatX)\nTheta1 = theano.shared(value=Theta1_values, name=\"Theta1\",borrow=True)\nb_values = np.vstack( np.zeros(s_2) ).astype(theano.config.floatX)\nb1 = theano.shared(value=b_values, name='b1',borrow=True)\na1_values=np.array( np.zeros( (d,m)), dtype=theano.config.floatX)\na1 = theano.shared(value=a1_values, 
name='a1', borrow=True)\nlin_z2 = T.dot( Theta1, a1) + T.tile(b1,(1,m))\n#lin_z2 = T.dot( Theta1, a1)\n\ntest_mult = theano.function([],lin_z2)\n\nprint( type(b_values))\nb_values.dtype\n\n\ntest_mult()\n\nprint( b1.get_value().shape )\nT.tile( b1, (0,m))", "NN.py, load NN.py for Layer class for Neural Net for Multiple Layers", "import sys\nimport os\n\n#sys.path.append( os.getcwd() + '/ML')\nsys.path.append( os.getcwd() + '/ML' )\n\nfrom NN import Layer, cost_functional, cost_functional_noreg, gradientDescent_step\n", "Boilerplate sample data, from Coursera's Machine Learning Introduction", "# Load Training Data\nprint(\"Loading and Visualizing Data ... \\n\")\nex4data1 = scipy.io.loadmat('./coursera_Ng/machine-learning-ex4/ex4/ex4data1.mat')\n\nex4data1.keys()\n\nprint( ex4data1['X'].shape )\nprint( ex4data1['y'].shape )\n\ntest_rng = np.random.RandomState(1234)\n#Theta1 = Layer( test_rng, 1, 400,25, 5000)\n\n#help(Theta1.al.set_value); # Beginning with Theano 0.3.1, set_value will work in-place on the GPU, if ... source on CPU\nTheta1.al.set_value( ex4data1['X'].T.astype(theano.config.floatX))\n\nTheta1.alp1\n\nprint( type( Theta1.alp1 ) )\nTheta2 = Layer( test_rng, 2, 25,10,5000, al=Theta1.alp1 )\n\nTheta2.alp1\n\npredicted = theano.function([],sandbox.cuda.basic_ops.gpu_from_host( Theta2.alp1 ) )\n\npredicted().shape\n\nprint( ex4data1['y'].shape )\npd.DataFrame( ex4data1['y']).describe()\n\n# recall that whereas the original labels (in the variable y) were 1, 2, ..., 10, for the purpose of training a \n# neural network, we need to recode the labels as vectors containing only values 0 or 1\nK=10\nm = ex4data1['y'].shape[0]\ny_prob = [np.zeros(K) for row in ex4data1['y']] # list of 5000 numpy arrays of size dims. (10,)\nfor i in range( m):\n y_prob[i][ ex4data1['y'][i]-1] = 1\ny_prob = np.array(y_prob).T.astype(theano.config.floatX) # size dims. 
(K,m)\nprint(y_prob.shape)\n\nprint( type(y_prob) )\ntype( np.asarray( y_prob, dtype=theano.config.floatX) )\n\nhelp( T.nlinalg.trace )\n\ny_sh_var = theano.shared( np.asarray( y_prob,dtype=theano.config.floatX),name='y')\n\nh_test = Theta2.alp1\nJ = sandbox.cuda.basic_ops.gpu_from_host(\n (-T.nlinalg.trace( T.dot( T.log( h_test ), y_sh_var.T)) - T.nlinalg.trace( \n T.dot( T.log( np.float32(1.)-h_test),(np.float32(1.)- y_sh_var.T ) )))/np.float32(m)\n )\n\nprint(type(J))\ntest_cost_func = theano.function([],J)\n\ntest_cost_func()\n\nJ_test_build = sandbox.cuda.basic_ops.gpu_from_host( -T.nlinalg.trace( T.dot( T.log(h_test),y_sh_var.T) ) )\ntest_cost_build_func = theano.function([], J_test_build)\n\ntest_cost_build_func()", "Sanity check using ex4.m, Exercise 4 or Programming Exercise 4 from Coursera's Machine Learning Introduction by Ng", "Theta_testvals = scipy.io.loadmat('./coursera_Ng/machine-learning-ex4/ex4/ex4weights.mat')\n\nprint( Theta_testvals.keys() )\nprint( Theta_testvals['Theta1'].shape )\nprint( Theta_testvals['Theta2'].shape )\nTheta1_testval = Theta_testvals['Theta1'][:,1:]\nb1_testval = Theta_testvals['Theta1'][:,0:1]\nprint( Theta1_testval.shape )\nprint( b1_testval.shape )\nTheta2_testval = Theta_testvals['Theta2'][:,1:]\nb2_testval = Theta_testvals['Theta2'][:,0:1]\nprint( Theta2_testval.shape )\nprint( b2_testval.shape )\n\nTheta1 = Layer( test_rng, 1, 400,25, 5000, activation=T.nnet.sigmoid)\n\n\nTheta1.Theta.set_value( Theta1_testval.astype(\"float32\"))\nTheta1.b.set_value( b1_testval.astype('float32') )\nTheta1.al.set_value( ex4data1['X'].T.astype('float32'))", "For $\\Theta^{(2)}$, the key to connecting $\\Theta^{(2)}$ with $\\Theta^{(1)}$ is to set the argument in class Layer with al=Theta1.alp1,", "Theta2 = Layer( test_rng, 2, 25,10,5000, al=Theta1.alp1 , activation=T.nnet.sigmoid)\n\nTheta2.Theta.set_value( Theta2_testval.astype('float32'))\nTheta2.b.set_value( b2_testval.astype('float32'))\n\nh_test = Theta2.alp1\nJ = sandbox.cuda.basic_ops.gpu_from_host(\n T.mean( T.sum( \n - y_sh_var * T.log( h_test ) - ( np.float32( 1) - y_sh_var) * T.log( np.float32(1) - h_test), axis =0), axis=0)\n )\n#J = sandbox.cuda.basic_ops.gpu_from_host( \n# T.log(h_test) * y_sh_var\n# )\n\ntest_cost_func = theano.function([],J)\n\ntest_cost_func()\n\nprint(type( y_sh_var) )\nprint( y_sh_var.get_value().shape )\nprint( type( h_test ))\n\nchecklayer2 = theano.function([], sandbox.cuda.basic_ops.gpu_from_host(Theta1.alp1))\n\nchecklayer2() \n\ntestreg = theano.function([], T.sum( Theta1.Theta * Theta1.Theta ) )\n\ntestreg()\n\nrange(1,3)\n\nThetas_lst = [ Theta1.Theta, Theta2.Theta ]\n\nT.sum( [ T.sum( theta*theta) for theta in Thetas_lst] )\n\ncost_func_test = cost_functional(3, 1, y_prob, Theta2.alp1, [Theta1.Theta, Theta2.Theta])\n\ncost_test = theano.function([], cost_func_test)\n\ncost_test() # (this value should be about 0.383770)\n\ngrad_test = T.grad( cost_func_test,[Theta1.Theta, Theta2.Theta])\n\ngrad_test_test = theano.function([], grad_test)\n\nprint( type(grad_test_test() ) )\nprint( len( grad_test_test() ))\nprint( type(grad_test_test()[0] ))\nprint( grad_test_test()[0].shape )\nprint( grad_test_test()[1].shape )\n\nprint( range(6))\nprint( list( \"Ernest\") )\nzip( range(6), list(\"Ernest\"))\nprint( type(grad_test))\n\nprint( grad_test_test.maker.fgraph.toposort() )\n\n0.01 * grad_test\n\ntest_update = [(Theta,sandbox.cuda.basic_ops.gpu_from_host( Theta - np.float32(0.01)*T.grad(cost_func_test, Theta)+0.0001*Theta ) ) for Theta in [Theta1.Theta, Theta2.Theta] 
]\n\ntest_gradDes_step = theano.function( inputs=[], updates= test_update )\n\ntest_gradDes_step()\n\nprint( Theta1.Theta.get_value() )\nprint( Theta2.Theta.get_value() )\n\ntest_gradDes_step()\n\nprint( Theta1.Theta.get_value() )\nprint( Theta2.Theta.get_value() )\n\ngradDes_test_res = gradientDescent_step(cost_func_test, [Theta1.Theta, Theta2.Theta], 0.01, 0.00001 )\n\nprint( type(gradDes_test_res) )\ngradDes_step_test = gradDes_test_res[1]\n\ngradDes_step_test()\n\nprint( Theta1.Theta.get_value() )\nprint( Theta2.Theta.get_value() )\n\ngradDes_step_test()\n\nprint( Theta1.Theta.get_value() )\nprint( Theta2.Theta.get_value() )\n\ny_prob.shape\n\nex4data1['y'].shape\n\npd.DataFrame( ex4data1['y']).describe()\n\nprint( Theta2.alp1.shape )\nprint( Theta2.alp1.shape.ndim )\n# Theta2.alp1.shape.get_scalar_constant_value()\npredicted_logreg = theano.function([],Theta2.alp1)\n\npd.DataFrame( predicted_logreg().T ).describe()\n\npd.DataFrame(predicted_logreg().T).describe().iloc[1:-1,:].plot()\n\nprint( np.argmax( predicted_logreg(), axis=0).shape )\nnp.vstack( np.argmax( predicted_logreg(),axis=0) ).shape\n\npd.DataFrame( np.vstack( np.argmax(predicted_logreg(),axis=0)) + 1).describe()\n\nres = np.float32( ( np.vstack( np.argmax( predicted_logreg(),axis=0)) + 1 ) == ex4data1['y'] )\npd.DataFrame(res).describe()\n\nrange(1,3)\n\npredicted_logreg().shape\n\nprint(y_prob.shape); print( np.argmax( y_prob,axis=0 ).shape)", "Summary for Neural Net with Multiple Layers for logistic regression (but can be extended to linear regression)\n\nLoad boilerplate training data:", "sys.path.append( os.getcwd() + '/ML' )\n\nfrom NN import Layer, cost_functional, cost_functional_noreg, gradientDescent_step, MLP\n\n# Load Training Data\nprint(\"Loading and Visualizing Data ... \\n\")\nex4data1 = scipy.io.loadmat('./coursera_Ng/machine-learning-ex4/ex4/ex4data1.mat')\n\n# recall that whereas the original labels (in the variable y) were 1, 2, ..., 10, for the purpose of training a \n# neural network, we need to recode the labels as vectors containing only values 0 or 1\nK=10\nm = ex4data1['y'].shape[0]\ny_prob = [np.zeros(K) for row in ex4data1['y']] # list of 5000 numpy arrays of size dims. (10,)\nfor i in range( m):\n y_prob[i][ ex4data1['y'][i]-1] = 1\ny_prob = np.array(y_prob).T.astype(theano.config.floatX) # size dims. (K,m)\n\nprint(ex4data1['X'].T.shape)\nprint(y_prob.shape)\n\ndigitsMLP = MLP(3,[400,25,10], 5000, ex4data1['X'].T, y_prob, T.nnet.sigmoid, 1., 0.1, 0.0000)\n\ndigitsMLP.train_model(100000)\n\ndigitsMLP.accuracy_log_reg()\n\nprint( digitsMLP.Thetas[0].Theta.get_value() )\ndigitsMLP.Thetas[1].Theta.get_value()\n\n\ndigitsMLP.predicted_vals_logreg()\n\ntestL1a2 = theano.function([], digitsMLP.Thetas[0].alp1 )\nprint( testL1a2() )\ntestL2a2 = theano.function([], digitsMLP.Thetas[1].al )\nprint( testL2a2() )\n\n\n[1,2,3,4,5] + [8,1,5]\n\nprint( digitsMLP.y.shape )\ny_cls_test = np.vstack( np.argmax( digitsMLP.y, axis=0) )\nprint( y_cls_test.shape )\npd.DataFrame( y_cls_test ).describe()\n\npred_y_cls_test = np.vstack( np.argmax( digitsMLP.predicted_vals_logreg() , axis=0))\nprint( pred_y_cls_test.shape )\npd.DataFrame( pred_y_cls_test ).describe()\n\nnp.mean( pred_y_cls_test == y_cls_test )", "Testing on MNIST, from University of Montreal, Deep Learning Tutorial, data", "K=10\nm = len(train_set[1])\ny_train_prob = [np.zeros(K) for row in train_set[1]] # list of 5000 numpy arrays of size dims. 
(10,)\nfor i in range( m):\n y_train_prob[i][ train_set[1][i]] = 1\ny_train_prob = np.array(y_train_prob).T.astype(theano.config.floatX) # size dims. (K,m)\nprint( y_train_prob.shape )\n\nprint( pd.DataFrame( y_train_prob).describe() )\n\nm,d= train_set[0].shape\nMNIST_MTL = MLP(3,[d,25,10], m, train_set[0].T, y_train_prob, T.nnet.sigmoid, 1., 0.1, 0.00001)\n\nMNIST_MTL.accuracy_log_reg()\n\nprint( MNIST_MTL.Thetas[0].Theta.get_value() )\nMNIST_MTL.Thetas[1].Theta.get_value()\n\nMNIST_MTL.predicted_vals_logreg()\n\nMNIST_MTL.train_model(100000)\n\nMNIST_MTL.accuracy_log_reg()\n\nprint( MNIST_MTL.Thetas[0].Theta.get_value() )\nMNIST_MTL.Thetas[1].Theta.get_value()\n\nMNIST_MTL.predicted_vals_logreg()", "Save the mode; cf. Getting Started, DeepLearning 0.1 documentation, Loading and Saving Models", "import cPickle\n\nsave_file = open('./saved_models/MNIST_MTL_log_reg','wb')\n\nfor Thet in MNIST_MTL.Thetas:\n cPickle.dump( Thet.Theta.get_value(borrow=True), save_file,-1) # the -1 is for HIGHEST priority\n cPickle.dump( Thet.b.get_value(borrow=True), save_file,-1)\n\nsave_file.close()\n\nMNIST_MTL.Thetas[0].al.set_value( valid_set[0].T.astype(theano.config.floatX) )\n\nK=10\nm = len(valid_set[1])\ny_valid_prob = [np.zeros(K) for row in valid_set[1]] # list of 5000 numpy arrays of size dims. (10,)\nfor i in range( m):\n y_valid_prob[i][ valid_set[1][i]] = 1\ny_valid_prob = np.array(y_valid_prob).T.astype(theano.config.floatX) # size dims. (K,m)\nprint( y_valid_prob.shape )\n\nMNIST_MTL.y = y_valid_prob\n\nMNIST_MTL.predicted_vals_logreg()\n\ntheano.function([], MNIST_MTL.Thetas[0].alp1)()\n\nLayer1 = MNIST_MTL.Thetas[0]\nLayer2 = MNIST_MTL.Thetas[1]\nm = valid_set[0].shape[0]\nprint(m)\n\na2 = T.nnet.sigmoid( T.dot( Layer1.Theta, Layer1.al) + T.tile( Layer1.b, (1,m)) )\na3 = T.nnet.sigmoid( T.dot( Layer2.Theta, a2) + T.tile( Layer2.b, (1,m)) )\nvalid_pred = theano.function([], a3)()\nprint( valid_pred.shape)\n\npd.DataFrame( valid_pred.T).describe()\n\nnp.mean( np.vstack( np.argmax( valid_pred,axis=0)) == np.vstack( valid_set[1] ) )\n\nX_in = T.matrix()\n\nX_in.set_value( valid_set[0].T.astype(theano.config.floatX))\n\na2_giv = T.nnet.sigmoid( T.dot( Layer1.Theta, X_in) + T.tile(Layer1.b, (1,m)))\na3_giv = T.nnet.sigmoid( T.dot( Layer2.Theta, a2_giv) + T.tile( Layer2.b, (1,m)) )\nvalid_pred_givens = theano.function([], outputs=a3_giv, givens={ X_in: valid_set[0].T.astype(theano.config.floatX)} )\n\nprint( valid_pred_givens().shape )\npd.DataFrame( valid_pred_givens().T).describe()\n\nnp.mean( np.vstack( np.argmax( valid_pred_givens(),axis=0)) == np.vstack( valid_set[1] ) )\n\ntest_pred_givens = theano.function([], outputs=a3_giv, givens={ X_in: test_set[0].T.astype(theano.config.floatX)} )\n\nnp.mean( np.vstack( np.argmax( test_pred_givens(),axis=0)) == np.vstack( test_set[1] ) )\n\nrange(1,3)\n\nrange(3)\n\nrange(1,3-1)", "cf. Glass Classification", "gls_data = pd.read_csv( \"./kaggle/glass.csv\")\n\ngls_data.describe()\n\ngls_data.get_values().shape\n\nX_gls = gls_data.get_values()[:,:-1]\nprint(X_gls.shape)\ny_gls = gls_data.get_values()[:,-1]\nprint(y_gls.shape)\nprint( y_gls[:10])\nX_gls_train = gls_data.get_values()[:-14,:-1]\nprint(X_gls_train.shape)\ny_gls_train = gls_data.get_values()[:-14,-1]\nprint(y_gls_train.shape)\n\nK=7\nm = len(y_gls_train)\ny_gls_train_prob = [np.zeros(K) for row in y_gls_train] # list of 5000 numpy arrays of size dims. 
(10,)\nfor i in range( m):\n y_gls_train_prob[i][ y_gls_train[i]-1] = 1\ny_gls_train_prob = np.array(y_gls_train_prob).T.astype(theano.config.floatX) # size dims. (K,m)\nprint( y_gls_train_prob.shape )\n\ngls_MLP = MLP( 3, [9,8,7],200, X_gls_train.T, y_gls_train_prob, T.nnet.sigmoid, 0.01,0.05,0.0001 )\n\ngls_MLP.accuracy_log_reg()\n\ngls_MLP.train_model(10000)\n\ngls_MLP.accuracy_log_reg()\n\ngls_MLP.predicted_vals_logreg()\n\ngls_MLP.train_model(10000)\ngls_MLP.accuracy_log_reg()\n\nga\n\nX_gls_test = gls_data.get_values()[-14:,:-1]\nprint( X_gls_test.shape )\ny_gls_test = gls_data.get_values()[-14:,-1]\nprint( y_gls_test.shape)\n\ngls_predict_on_test = gls_MLP.predict_on( 14, X_gls_test.T )\n\nnp.mean( np.vstack( np.argmax( gls_predict_on_test(), axis=0) ) == (y_gls_test-1) )\n\ny_gls_test\n\nnp.vstack( np.argmax( gls_predict_on_test(), axis=0))\n\nX_sym = T.matrix()\n\nrng = np.random.RandomState(1234)\nThetab1 = Layer( rng, 1, 4,3,2, al = X_sym, activation=T.nnet.sigmoid)\n\n\nThetab1.alp1\nThetab1.Theta.get_value().shape\n\nThetab2 = Layer( rng, 2, 3,2,2, al=Thetab1.alp1, activation=T.nnet.sigmoid)\n\n\nThetab2.al = Thetab1.alp1\n\nX_sym.shape[0]\n\nT.tile( Thetab1.b, (1, X_sym.shape[0]))\n\ntest12comp = theano.function( [], outputs=Thetab2.alp1, givens={ X_sym : X42test} )\n\nX42test = np.array([1,2,3,4,5,6,7,8]).reshape((4,2)).astype(theano.config.floatX)\n\ntest12comp()\n\nX43test = np.array(range(1,13)).reshape((4,3)).astype(theano.config.floatX)\n\nX43test\n\ntest43comp = theano.function( [], outputs=Thetab2.alp1, givens={ X_sym : X43test} )\n\ntest43comp()\n\nprint( type(Thetab1.al ))\n\nlin_zlp1 = T.dot(Thetab1.Theta, Thetab1.al)+T.tile( Thetab1.b, (1,Thetab1.al.shape[1]))\na1p1 = Thetab1.g( lin_zlp1 )\n\nThetab1.al = X_sym\n\nThetab2.al = a1p1\n\nlin_z2p1 = T.dot(Thetab2.Theta, Thetab2.al)+T.tile( Thetab2.b, (1, Thetab2.al.shape[1]))\na2p1 = Thetab2.g( lin_z2p1 )\n\ntest_gen_conn = theano.function([], outputs=a2p1, givens={ Thetab1.al : X42test })\n\ntest_gen_conn()\n\ntest_gen_conn = theano.function([], outputs=a2p1, givens={ Thetab1.al : X43test })\n\ntest_gen_conn()", "GPU test", "test_gen_conn = theano.function([], outputs=sandbox.cuda.basic_ops.gpu_from_host(a2p1), givens={ Thetab1.al : X42test })\n\ntest_gen_conn()\n\ntest_gen_conn = theano.function([], outputs=sandbox.cuda.basic_ops.gpu_from_host(a2p1), givens={ Thetab1.al : X43test })\n\ntest_gen_conn()", "Summary for Neural Net with Multiple Layers for logistic regression (but can be extended to linear regression)", "sys.path.append( os.getcwd() + '/ML' )\n\nfrom NN import MLP\n\n# Load Training Data\nprint(\"Loading and Visualizing Data ... \\n\")\nex4data1 = scipy.io.loadmat('./coursera_Ng/machine-learning-ex4/ex4/ex4data1.mat')\n\n# recall that whereas the original labels (in the variable y) were 1, 2, ..., 10, for the purpose of training a \n# neural network, we need to recode the labels as vectors containing only values 0 or 1\nK=10\nm = ex4data1['y'].shape[0]\ny_prob = [np.zeros(K) for row in ex4data1['y']] # list of 5000 numpy arrays of size dims. (10,)\nfor i in range( m):\n y_prob[i][ ex4data1['y'][i]-1] = 1\ny_prob = np.array(y_prob).T.astype(theano.config.floatX) # size dims. 
(K,m)\n\nprint(ex4data1['X'].T.shape)\nprint(y_prob.shape)\n\ndigitsMLP = MLP( 3, [400,25,10], ex4data1['X'].T, y_prob, T.nnet.sigmoid, 1.)\n\ndigitsMLP.build_update(ex4data1['X'].T, y_prob, 0.01, 0.00001)\n\ndigitsMLP.predicted_vals_logreg()\n\ndigitsMLP.accuracy_logreg( ex4data1['X'].T, y_prob)\n\ndigitsMLP.train_model(10000)\n\ndigitsMLP.accuracy_logreg( ex4data1['X'].T, y_prob)\n\ndigitsMLP.train_model(50000)\n\ndigitsMLP.accuracy_logreg( ex4data1['X'].T, y_prob)", "Testing on University of Montreal LISA lab MNIST data", "import gzip\nimport six.moves.cPickle as pickle\nwith gzip.open(\"../DeepLearningTutorials/data/mnist.pkl.gz\", 'rb') as f:\n try:\n train_set, valid_set, test_set = pickle.load(f, encoding='latin1')\n except:\n train_set, valid_set, test_set = pickle.load(f)\n\nK=10\nm = len(train_set[1])\ny_train_prob = [np.zeros(K) for row in train_set[1]] # list of 5000 numpy arrays of size dims. (10,)\nfor i in range( m):\n y_train_prob[i][ train_set[1][i]] = 1\ny_train_prob = np.array(y_train_prob).T.astype(theano.config.floatX) # size dims. (K,m)\nprint( y_train_prob.shape )\n\nMNIST_MLP = MLP( 3,[784,49,10], train_set[0].T, y_train_prob, T.nnet.sigmoid, 1.)\n\nMNIST_MLP.build_update( train_set[0].T, y_train_prob, 0.01, 0.0001)\n\nMNIST_MLP.accuracy_logreg( train_set[0].T, y_train_prob)\n\nMNIST_MLP.train_model(50000)\n\nMNIST_MLP.accuracy_logreg( train_set[0].T, y_train_prob)\n\n%time MNIST_MLP.train_model(100000)\n\nMNIST_MLP.accuracy_logreg( train_set[0].T,y_train_prob)\n\nm = len(valid_set[1])\ny_valid_prob = [np.zeros(K) for row in valid_set[1]] # list of 5000 numpy arrays of size dims. (10,)\nfor i in range( m):\n y_valid_prob[i][ valid_set[1][i]] = 1\ny_valid_prob = np.array(y_valid_prob).T.astype(theano.config.floatX) # size dims. (K,m)\nprint( y_valid_prob.shape )\n\nm = len(test_set[1])\ny_test_prob = [np.zeros(K) for row in test_set[1]] # list of 5000 numpy arrays of size dims. (10,)\nfor i in range( m):\n y_test_prob[i][ test_set[1][i]] = 1\ny_test_prob = np.array(y_test_prob).T.astype(theano.config.floatX) # size dims. (K,m)\nprint( y_test_prob.shape )\n\nMNIST_MLP.accuracy_logreg( valid_set[0].T,y_valid_prob)\n\nMNIST_MLP.accuracy_logreg( test_set[0].T,y_test_prob)\n\nMNIST_d = train_set[0].T.shape[0]\nprint(MNIST_d)\nMNIST_MLP = MLP( 3,[MNIST_d,25,10], train_set[0].T, y_train_prob, T.nnet.sigmoid, 1.)\nMNIST_MLP.build_update( train_set[0].T, y_train_prob, 0.1, 0.00001)\n\nMNIST_MLP.accuracy_logreg( train_set[0].T, y_train_prob)\n\nMNIST_MLP.train_model(150000)\n\nMNIST_MLP.accuracy_logreg( train_set[0].T, y_train_prob)\n\nMNIST_MLP.accuracy_logreg( valid_set[0].T, y_valid_prob)\n\nMNIST_MLP.accuracy_logreg( test_set[0].T, y_test_prob)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
coolharsh55/advent-of-code
2016/python3/Day09.ipynb
mit
[ "Day 9: Explosives in Cyberspace\nauthor: Harshvardhan Pandit\nlicense: MIT\nlink to problem statement\nWandering around a secure area, you come across a datalink port to a new part of the network. After briefly scanning it for interesting files, you find one file in particular that catches your attention. It's compressed with an experimental format, but fortunately, the documentation for the format is nearby.\nThe format compresses a sequence of characters. Whitespace is ignored. To indicate that some sequence should be repeated, a marker is added to the file, like (10x2). To decompress this marker, take the subsequent 10 characters and repeat them 2 times. Then, continue reading the file after the repeated data. The marker itself is not included in the decompressed output.\nIf parentheses or other characters appear within the data referenced by a marker, that's okay - treat it like normal data, not a marker, and then resume looking for markers after the decompressed section.\nFor example:\n\nADVENT contains no markers and decompresses to itself with no changes, resulting in a decompressed length of 6.\nA(1x5)BC repeats only the B a total of 5 times, becoming ABBBBBC for a decompressed length of 7.\n(3x3)XYZ becomes XYZXYZXYZ for a decompressed length of 9.\nA(2x2)BCD(2x2)EFG doubles the BC and EF, becoming ABCBCDEFEFG for a decompressed length of 11.\n(6x1)(1x3)A simply becomes (1x3)A - the (1x3) looks like a marker, but because it's within a data section of another marker, it is not treated any differently from the A that comes after it. It has a decompressed length of 6.\nX(8x2)(3x3)ABCY becomes X(3x3)ABC(3x3)ABCY (for a decompressed length of 18), because the decompressed data from the (8x2) marker (the (3x3)ABC) is skipped and not processed further.\n\nWhat is the decompressed length of the file (your puzzle input)? Don't count whitespace.\nSolution logic\nAny time we encounter brackets of the format (AxB), we take in the next A characters regardless of what they are, and repeat them B times in the decompressed data. There are a few points to note here:\n\nIf any of the A characters contain bracket(s), they are to be ignored and treated as normal characters\nBrackets of other formats could be present in the input. For e.g. (), (A), (AxBxC), etc. These are to be ignored and not considered as markers.\nWhitespace is to be ignored. So any newlines, spaces, tabs, etc. are ignored and not parsed.\n\nWe iterate over each character at a time, and check if it is a start of the marker (bracket). If it is, we skip the next A characters, and add A * B to the count. \nSince we only need to count the length of the decompressed data, we do not need to actually store it. We can simply use a count variable to hold the count of the decompressed characters.\nWe define a function called parse_marker to get the values of A and B from the marker and to calculate the decompressed length. If the start of the sequence is not a (, it is a single character with the decompressed length of 1. If the character is (, it is a marker and we extract the values and count the decompressed length as A x B.\nReading input: The input needs to be stitched together, ignoring whitespace.\nWe use the string.split method that splits a string on any whitespace character. 
Reading in each line of input from the file, we split it, and join it together back again.\ndata = [''.join(line.split()) for line in f.readlines()]\n\nThen, we join all lines from the file to create a continuos data stream.\ndata = ''.join(data)\n\nAnother way to read the file would be to read the file character by character usign file.read. This would require us to check whether each character is a whitespace or not, and then to ignore it if it is.", "with open('../inputs/day09.txt', 'r') as f:\n data = ''.join([''.join(line.split()) for line in f.readlines()])\n# TEST DATA\n# data = ''.join([\n# 'ADVENT',\n# 'A(1x5)BC',\n# '(3x3)XYZ',\n# 'A(2x2)BCD(2x2)EFG',\n# '(6x1)(1x3)A',\n# 'X(8x2)(3x3)ABCY'\n# ])", "Reading in markers, Calculating decompressed length: We use the (very awesome) itertools module to do the iterating and filtering for us.\nWe use an iterator to go over the input values, so that we can use the itertools functions such as takewhile that selects characters as long as a condition is fulfilled.\ndef takewhile(condition, data):\n filtered_data = []\n for item in data:\n if condition(data):\n filtered_data.append(item)\n else:\n break\n return filtered_data\n\nWe use takewhile to swallow characters until we reach the markers, and then to get the marker itself. Using regular expressions, we extract the two values from the marker. Since we are using iterators, we need to skip the next A characters, which we do using a for loop.\nAt the end, the answer is in the count variable.", "from itertools import islice, takewhile\nimport re\nnumbers = re.compile(r'(\\d+)')\n\n\ndef decompress(data_iterator):\n '''parses markers and returns index of last character and length of decompressed data'''\n count = 0\n index = 0\n\n while True:\n # handle single tokens that decompress to length 1 until start of marker\n count += len(list(takewhile(lambda character: character != '(', data_iterator)))\n # extract marker\n marker = ''.join(takewhile(lambda character: character != ')', data_iterator))\n # extract A and B\n try:\n a, b = map(int, numbers.findall(marker))\n except ValueError:\n # EOF or no other markers present\n break\n # skip the next a characters\n for i in range(a):\n next(data_iterator)\n # increment count\n count += a * b\n \n return count\n\nprint(decompress(iter(data)))", "Part Two\nApparently, the file actually uses version two of the format.\nIn version two, the only difference is that markers within decompressed data are decompressed. This, the documentation explains, provides much more substantial compression capabilities, allowing many-gigabyte files to be stored in only a few kilobytes.\nFor example:\n\n(3x3)XYZ still becomes XYZXYZXYZ, as the decompressed section contains no markers.\nX(8x2)(3x3)ABCY becomes XABCABCABCABCABCABCY, because the decompressed data from the (8x2) marker is then further decompressed, thus triggering the (3x3) marker twice for a total of six ABC sequences.\n(27x12)(20x12)(13x14)(7x10)(1x12)A decompresses into a string of A repeated 241920 times.\n(25x3)(3x3)ABC(2x3)XY(5x2)PQRSTX(18x9)(3x2)TWO(5x7)SEVEN becomes 445 characters long.\n\nUnfortunately, the computer you brought probably doesn't have enough memory to actually decompress the file; you'll have to come up with another way to get its decompressed length.\nWhat is the decompressed length of the file using this improved format?\nSolution logic\nIn this part, we need to keep track of the markers within the skipped marked from part one. 
As an assumption, we take the approach that no internal marker will extend the limits of the external marker. If it does, we will need to take a different approach to scan the string over and over again. Instead, we use a recursive approach to parse the string by marker and return the correct length.\nX(8x2)(3x3)ABCY\n\n(8x2) --&gt; 8 characters: (3x3)ABC multiplied by 2\n --&gt; 2 x decompressed (3x3)ABC\n --&gt; 2 x 3 x ABC\n\nFor this, we extend the decompress function so that it will return the length of the string plus recursively scan any marker within it and return the final index of the last character scanned. This is the same function as in part one, except that it recusively checks for markers.\nThe recursive part of this approach is to further decompress the string (or characters) that were skipped in the first part. For this, we use islice to extract part of the string specified by the markers and recursively call the function on it to get its decompressed length.", "def decompress(data_iterator):\n count = 0\n '''parses markers and returns index of last character and length of decompressed data'''\n while(True):\n # handle all single characters\n count += len(list(takewhile(lambda character: character != '(', data_iterator)))\n # marker occurs here, extract marker\n marker = ''.join(takewhile(lambda character: character != ')', data_iterator))\n # extract A and B\n try:\n a, b = map(int, numbers.findall(marker))\n except ValueError:\n break\n count += b * decompress(islice(data_iterator, a))\n return count\n\nprint(decompress(iter(data)))", "== END ==" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
hongguangguo/shogun
doc/ipython-notebooks/statistics/mmd_two_sample_testing.ipynb
gpl-3.0
[ "Kernel hypothesis testing in Shogun\nBy Heiko Strathmann - <a href=\"mailto:heiko.strathmann@gmail.com\">heiko.strathmann@gmail.com</a> - <a href=\"github.com/karlnapf\">github.com/karlnapf</a> - <a href=\"herrstrathmann.de\">herrstrathmann.de</a>\nThis notebook describes Shogun's framework for <a href=\"http://en.wikipedia.org/wiki/Statistical_hypothesis_testing\">statistical hypothesis testing</a>. We begin by giving a brief outline of the problem setting and then describe various implemented algorithms. All the algorithms discussed here are for <a href=\"http://en.wikipedia.org/wiki/Kernel_embedding_of_distributions#Kernel_two_sample_test\">Kernel two-sample testing</a> with Maximum Mean Discrepancy and are based on embedding probability distributions into <a href=\"http://en.wikipedia.org/wiki/Reproducing_kernel_Hilbert_space\">Reproducing Kernel Hilbert Spaces</a>( RKHS ).\nMethods for two-sample testing currently consist of tests based on the Maximum Mean Discrepancy. There are two types of tests available, a quadratic time test and a linear time test. Both come in various flavours.\nIndependence testing is currently based in the Hilbert Schmidt Independence Criterion.", "%pylab inline\n%matplotlib inline\n# import all Shogun classes\nfrom modshogun import *", "Some Formal Basics (skip if you just want code examples)\nTo set the context, we here briefly describe statistical hypothesis testing. Informally, one defines a hypothesis on a certain domain and then uses a statistical test to check whether this hypothesis is true. Formally, the goal is to reject a so-called null-hypothesis $H_0$, which is the complement of an alternative-hypothesis $H_A$. \nTo distinguish the hypotheses, a test statistic is computed on sample data. Since sample data is finite, this corresponds to sampling the true distribution of the test statistic. There are two different distributions of the test statistic -- one for each hypothesis. The null-distribution corresponds to test statistic samples under the model that $H_0$ holds; the alternative-distribution corresponds to test statistic samples under the model that $H_A$ holds.\nIn practice, one tries to compute the quantile of the test statistic in the null-distribution. In case the test statistic is in a high quantile, i.e. it is unlikely that the null-distribution has generated the test statistic -- the null-hypothesis $H_0$ is rejected.\nThere are two different kinds of errors in hypothesis testing:\n\nA type I error is made when $H_0: p=q$ is wrongly rejected. That is, the test says that the samples are from different distributions when they are not.\nA type II error is made when $H_A: p\\neq q$ is wrongly accepted. That is, the test says that the samples are from the same distribution when they are not.\n\nA so-called consistent test achieves zero type II error for a fixed type I error.\nTo decide whether to reject $H_0$, one could set a threshold, say at the $95\\%$ quantile of the null-distribution, and reject $H_0$ when the test statistic lies below that threshold. This means that the chance that the samples were generated under $H_0$ are $5\\%$. We call this number the test power $\\alpha$ (in this case $\\alpha=0.05$). It is an upper bound on the probability for a type I error. An alternative way is simply to compute the quantile of the test statistic in the null-distribution, the so-called p-value, and to compare the p-value against a desired test power, say $\\alpha=0.05$, by hand. 
The advantage of the second method is that one not only gets a binary answer, but also an upper bound on the type I error.\nIn order to construct a two-sample test, the null-distribution of the test statistic has to be approximated. One way of doing this for any two-sample test is called bootstrapping, or the permutation test, where samples from both sources are mixed and permuted repeatedly and the test statistic is computed for every of those configurations. While this method works for every statistical hypothesis test, it might be very costly because the test statistic has to be re-computed many times. For many test statistics, there are more sophisticated methods of approximating the null distribution.\nBase class for Hypothesis Testing\nShogun implements statistical testing in the abstract class <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CHypothesisTest.html\">CHypothesisTest</a>. All implemented methods will work with this interface at their most basic level. This class offers methods to\n\ncompute the implemented test statistic,\ncompute p-values for a given value of the test statistic,\ncompute a test threshold for a given p-value,\nsampling the null distribution, i.e. perform the permutation test or bootstrappig of the null-distribution, and\nperforming a full two-sample test, and either returning a p-value or a binary rejection decision. This method is most useful in practice. Note that, depending on the used test statistic, it might be faster to call this than to compute threshold and test statistic seperately with the above methods.\n\nThere are special subclasses for testing two distributions against each other (<a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CTwoSampleTest.html\">CTwoSampleTest</a>, <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CIndependenceTest.html\">CIndependenceTest</a>), kernel two-sample testing (<a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKernelTwoSampleTest.html\">CKernelTwoSampleTest</a>), and kernel independence testing (<a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKernelIndependenceTest.html\">CKernelIndependenceTest</a>), which however mostly differ in internals and constructors.\nKernel Two-Sample Testing with the Maximum Mean Discrepancy\n$\\DeclareMathOperator{\\mmd}{MMD}$\nAn important class of hypothesis tests are the two-sample tests. \nIn two-sample testing, one tries to find out whether two sets of samples come from different distributions. Given two probability distributions $p,q$ on some arbritary domains $\\mathcal{X}, \\mathcal{Y}$ respectively, and i.i.d. samples $X={x_i}{i=1}^m\\subseteq \\mathcal{X}\\sim p$ and $Y={y_i}{i=1}^n\\subseteq \\mathcal{Y}\\sim p$, the two sample test distinguishes the hypothesises\n\\begin{align}\nH_0: p=q\\\nH_A: p\\neq q\n\\end{align}\nIn order to solve this problem, it is desirable to have a criterion than takes a positive unique value if $p\\neq q$, and zero if and only if $p=q$. The so called Maximum Mean Discrepancy (MMD), has this property and allows to distinguish any two probability distributions, if used in a reproducing kernel Hilbert space (RKHS). 
It is the distance of the mean embeddings $\\mu_p, \\mu_q$ of the distributions $p,q$ in such a RKHS $\\mathcal{F}$ -- which can also be expressed in terms of expectation of kernel functions, i.e.\n\\begin{align}\n\\mmd[\\mathcal{F},p,q]&=||\\mu_p-\\mu_q||\\mathcal{F}^2\\\n&=\\textbf{E}{x,x'}\\left[ k(x,x')\\right]-\n 2\\textbf{E}{x,y}\\left[ k(x,y)\\right]\n +\\textbf{E}{y,y'}\\left[ k(y,y')\\right]\n\\end{align}\nNote that this formulation does not assume any form of the input data, we just need a kernel function whose feature space is a RKHS, see [2, Section 2] for details. This has the consequence that in Shogun, we can do tests on any type of data (<a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDenseFeatures.html\">CDenseFeatures</a>, <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSparseFeatures.html\">CSparseFeatures</a>, <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStringFeatures.html\">CStringFeatures</a>, etc), as long as we or you provide a positive definite kernel function under the interface of <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKernel.html\">CKernel</a>.\nWe here only describe how to use the MMD for two-sample testing. Shogun offers two types of test statistic based on the MMD, one with quadratic costs both in time and space, and one with linear time and constant space costs. Both come in different versions and with different methods how to approximate the null-distribution in order to construct a two-sample test.\nRunning Example Data. Gaussian vs. Laplace\nIn order to illustrate kernel two-sample testing with Shogun, we use a couple of toy distributions. The first dataset we consider is the 1D Standard Gaussian\n$p(x)=\\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\exp\\left(-\\frac{(x-\\mu)^2}{\\sigma^2}\\right)$\nwith mean $\\mu$ and variance $\\sigma^2$, which is compared against the 1D Laplace distribution\n$p(x)=\\frac{1}{2b}\\exp\\left(-\\frac{|x-\\mu|}{b}\\right)$\nwith the same mean $\\mu$ and variance $2b^2$. In order to increase difficulty, we set $b=\\sqrt{\\frac{1}{2}}$, which means that $2b^2=\\sigma^2=1$.", "# use scipy for generating samples\nfrom scipy.stats import norm, laplace\n\ndef sample_gaussian_vs_laplace(n=220, mu=0.0, sigma2=1, b=sqrt(0.5)): \n # sample from both distributions\n X=norm.rvs(size=n, loc=mu, scale=sigma2)\n Y=laplace.rvs(size=n, loc=mu, scale=b)\n \n return X,Y\n\nmu=0.0\nsigma2=1\nb=sqrt(0.5)\nn=220\nX,Y=sample_gaussian_vs_laplace(n, mu, sigma2, b)\n\n# plot both densities and histograms\nfigure(figsize=(18,5))\nsuptitle(\"Gaussian vs. Laplace\")\nsubplot(121)\nXs=linspace(-2, 2, 500)\nplot(Xs, norm.pdf(Xs, loc=mu, scale=sigma2))\nplot(Xs, laplace.pdf(Xs, loc=mu, scale=b))\ntitle(\"Densities\")\nxlabel(\"$x$\")\nylabel(\"$p(x)$\")\n_=legend([ 'Gaussian','Laplace'])\n\nsubplot(122)\nhist(X, alpha=0.5)\nxlim([-5,5])\nylim([0,100])\nhist(Y,alpha=0.5)\nxlim([-5,5])\nylim([0,100])\nlegend([\"Gaussian\", \"Laplace\"])\n_=title('Histograms')", "Now how to compare these two sets of samples? Clearly, a t-test would be a bad idea since it basically compares mean and variance of $X$ and $Y$. But we set that to be equal. By chance, the estimates of these statistics might differ, but that is unlikely to be significant. Thus, we have to look at higher order statistics of the samples. In fact, kernel two-sample tests look at all (infinitely many) higher order moments.", "print \"Gaussian vs. 
Laplace\"\nprint \"Sample means: %.2f vs %.2f\" % (mean(X), mean(Y))\nprint \"Samples variances: %.2f vs %.2f\" % (var(X), var(Y))", "Quadratic Time MMD\nWe now describe the quadratic time MMD, as described in [1, Lemma 6], which is implemented in Shogun. All methods in this section are implemented in <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CQuadraticTimeMMD.html\">CQuadraticTimeMMD</a>, which accepts any type of features in Shogun, and use it on the above toy problem.\nAn unbiased estimate for the MMD expression above can be obtained by estimating expected values with averaging over independent samples\n$$\n\\mmd_u[\\mathcal{F},X,Y]^2=\\frac{1}{m(m-1)}\\sum_{i=1}^m\\sum_{j\\neq i}^mk(x_i,x_j) + \\frac{1}{n(n-1)}\\sum_{i=1}^n\\sum_{j\\neq i}^nk(y_i,y_j)-\\frac{2}{mn}\\sum_{i=1}^m\\sum_{j\\neq i}^nk(x_i,y_j)\n$$\nA biased estimate would be\n$$\n\\mmd_b[\\mathcal{F},X,Y]^2=\\frac{1}{m^2}\\sum_{i=1}^m\\sum_{j=1}^mk(x_i,x_j) + \\frac{1}{n^ 2}\\sum_{i=1}^n\\sum_{j=1}^nk(y_i,y_j)-\\frac{2}{mn}\\sum_{i=1}^m\\sum_{j\\neq i}^nk(x_i,y_j)\n.$$\nComputing the test statistic using <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CQuadraticTimeMMD.html\">CQuadraticTimeMMD</a> does exactly this, where it is possible to choose between the two above expressions. Note that some methods for approximating the null-distribution only work with one of both types. Both statistics' computational costs are quadratic both in time and space. Note that the method returns $m\\mmd_b[\\mathcal{F},X,Y]^2$ since null distribution approximations work on $m$ times null distribution. Here is how the test statistic itself is computed.", "# turn data into Shogun representation (columns vectors)\nfeat_p=RealFeatures(X.reshape(1,len(X)))\nfeat_q=RealFeatures(Y.reshape(1,len(Y)))\n\n# choose kernel for testing. Here: Gaussian\nkernel_width=1\nkernel=GaussianKernel(10, kernel_width)\n\n# create mmd instance of test-statistic\nmmd=QuadraticTimeMMD(kernel, feat_p, feat_q)\n\n# compute biased and unbiased test statistic (default is unbiased)\nmmd.set_statistic_type(BIASED)\nbiased_statistic=mmd.compute_statistic()\n\nmmd.set_statistic_type(UNBIASED)\nunbiased_statistic=mmd.compute_statistic()\n\nprint \"%d x MMD_b[X,Y]^2=%.2f\" % (len(X), biased_statistic)\nprint \"%d x MMD_u[X,Y]^2=%.2f\" % (len(X), unbiased_statistic)", "Any sub-class of <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CHypothesisTest.html\">CHypothesisTest</a> can compute approximate the null distribution using permutation/bootstrapping. This way always is guaranteed to produce consistent results, however, it might take a long time as for each sample of the null distribution, the test statistic has to be computed for a different permutation of the data. Note that each of the below calls samples from the null distribution. It is wise to choose one method in practice. Also note that we set the number of samples from the null distribution to a low value to reduce runtime. 
Choose larger in practice, it is in fact good to plot the samples.", "# this is not necessary as bootstrapping is the default\nmmd.set_null_approximation_method(PERMUTATION)\nmmd.set_statistic_type(UNBIASED)\n\n# to reduce runtime, should be larger practice\nmmd.set_num_null_samples(100)\n\n# now show a couple of ways to compute the test\n\n# compute p-value for computed test statistic\np_value=mmd.compute_p_value(unbiased_statistic)\nprint \"P-value of MMD value %.2f is %.2f\" % (unbiased_statistic, p_value)\n\n# compute threshold for rejecting H_0 for a given test power\nalpha=0.05\nthreshold=mmd.compute_threshold(alpha)\nprint \"Threshold for rejecting H0 with a test power of %.2f is %.2f\" % (alpha, threshold)\n\n# performing the test by hand given the above results, note that those two are equivalent\nif unbiased_statistic>threshold:\n print \"H0 is rejected with confidence %.2f\" % alpha\n \nif p_value<alpha:\n print \"H0 is rejected with confidence %.2f\" % alpha\n\n# or, compute the full two-sample test directly\n# fixed test power, binary decision\nbinary_test_result=mmd.perform_test(alpha)\nif binary_test_result:\n print \"H0 is rejected with confidence %.2f\" % alpha\n\nsignificance_test_result=mmd.perform_test()\nprint \"P-value of MMD test is %.2f\" % significance_test_result\nif significance_test_result<alpha:\n print \"H0 is rejected with confidence %.2f\" % alpha", "Precomputing Kernel Matrices\nBootstrapping re-computes the test statistic for a bunch of permutations of the test data. For kernel two-sample test methods, in particular those of the MMD class, this means that only the joint kernel matrix of $X$ and $Y$ needs to be permuted. Thus, we can precompute the matrix, which gives a significant performance boost. Note that this is only possible if the matrix can be stored in memory. Below, we use Shogun's <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CCustomKernel.html\">CCustomKernel</a> class, which allows to precompute a kernel matrix (multithreaded) of a given kernel and store it in memory. Instances of this class can then be used as if they were standard kernels.", "# precompute kernel to be faster for null sampling\np_and_q=mmd.get_p_and_q()\nkernel.init(p_and_q, p_and_q);\nprecomputed_kernel=CustomKernel(kernel);\nmmd.set_kernel(precomputed_kernel);\n\n# increase number of iterations since should be faster now\nmmd.set_num_null_samples(500);\np_value_boot=mmd.perform_test();\nprint \"P-value of MMD test is %.2f\" % p_value_boot", "Now let us visualise distribution of MMD statistic under $H_0:p=q$ and $H_A:p\\neq q$. Sample both null and alternative distribution for that. Use the interface of <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CTwoSampleTest.html\">CTwoSampleTest</a> to sample from the null distribution (permutations, re-computing of test statistic is done internally). For the alternative distribution, compute the test statistic for a new sample set of $X$ and $Y$ in a loop. Note that the latter is expensive, as the kernel cannot be precomputed, and infinite data is needed. 
Though it is not needed in practice but only for illustrational purposes here.", "num_samples=500\n\n# sample null distribution\nmmd.set_num_null_samples(num_samples)\nnull_samples=mmd.sample_null()\n\n# sample alternative distribution, generate new data for that\nalt_samples=zeros(num_samples)\nfor i in range(num_samples):\n X=norm.rvs(size=n, loc=mu, scale=sigma2)\n Y=laplace.rvs(size=n, loc=mu, scale=b)\n feat_p=RealFeatures(reshape(X, (1,len(X))))\n feat_q=RealFeatures(reshape(Y, (1,len(Y))))\n mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)\n alt_samples[i]=mmd.compute_statistic()", "Null and Alternative Distribution Illustrated\nVisualise both distributions, $H_0:p=q$ is rejected if a sample from the alternative distribution is larger than the $(1-\\alpha)$-quantil of the null distribution. See [1] for more details on their forms. From the visualisations, we can read off the test's type I and type II error:\n\ntype I error is the area of the null distribution being right of the threshold\ntype II error is the area of the alternative distribution being left from the threshold", "def plot_alt_vs_null(alt_samples, null_samples, alpha):\n figure(figsize=(18,5))\n \n subplot(131)\n hist(null_samples, 50, color='blue')\n title('Null distribution')\n subplot(132)\n title('Alternative distribution')\n hist(alt_samples, 50, color='green')\n \n subplot(133)\n hist(null_samples, 50, color='blue')\n hist(alt_samples, 50, color='green', alpha=0.5)\n title('Null and alternative distriution')\n \n # find (1-alpha) element of null distribution\n null_samples_sorted=sort(null_samples)\n quantile_idx=int(num_samples*(1-alpha))\n quantile=null_samples_sorted[quantile_idx]\n axvline(x=quantile, ymin=0, ymax=100, color='red', label=str(int(round((1-alpha)*100))) + '% quantile of null')\n _=legend()\n\nplot_alt_vs_null(alt_samples, null_samples, alpha)", "Different Ways to Approximate the Null Distribution for the Quadratic Time MMD\nAs already mentioned, bootstrapping the null distribution is expensive business. There exist a couple of methods that are more sophisticated and either allow very fast approximations without guarantees or reasonably fast approximations that are consistent. We present a selection from [2], which are implemented in Shogun.\nThe first one is a spectral method that is based around the Eigenspectrum of the kernel matrix of the joint samples. It is faster than bootstrapping while being a consistent test. Effectively, the null-distribution of the biased statistic is sampled, but in a more efficient way than the bootstrapping approach. The converges as\n$$\nm\\mmd^2_b \\rightarrow \\sum_{l=1}^\\infty \\lambda_l z_l^2\n$$\nwhere $z_l\\sim \\mathcal{N}(0,2)$ are i.i.d. normal samples and $\\lambda_l$ are Eigenvalues of expression 2 in [2], which can be empirically estimated by $\\hat\\lambda_l=\\frac{1}{m}\\nu_l$ where $\\nu_l$ are the Eigenvalues of the centred kernel matrix of the joint samples $X$ and $Y$. The distribution above can be easily sampled. Shogun's implementation has two parameters:\n\nNumber of samples from null-distribution. The more, the more accurate. As a rule of thumb, use 250.\nNumber of Eigenvalues of the Eigen-decomposition of the kernel matrix to use. The more, the better the results get. However, the Eigen-spectrum of the joint gram matrix usually decreases very fast. Plotting the Spectrum can help. See [2] for details.\n\nIf the kernel matrices are diagonal dominant, this method is likely to fail. For that and more details, see the original paper. 
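The spectral sampling scheme itself is only a few lines. The NumPy sketch below is illustrative only and follows the formula above with empirical eigenvalues; the joint kernel matrix is replaced by a random positive semi-definite matrix purely so the snippet runs on its own.

```python
import numpy as np

def sample_spectrum_null(K, m, num_eigenvalues, num_samples, rng):
    """Sample m*MMD_b^2 under H0 from the eigenspectrum of the centred joint kernel matrix K."""
    n_total = K.shape[0]
    H = np.eye(n_total) - np.ones((n_total, n_total)) / n_total   # centring matrix
    Kc = H.dot(K).dot(H)
    # largest eigenvalues of the centred kernel matrix, scaled by 1/m as in the text
    eigenvalues = np.sort(np.linalg.eigvalsh(Kc))[::-1][:num_eigenvalues]
    lambdas = eigenvalues / m
    z = rng.randn(num_samples, num_eigenvalues) * np.sqrt(2.0)    # z_l ~ N(0, 2)
    return (lambdas * z ** 2).sum(axis=1)

# toy usage with a random PSD matrix standing in for the joint kernel matrix
rng = np.random.RandomState(1)
A = rng.randn(100, 100)
K_joint = A.dot(A.T) / 100.0
null_spectrum = sample_spectrum_null(K_joint, m=50, num_eigenvalues=10, num_samples=250, rng=rng)
print("95%% quantile of the approximated null: %.3f" % np.percentile(null_spectrum, 95))
```

Shogun does all of this internally once the spectrum approximation is selected; the only choices left to the user are the two parameters listed above.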
Computational costs are much lower than bootstrapping, which is the only consistent alternative. Since Eigenvalues of the gram matrix has to be computed, costs are in $\\mathcal{O}(m^3)$.\nBelow, we illustrate how to sample the null distribution and perform two-sample testing with the Spectrum approximation in the class <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CQuadraticTimeMMD.html\">CQuadraticTimeMMD</a>. This method only works with the biased statistic.", "# optional: plot spectrum of joint kernel matrix\nfrom numpy.linalg import eig\n\n# get joint feature object and compute kernel matrix and its spectrum\nfeats_p_q=mmd.get_p_and_q()\nmmd.get_kernel().init(feats_p_q, feats_p_q)\nK=mmd.get_kernel().get_kernel_matrix()\nw,_=eig(K)\n\n# visualise K and its spectrum (only up to threshold)\nfigure(figsize=(18,5))\nsubplot(121)\nimshow(K, interpolation=\"nearest\")\ntitle(\"Kernel matrix K of joint data $X$ and $Y$\")\nsubplot(122)\nthresh=0.1\nplot(w[:len(w[w>thresh])])\n_=title(\"Eigenspectrum of K until component %d\" % len(w[w>thresh]))", "The above plot of the Eigenspectrum shows that the Eigenvalues are decaying extremely fast. We choose the number for the approximation such that all Eigenvalues bigger than some threshold are used. In this case, we will not loose a lot of accuracy while gaining a significant speedup. For slower decaying Eigenspectrums, this approximation might be more expensive.", "# threshold for eigenspectrum\nthresh=0.1\n\n# compute number of eigenvalues to use\nnum_eigen=len(w[w>thresh])\n\n# finally, do the test, use biased statistic\nmmd.set_statistic_type(BIASED)\n\n#tell Shogun to use spectrum approximation\nmmd.set_null_approximation_method(MMD2_SPECTRUM)\nmmd.set_num_eigenvalues_spectrum(num_eigen)\nmmd.set_num_samples_spectrum(num_samples)\n\n# the usual test interface\np_value_spectrum=mmd.perform_test()\nprint \"Spectrum: P-value of MMD test is %.2f\" % p_value_spectrum\n\n# compare with ground truth bootstrapping\nmmd.set_null_approximation_method(PERMUTATION)\nmmd.set_num_null_samples(num_samples)\np_value_boot=mmd.perform_test()\nprint \"Bootstrapping: P-value of MMD test is %.2f\" % p_value_spectrum", "The Gamma Moment Matching Approximation and Type I errors\n$\\DeclareMathOperator{\\var}{var}$\nAnother method for approximating the null-distribution is by matching the first two moments of a <a href=\"http://en.wikipedia.org/wiki/Gamma_distribution\">Gamma distribution</a> and then compute the quantiles of that. This does not result in a consistent test, but usually also gives good results while being very fast. However, there are distributions where the method fail. Therefore, the type I error should always be monitored. Described in [2]. It uses\n$$\nm\\mmd_b(Z) \\sim \\frac{x^{\\alpha-1}\\exp(-\\frac{x}{\\beta})}{\\beta^\\alpha \\Gamma(\\alpha)}\n$$\nwhere\n$$\n\\alpha=\\frac{(\\textbf{E}(\\text{MMD}_b(Z)))^2}{\\var(\\text{MMD}_b(Z))} \\qquad \\text{and} \\qquad\n \\beta=\\frac{m \\var(\\text{MMD}_b(Z))}{(\\textbf{E}(\\text{MMD}_b(Z)))^2}\n$$\nThen, any threshold and p-value can be computed using the gamma distribution in the above expression. Computational costs are in $\\mathcal{O}(m^2)$. Note that the test is parameter free. 
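The moment matching itself is easy to sketch. The snippet below is a generic method-of-moments Gamma fit to null samples of the scaled statistic, written only to illustrate the idea; Shogun's implementation instead uses closed-form moment estimates and needs no null samples at all, which is what makes it fast. All numbers here are made up.

```python
import numpy as np
from scipy.stats import gamma

def gamma_p_value(null_statistics, observed_statistic):
    """Fit Gamma(shape, scale) to null samples of m*MMD_b by matching mean and variance."""
    mean_h0 = np.mean(null_statistics)
    var_h0 = np.var(null_statistics)
    shape = mean_h0 ** 2 / var_h0          # alpha
    scale = var_h0 / mean_h0               # beta
    # p-value = P(Gamma > observed statistic)
    return gamma.sf(observed_statistic, a=shape, scale=scale)

# toy usage: in the notebook, null samples would come e.g. from mmd.sample_null()
rng = np.random.RandomState(2)
fake_null = gamma.rvs(a=2.0, scale=0.3, size=1000, random_state=rng)
print("p-value under the fitted Gamma: %.3f" % gamma_p_value(fake_null, observed_statistic=1.5))
```

As stressed above, the Gamma approximation remains a heuristic, so its type I error should be monitored.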
It only works with the biased statistic.", "# tell Shogun to use gamma approximation\nmmd.set_null_approximation_method(MMD2_GAMMA)\n\n# the usual test interface\np_value_gamma=mmd.perform_test()\nprint \"Gamma: P-value of MMD test is %.2f\" % p_value_gamma\n\n# compare with ground truth bootstrapping\nmmd.set_null_approximation_method(PERMUTATION)\np_value_boot=mmd.perform_test()\nprint \"Bootstrapping: P-value of MMD test is %.2f\" % p_value_spectrum", "As we can see, the above example was kind of unfortunate, as the approximation fails badly. We check the type I error to verify that. This works similar to sampling the alternative distribution: re-sample data (assuming infinite amounts), perform the test and average results. Below we compare type I errors or all methods for approximating the null distribution. This will take a while.", "# type I error is false alarm, therefore sample data under H0\nnum_trials=50\nrejections_gamma=zeros(num_trials)\nrejections_spectrum=zeros(num_trials)\nrejections_bootstrap=zeros(num_trials)\nnum_samples=50\nalpha=0.05\nfor i in range(num_trials):\n X=norm.rvs(size=n, loc=mu, scale=sigma2)\n Y=laplace.rvs(size=n, loc=mu, scale=b)\n \n # simulate H0 via merging samples before computing the \n Z=hstack((X,Y))\n X=Z[:len(X)]\n Y=Z[len(X):]\n feat_p=RealFeatures(reshape(X, (1,len(X))))\n feat_q=RealFeatures(reshape(Y, (1,len(Y))))\n \n # gamma\n mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)\n mmd.set_null_approximation_method(MMD2_GAMMA)\n mmd.set_statistic_type(BIASED)\n rejections_gamma[i]=mmd.perform_test(alpha)\n \n # spectrum\n mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)\n mmd.set_null_approximation_method(MMD2_SPECTRUM)\n mmd.set_num_eigenvalues_spectrum(num_eigen)\n mmd.set_num_samples_spectrum(num_samples)\n mmd.set_statistic_type(BIASED)\n rejections_spectrum[i]=mmd.perform_test(alpha)\n \n # bootstrap (precompute kernel)\n mmd=QuadraticTimeMMD(kernel, feat_p, feat_q)\n p_and_q=mmd.get_p_and_q()\n kernel.init(p_and_q, p_and_q)\n precomputed_kernel=CustomKernel(kernel)\n mmd.set_kernel(precomputed_kernel)\n mmd.set_null_approximation_method(PERMUTATION)\n mmd.set_num_null_samples(num_samples)\n mmd.set_statistic_type(BIASED)\n rejections_bootstrap[i]=mmd.perform_test(alpha)\n\nconvergence_gamma=cumsum(rejections_gamma)/(arange(num_trials)+1)\nconvergence_spectrum=cumsum(rejections_spectrum)/(arange(num_trials)+1)\nconvergence_bootstrap=cumsum(rejections_bootstrap)/(arange(num_trials)+1)\n\nprint \"Average rejection rate of H0 for Gamma is %.2f\" % mean(convergence_gamma)\nprint \"Average rejection rate of H0 for Spectrum is %.2f\" % mean(convergence_spectrum)\nprint \"Average rejection rate of H0 for Bootstrapping is %.2f\" % mean(rejections_bootstrap)", "We see that Gamma basically never rejects, which is inline with the fact that the p-value was massively overestimated above. Note that for the other tests, the p-value is also not at its desired value, but this is due to the low number of samples/repetitions in the above code. Increasing them leads to consistent type I errors.\nLinear Time MMD on Gaussian Blobs\nSo far, we basically had to precompute the kernel matrix for reasonable runtimes. This is not possible for more than a few thousand points. 
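To see why, it helps to look at the memory footprint of the joint kernel matrix alone. The back-of-the-envelope sketch below just counts bytes for a dense float64 matrix of the two merged samples.

```python
import numpy as np

# memory needed to store the joint (2m x 2m) kernel matrix in float64
for m in [1000, 5000, 20000, 100000]:
    n_joint = 2 * m
    gigabytes = n_joint * n_joint * np.dtype(np.float64).itemsize / 1e9
    print("m = %6d samples per distribution -> joint kernel matrix of %.1f GB" % (m, gigabytes))
```

Already at a few tens of thousands of samples per distribution the matrix no longer fits in memory on an ordinary machine.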
The linear time MMD statistic, implemented in <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html\">CLinearTimeMMD</a> can help here, as it accepts data under the streaming interface <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStreamingFeatures.html\">CStreamingFeatures</a>, which deliver data one-by-one.\nAnd it can do more cool things, for example choose the best single (or combined) kernel for you. But we need a more fancy dataset for that to show its power. We will use one of Shogun's streaming based data generator, <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianBlobsDataGenerator.html\">CGaussianBlobsDataGenerator</a> for that. This dataset consists of two distributions which are a grid of Gaussians where in one of them, the Gaussians are stretched and rotated. This dataset is regarded as challenging for two-sample testing.", "# paramters of dataset\nm=20000\ndistance=10\nstretch=5\nnum_blobs=3\nangle=pi/4\n\n# these are streaming features\ngen_p=GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)\ngen_q=GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)\n\t\t\n# stream some data and plot\nnum_plot=1000\nfeatures=gen_p.get_streamed_features(num_plot)\nfeatures=features.create_merged_copy(gen_q.get_streamed_features(num_plot))\ndata=features.get_feature_matrix()\n\nfigure(figsize=(18,5))\nsubplot(121)\ngrid(True)\nplot(data[0][0:num_plot], data[1][0:num_plot], 'r.', label='$x$')\ntitle('$X\\sim p$')\nsubplot(122)\ngrid(True)\nplot(data[0][num_plot+1:2*num_plot], data[1][num_plot+1:2*num_plot], 'b.', label='$x$', alpha=0.5)\n_=title('$Y\\sim q$')", "We now describe the linear time MMD, as described in [1, Section 6], which is implemented in Shogun. A fast, unbiased estimate for the original MMD expression which still uses all available data can be obtained by dividing data into two parts and then compute\n$$\n\\mmd_l^2[\\mathcal{F},X,Y]=\\frac{1}{m_2}\\sum_{i=1}^{m_2} k(x_{2i},x_{2i+1})+k(y_{2i},y_{2i+1})-k(x_{2i},y_{2i+1})-\n k(x_{2i+1},y_{2i})\n$$\nwhere $ m_2=\\lfloor\\frac{m}{2} \\rfloor$. While the above expression assumes that $m$ data are available from each distribution, the statistic in general works in an online setting where features are obtained one by one. Since only pairs of four points are considered at once, this allows to compute it on data streams. In addition, the computational costs are linear in the number of samples that are considered from each distribution. These two properties make the linear time MMD very applicable for large scale two-sample tests. In theory, any number of samples can be processed -- time is the only limiting factor.\nWe begin by illustrating how to pass data to <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html\">CLinearTimeMMD</a>. In order not to loose performance due to overhead, it is possible to specify a block size for the data stream.", "block_size=100\n\n# if features are already under the streaming interface, just pass them\nmmd=LinearTimeMMD(kernel, gen_p, gen_q, m, block_size)\n\n# compute an unbiased estimate in linear time\nstatistic=mmd.compute_statistic()\nprint \"MMD_l[X,Y]^2=%.2f\" % statistic\n\n# note: due to the streaming nature, successive calls of compute statistic use different data\n# and produce different results. 
Data cannot be stored in memory\nfor _ in range(5):\n print \"MMD_l[X,Y]^2=%.2f\" % mmd.compute_statistic()", "Sometimes, one might want to use <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html\">CLinearTimeMMD</a> with data that is stored in memory. In that case, it is easy to data in the form of for example <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStreamingDenseFeatures.html\">CStreamingDenseFeatures</a> into <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDenseFeatures.html\">CDenseFeatures</a>.", "# data source\ngen_p=GaussianBlobsDataGenerator(num_blobs, distance, 1, 0)\ngen_q=GaussianBlobsDataGenerator(num_blobs, distance, stretch, angle)\n\n# retreive some points, store them as non-streaming data in memory\ndata_p=gen_p.get_streamed_features(100)\ndata_q=gen_q.get_streamed_features(data_p.get_num_vectors())\nprint \"Number of data is %d\" % data_p.get_num_vectors()\n\n# cast data in memory as streaming features again (which now stream from the in-memory data)\nstreaming_p=StreamingRealFeatures(data_p)\nstreaming_q=StreamingRealFeatures(data_q)\n\n# it is important to start the internal parser to avoid deadlocks\nstreaming_p.start_parser()\nstreaming_q.start_parser()\n\n# example to create mmd (note that m can be maximum the number of data in memory)\n\nmmd=LinearTimeMMD(GaussianKernel(10,1), streaming_p, streaming_q, data_p.get_num_vectors(), 1)\nprint \"Linear time MMD statistic: %.2f\" % mmd.compute_statistic()", "The Gaussian Approximation to the Null Distribution\nAs for any two-sample test in Shogun, bootstrapping can be used to approximate the null distribution. This results in a consistent, but slow test. The number of samples to take is the only parameter. Note that since <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html\">CLinearTimeMMD</a> operates on streaming features, new data is taken from the stream in every iteration.\nBootstrapping is not really necessary since there exists a fast and consistent estimate of the null-distribution. However, to ensure that any approximation is accurate, it should always be checked against bootstrapping at least once.\nSince both the null- and the alternative distribution of the linear time MMD are Gaussian with equal variance (and different mean), it is possible to approximate the null-distribution by using a linear time estimate for this variance. An unbiased, linear time estimator for\n$$\n\\var[\\mmd_l^2[\\mathcal{F},X,Y]]\n$$\ncan simply be computed by computing the empirical variance of\n$$\nk(x_{2i},x_{2i+1})+k(y_{2i},y_{2i+1})-k(x_{2i},y_{2i+1})-k(x_{2i+1},y_{2i}) \\qquad (1\\leq i\\leq m_2)\n$$\nA normal distribution with this variance and zero mean can then be used as an approximation for the null-distribution. This results in a consistent test and is very fast. However, note that it is an approximation and its accuracy depends on the underlying data distributions. It is a good idea to compare to the bootstrapping approach first to determine an appropriate number of samples to use. This number is usually in the tens of thousands.\n<a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html\">CLinearTimeMMD</a> allows to approximate the null distribution in the same pass as computing the statistic itself (in linear time). This should always be used in practice since seperate calls of computing statistic and p-value will operator on different data from the stream. 
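To keep the pieces of the last two sections together, here is a self-contained NumPy/SciPy sketch of the linear-time estimate and its Gaussian null approximation for 1-D data. It mirrors the formulas above rather than Shogun's internals, and the Gaussian kernel width of 1.0 is an arbitrary choice.

```python
import numpy as np
from scipy.stats import norm

def gaussian_kernel(a, b, width):
    return np.exp(-(a - b) ** 2 / width)

def linear_time_mmd_test(X, Y, width=1.0):
    """Linear-time MMD^2 estimate plus a Gaussian approximation of its null distribution."""
    m2 = min(len(X), len(Y)) // 2
    x_even, x_odd = X[0:2 * m2:2], X[1:2 * m2:2]
    y_even, y_odd = Y[0:2 * m2:2], Y[1:2 * m2:2]
    # one term h_i per block of four points, exactly as in the formula above
    h = (gaussian_kernel(x_even, x_odd, width) + gaussian_kernel(y_even, y_odd, width)
         - gaussian_kernel(x_even, y_odd, width) - gaussian_kernel(x_odd, y_even, width))
    statistic = h.mean()
    # empirical standard deviation of the mean of the h_i; under H0 the mean is zero
    null_std = h.std(ddof=1) / np.sqrt(m2)
    p_value = norm.sf(statistic, loc=0, scale=null_std)
    return statistic, p_value

rng = np.random.RandomState(3)
X_big = rng.randn(100000)
Y_big = rng.laplace(scale=np.sqrt(0.5), size=100000)
stat, p = linear_time_mmd_test(X_big, Y_big)
print("linear-time MMD^2 = %.5f, Gaussian-approximation p-value = %.4f" % (stat, p))
```

With Shogun's streaming implementation the same computation never needs all the data in memory at once.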
Below, we compute the test on a large amount of data (impossible to perform quadratic time MMD for this one as the kernel matrices cannot be stored in memory)", "mmd=LinearTimeMMD(kernel, gen_p, gen_q, m, block_size)\nprint \"m=%d samples from p and q\" % m\nprint \"Binary test result is: \" + (\"Rejection\" if mmd.perform_test(alpha) else \"No rejection\")\nprint \"P-value test result is %.2f\" % mmd.perform_test()", "Kernel Selection for the MMD -- Overview\n$\\DeclareMathOperator{\\argmin}{arg\\,min}\n\\DeclareMathOperator{\\argmax}{arg\\,max}$\nNow which kernel do we actually use for our tests? So far, we just plugged in arbritary ones. However, for kernel two-sample testing, it is possible to do something more clever.\nShogun's kernel selection methods for MMD based two-sample tests are all based around [3, 4]. For the <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html\">CLinearTimeMMD</a>, [3] describes a way of selecting the optimal kernel in the sense that the test's type II error is minimised. For the linear time MMD, this is the method of choice. It is done via maximising the MMD statistic divided by its standard deviation and it is possible for single kernels and also for convex combinations of them. For the <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CQuadraticTimeMMD.html\">CQuadraticTimeMMD</a>, the best method in literature is choosing the kernel that maximised the MMD statistic [4]. For convex combinations of kernels, this can be achieved via a $L2$ norm constraint. A detailed comparison of all methods on numerous datasets can be found in [5].\nMMD Kernel selection in Shogun always involves an implementation of the base class <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelection.html\">CMMDKernelSelection</a>, which defines the interface for kernel selection. If combinations of kernel should be considered, there is a sub-class <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionComb.html\">CMMDKernelSelectionComb</a>. In addition, it involves setting up a number of baseline kernels $\\mathcal{K}$ to choose from/combine in the form of a <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CCombinedKernel.html\">CCombinedKernel</a>. All methods compute their results for a fixed set of these baseline kernels. We later give an example how to use these classes after providing a list of available methods.\n\n\n<a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionMedian.html\">CMMDKernelSelectionMedian</a> Selects from a set <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html\">CGaussianKernel</a> instances the one whose width parameter is closest to the median of the pairwise distances in the data. The median is computed on a certain number of points from each distribution that can be specified as a parameter. Since the median is a stable statistic, one does not have to compute all pairwise distances but rather just a few thousands. This method a useful (and fast) heuristic that in many cases gives a good hint on where to start looking for Gaussian kernel widths. It is for example described in [1]. 
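As a rough illustration of the median heuristic, here is a short sketch -- not Shogun's CMMDKernelSelectionMedian -- that computes the median pairwise distance on a subsample of the pooled data. How the returned number maps onto a concrete kernel parameter depends on the chosen parametrisation, so treat it as a starting point only; the toy data are made up.

```python
import numpy as np
from scipy.spatial.distance import pdist

def median_heuristic_bandwidth(X, Y, max_points=1000, seed=0):
    """Median of pairwise Euclidean distances on a subsample of the pooled (n_samples, n_features) data."""
    rng = np.random.RandomState(seed)
    Z = np.vstack([X, Y])
    if len(Z) > max_points:
        # the median is a stable statistic, so a subsample of the pairwise distances is enough
        Z = Z[rng.choice(len(Z), max_points, replace=False)]
    return np.median(pdist(Z))

rng = np.random.RandomState(4)
X_toy = rng.randn(2000, 2)
Y_toy = rng.randn(2000, 2) * 1.5
print("median-heuristic bandwidth: %.3f" % median_heuristic_bandwidth(X_toy, Y_toy))
```

The number returned is only a hint for where to centre a grid of Gaussian kernel widths.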
Note that it may fail badly in selecting a good kernel for certain problems.\n\n\n<a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionMax.html\">CMMDKernelSelectionMax</a> Selects from a set of arbitrary baseline kernels a single one that maximises the used MMD statistic -- more specific its estimate.\n$$\nk^*=\\argmax_{k\\in\\mathcal{K}} \\hat \\eta_k,\n$$\nwhere $\\eta_k$ is an empirical MMD estimate for using a kernel $k$.\nThis was first described in [4] and was empirically shown to perform better than the median heuristic above. However, it remains a heuristic that comes with no guarantees. Since MMD estimates can be computed in linear and quadratic time, this method works for both methods. However, for the linear time statistic, there exists a better method.\n\n\n<a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionOpt.html\">CMMDKernelSelectionOpt</a> Selects the optimal single kernel from a set of baseline kernels. This is done via maximising the ratio of the linear MMD statistic and its standard deviation.\n$$\nk^=\\argmax_{k\\in\\mathcal{K}} \\frac{\\hat \\eta_k}{\\hat\\sigma_k+\\lambda},\n$$\nwhere $\\eta_k$ is a linear time MMD estimate for using a kernel $k$ and $\\hat\\sigma_k$ is a linear time variance estimate of $\\eta_k$ to which a small number $\\lambda$ is added to prevent division by zero.\nThese are estimated in a linear time way with the streaming framework that was described earlier. Therefore, this method is only available for <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLinearTimeMMD.html\">CLinearTimeMMD</a>. Optimal here means that the resulting test's type II error is minimised for a fixed type I error. Important: For this method to work, the kernel needs to be selected on different* data than the test is performed on. Otherwise, the method will produce wrong results.\n\n\n<a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionCombMaxL2.html\">CMMDKernelSelectionCombMaxL2</a> Selects a convex combination of kernels that maximises the MMD statistic. This is the multiple kernel analogous to <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionMax.html\">CMMDKernelSelectionMax</a>. This is done via solving the convex program\n$$\n\\boldsymbol{\\beta}^*=\\min_{\\boldsymbol{\\beta}} {\\boldsymbol{\\beta}^T\\boldsymbol{\\beta} : \\boldsymbol{\\beta}^T\\boldsymbol{\\eta}=\\mathbf{1}, \\boldsymbol{\\beta}\\succeq 0},\n$$\nwhere $\\boldsymbol{\\beta}$ is a vector of the resulting kernel weights and $\\boldsymbol{\\eta}$ is a vector of which each component contains a MMD estimate for a baseline kernel. See [3] for details. Note that this method is unable to select a single kernel -- even when this would be optimal.\nAgain, when using the linear time MMD, there are better methods available.\n\n\n<a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionCombOpt.html\">CMMDKernelSelectionCombOpt</a> Selects a convex combination of kernels that maximises the MMD statistic divided by its covariance. This corresponds to \\emph{optimal} kernel selection in the same sense as in class <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionOpt.html\">CMMDKernelSelectionOpt</a> and is its multiple kernel analogous. 
The convex program to solve is\n$$\n\\boldsymbol{\\beta}^*=\\min_{\\boldsymbol{\\beta}} (\\hat Q+\\lambda I) {\\boldsymbol{\\beta}^T\\boldsymbol{\\beta} : \\boldsymbol{\\beta}^T\\boldsymbol{\\eta}=\\mathbf{1}, \\boldsymbol{\\beta}\\succeq 0},\n$$\nwhere again $\\boldsymbol{\\beta}$ is a vector of the resulting kernel weights and $\\boldsymbol{\\eta}$ is a vector of which each component contains a MMD estimate for a baseline kernel. The matrix $\\hat Q$ is a linear time estimate of the covariance matrix of the vector $\\boldsymbol{\\eta}$ to whose diagonal a small number $\\lambda$ is added to prevent division by zero. See [3] for details. In contrast to <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionCombMaxL2.html\">CMMDKernelSelectionCombMaxL2</a>, this method is able to select a single kernel when this gives a lower type II error than a combination. In this sense, it contains <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMMDKernelSelectionOpt.html\">CMMDKernelSelectionOpt</a>.\n\n\nMMD Kernel Selection in Shogun\nIn order to use one of the above methods for kernel selection, one has to create a new instance of <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CCombinedKernel.html\">CCombinedKernel</a> append all desired baseline kernels to it. This combined kernel is then passed to the MMD class. Then, an object of any of the above kernel selection methods is created and the MMD instance is passed to it in the constructor. There are then multiple methods to call\n\n\ncompute_measures to compute a vector kernel selection criteria if a single kernel selection method is used. It will return a vector of selected kernel weights if a combined kernel selection method is used. For \\shogunclass{CMMDKernelSelectionMedian}, the method does throw an error.\n\n\nselect_kernel returns the selected kernel of the method. For single kernels this will be one of the baseline kernel instances. For the combined kernel case, this will be the underlying <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CCombinedKernel.html\">CCombinedKernel</a> instance where the subkernel weights are set to the weights that were selected by the method. \n\n\nIn order to utilise the selected kernel, it has to be passed to an MMD instance. We now give an example how to select the optimal single and combined kernel for the Gaussian Blobs dataset.\nWhat is the best kernel to use here? This is tricky since the distinguishing characteristics are hidden at a small length-scale. 
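Before handing the choice over to Shogun, here is a conceptual NumPy sketch of the single-kernel criterion behind CMMDKernelSelectionOpt: maximise the linear-time MMD estimate divided by its standard deviation over a grid of candidate widths. The data, the grid and the small regulariser are all made up, and in a real run the selection must be carried out on data that is not reused for the final test.

```python
import numpy as np

def gaussian_kernel(a, b, width):
    return np.exp(-np.sum((a - b) ** 2, axis=-1) / width)

def opt_criterion(X, Y, width, lam=1e-5):
    """Linear-time MMD estimate divided by its standard deviation, for one kernel width."""
    m2 = min(len(X), len(Y)) // 2
    h = (gaussian_kernel(X[0:2 * m2:2], X[1:2 * m2:2], width)
         + gaussian_kernel(Y[0:2 * m2:2], Y[1:2 * m2:2], width)
         - gaussian_kernel(X[0:2 * m2:2], Y[1:2 * m2:2], width)
         - gaussian_kernel(X[1:2 * m2:2], Y[0:2 * m2:2], width))
    return h.mean() / (h.std(ddof=1) / np.sqrt(m2) + lam)

# toy 2-D data standing in for a held-out selection portion of the dataset
rng = np.random.RandomState(5)
X_sel = rng.randn(5000, 2)
Y_sel = rng.randn(5000, 2).dot(np.array([[1.0, 0.0], [0.0, 3.0]]))

widths = [2.0 ** e for e in np.linspace(-5, 5, 10)]
scores = [opt_criterion(X_sel, Y_sel, w) for w in widths]
print("selected width: %.2f" % widths[int(np.argmax(scores))])
```

With that picture in mind, we let Shogun do the selection on the blobs data.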
Create some kernels to select the best from", "sigmas=[2**x for x in linspace(-5,5, 10)]\nprint \"Choosing kernel width from\", [\"{0:.2f}\".format(sigma) for sigma in sigmas]\ncombined=CombinedKernel()\nfor i in range(len(sigmas)):\n combined.append_kernel(GaussianKernel(10, sigmas[i]))\n\n# mmd instance using streaming features\nblock_size=1000\nmmd=LinearTimeMMD(combined, gen_p, gen_q, m, block_size)\n\n# optmal kernel choice is possible for linear time MMD\nselection=MMDKernelSelectionOpt(mmd)\n\n# select best kernel\nbest_kernel=selection.select_kernel()\nbest_kernel=GaussianKernel.obtain_from_generic(best_kernel)\nprint \"Best single kernel has bandwidth %.2f\" % best_kernel.get_width()", "Now perform two-sample test with that kernel", "alpha=0.05\nmmd=LinearTimeMMD(best_kernel, gen_p, gen_q, m, block_size)\nmmd.set_null_approximation_method(MMD1_GAUSSIAN);\np_value_best=mmd.perform_test();\n\nprint \"Bootstrapping: P-value of MMD test with optimal kernel is %.2f\" % p_value_best", "For the linear time MMD, the null and alternative distributions look different than for the quadratic time MMD as plotted above. Let's sample them (takes longer, reduce number of samples a bit). Note how we can tell the linear time MMD to smulate the null hypothesis, which is necessary since we cannot permute by hand as samples are not in memory)", "mmd=LinearTimeMMD(best_kernel, gen_p, gen_q, 5000, block_size)\nnum_samples=500\n\n# sample null and alternative distribution, implicitly generate new data for that\nnull_samples=zeros(num_samples)\nalt_samples=zeros(num_samples)\nfor i in range(num_samples):\n alt_samples[i]=mmd.compute_statistic()\n \n # tell MMD to merge data internally while streaming\n mmd.set_simulate_h0(True)\n null_samples[i]=mmd.compute_statistic()\n mmd.set_simulate_h0(False)", "And visualise again. Note that both null and alternative distribution are Gaussian, which allows the fast null distribution approximation and the optimal kernel selection", "plot_alt_vs_null(alt_samples, null_samples, alpha)", "Soon to come:\n\nTwo-sample tests on strings\nTwo-sample tests on audio data (quite fun)\nTesting for independence with the Hilbert Schmidt Independence Criterion\n\nReferences\n[1]: Gretton, A., Borgwardt, K. M., Rasch, M. J., Schölkopf, B., & Smola, A. (2012). A Kernel Two-Sample Test. Journal of Machine Learning Research, 13, 671–721.\n[2]: Gretton, A., Fukumizu, K., Harchaoui, Z., & Sriperumbudur, B. K. (2012). A fast, consistent kernel two-sample test. In Advances in Neural Information Processing Systems (pp. 673–681).\n[3]: Gretton, A., Sriperumbudur, B., Sejdinovic, D., Strathmann, H., Balakrishnan, S., Pontil, M., & Fukumizu, K. (2012). Optimal kernel choice for large-scale two-sample tests. In Advances in Neural Information Processing Systems.\n[4]: Sriperumbudur, B., Fukumizu, K., Gretton, A., Lanckriet, G. R. G., & Schölkopf, B. (2009). Kernel choice and classifiability for RKHS embeddings of probability distributions. In Advances in Neural Information Processing Systems\n[5]: Strathmann, H. (2012). M.Sc. Adaptive Large-Scale Kernel Two-Sample Testing. University College London." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
slipguru/ignet
notebooks/icing_tutorial.ipynb
bsd-2-clause
[ "%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport time\nimport warnings; warnings.filterwarnings(\"ignore\")\n\nfrom Bio.Seq import Seq\nfrom Bio.Alphabet import generic_dna\nfrom functools import partial\nfrom sklearn.cluster import DBSCAN, MiniBatchKMeans\nfrom sklearn.neighbors import BallTree\nfrom sklearn.preprocessing import LabelEncoder\n\nfrom icing.core import distances; reload(distances)\nfrom icing.core.distances import *\nfrom icing.externals.DbCore import parseAllele, gene_regex, junction_re\nfrom icing.utils import io\nfrom icing import inference", "ICING tutorial\n<hr>\nICING is a IG clonotype inference library developed in Python.\n<font color=\"red\"><b>NB:</b></font> This is <font color=\"red\"><b>NOT</b></font> a quickstart guide for ICING. This intended as a detailed tutorial on how ICING works internally. If you're only interested into using ICING, please refer to the Quickstart Manual on github, or the <font color=\"blue\">Quickstart section at the end of this notebook</font>.\nICING needs as input a file (TAB-delimited or CSV) which contains, in each row, a particular sequence.\nThe format used is the same as returned by Change-O's MakeDb.py script, which, starting from a IMGT results, it builds a single file with all the information extracted from IMGT starting from the RAW fasta sequences.\n0. Data loading\nLoad the dataset into a single pandas dataframe called 'df'.\nThe dataset MUST CONTAIN at least the following columns (NOT case-sensitive):\n- SEQUENCE_ID\n- V_CALL\n- J_CALL\n- JUNCTION\n- MUT (only if correct is True)", "db_file = '../examples/data/clones_100.100.tab'\n\n# dialect=\"excel\" for CSV or XLS files\n# for computational reasons, let's limit the dataset to the first 1000 sequences\nX = io.load_dataframe(db_file, dialect=\"excel-tab\")[:1000] \n\n# turn the following off if data are real\n# otherwise, assume that the \"SEQUENCE_ID\" field is composed as\n# \"[db]_[extension]_[id]_[id-true-clonotype]_[other-info]\"\n# See the example file for the format of the input.\nX['true_clone'] = [x[3] for x in X.sequence_id.str.split('_')] ", "1. Preprocessing step: data shrinking\nSpecially in CLL patients, most of the input sequences have the same V genes AND junction. In this case, it is possible to remove such sequences from the analysis (we just need to remember them after.)\nIn other words, we can collapse repeated sequences into a single one, which will weight as high as the number of sequences it represents.", "# group by junction and v genes\ngroups = X.groupby([\"v_gene_set_str\", \"junc\"]).groups.values()\nidxs = np.array([elem[0] for elem in groups]) # take one of them\nweights = np.array([len(elem) for elem in groups]) # assign its weight", "2. High-level group inference\nThe number of sequences at this point may be still very high, in particular when IGs are mutated and there is not much replication. However, we rely on the fact that IG similarity is mainly constrained on their junction length. Therefore, we infer high-level groups based on their junction lengths.\nThis is a fast and efficient step. Also, by exploiting MiniBatchKMeans, we can specify an upperbound on the number of clusters we want to obtain. However, contrary to the standard KMeans algorithm, in this case some clusters may vanish. 
If one is expected to have related IGs with very different junction lengths, however, it is reasonable to specify a low value of clusters.\nKeep in mind, however, that a low number of clusters correspond to higher computational workload of the method in the next phases.", "n_clusters = 50\n\nX_all = idxs.reshape(-1,1)\nkmeans = MiniBatchKMeans(n_init=100, n_clusters=min(n_clusters, X_all.shape[0]))\n\nlengths = X['junction_length'].values\nkmeans.fit(lengths[idxs].reshape(-1,1))", "3. Fine-grained group inference\nNow we have higih-level groups of IGs we have to extract clonotypes from.\nDivide the dataset based on the labels extracted from MiniBatchKMeans.\nFor each one of the cluster, find clonotypes contained in it using DBSCAN.\nThis algorithm allows us to use a custom metric between IGs.\n[<font color='blue'><b>ADVANCED</b></font>] To develop a custom metric, see the format of icing.core.distances.distance_dataframe. If you use a custom function, then you only need to put it as parameter of DBSCAN metric. Note that partial is required if the metric has more than 2 parameters. To be a valid metric for DBSCAN, the function must take ONLY two params (the two elements to compare). For this reason, the other arguments are pre-computed with partial in the following example.", "dbscan = DBSCAN(min_samples=20, n_jobs=-1, algorithm='brute', eps=0.2,\n metric=partial(distance_dataframe, X, \n junction_dist=distances.StringDistance(model='ham'),\n correct=True, tol=0))\n\ndbscan_labels = np.zeros_like(kmeans.labels_).ravel()\nfor label in np.unique(kmeans.labels_):\n idx_row = np.where(kmeans.labels_ == label)[0]\n \n X_idx = idxs[idx_row].reshape(-1,1).astype('float64')\n weights_idx = weights[idx_row]\n \n if idx_row.size == 1:\n db_labels = np.array([0])\n \n db_labels = dbscan.fit_predict(X_idx, sample_weight=weights_idx)\n \n if len(dbscan.core_sample_indices_) < 1:\n db_labels[:] = 0\n \n if -1 in db_labels:\n # this means that DBSCAN found some IG as noise. 
We choose to assign to the nearest cluster\n balltree = BallTree(\n X_idx[dbscan.core_sample_indices_],\n metric=dbscan.metric)\n noise_labels = balltree.query(\n X_idx[db_labels == -1], k=1, return_distance=False).ravel()\n # get labels for core points, then assign to noise points based\n # on balltree\n dbscan_noise_labels = db_labels[\n dbscan.core_sample_indices_][noise_labels]\n db_labels[db_labels == -1] = dbscan_noise_labels\n \n # hopefully, there are no noisy samples at this time\n db_labels[db_labels > -1] = db_labels[db_labels > -1] + np.max(dbscan_labels) + 1\n dbscan_labels[idx_row] = db_labels # + np.max(dbscan_labels) + 1\n\nlabels = dbscan_labels\n\n# new part: put together the labels\nlabels_ext = np.zeros(X.shape[0], dtype=int)\nlabels_ext[idxs] = labels\nfor i, list_ in enumerate(groups):\n labels_ext[list_] = labels[i]\nlabels = labels_ext", "Quickstart\n<hr>\n\nAll of the above-mentioned steps are integrated in ICING with a simple call to the class inference.ICINGTwoStep.\nThe following is an example of a working script.", "db_file = '../examples/data/clones_100.100.tab'\ncorrect = True\ntolerance = 0\n\nX = io.load_dataframe(db_file)[:1000]\n\n# turn the following off if data are real\nX['true_clone'] = [x[3] for x in X.sequence_id.str.split('_')] \ntrue_clones = LabelEncoder().fit_transform(X.true_clone.values)\n\nii = inference.ICINGTwoStep(\n model='nt', eps=0.2, method='dbscan', verbose=True,\n kmeans_params=dict(n_init=100, n_clusters=20),\n dbscan_params=dict(min_samples=20, n_jobs=-1, algorithm='brute',\n metric=partial(distance_dataframe, X, **dict(\n junction_dist=StringDistance(model='ham'),\n correct=correct, tol=tolerance))))\n\ntic = time.time()\nlabels = ii.fit_predict(X)\ntac = time.time() - tic\nprint(\"\\nElapsed time: %.1fs\" % tac)", "If you want to save the results:", "X['icing_clones (%s)' % ('_'.join(('StringDistance', str(eps), '0', 'corr' if correct else 'nocorr',\n \"%.4f\" % tac)))] = labels\n\nX.to_csv(db_file.split('/')[-1] + '_icing.csv')", "How is the result?", "from sklearn import metrics\ntrue_clones = LabelEncoder().fit_transform(X.true_clone.values)\n\nprint \"FMI: %.5f\" % (metrics.fowlkes_mallows_score(true_clones, labels))\nprint \"ARI: %.5f\" % (metrics.adjusted_rand_score(true_clones, labels))\nprint \"AMI: %.5f\" % (metrics.adjusted_mutual_info_score(true_clones, labels))\nprint \"NMI: %.5f\" % (metrics.normalized_mutual_info_score(true_clones, labels))\nprint \"Hom: %.5f\" % (metrics.homogeneity_score(true_clones, labels))\nprint \"Com: %.5f\" % (metrics.completeness_score(true_clones, labels))\nprint \"Vsc: %.5f\" % (metrics.v_measure_score(true_clones, labels))", "Is it better or worse than the result with everyone at the same time?", "labels = dbscan.fit_predict(np.arange(X.shape[0]).reshape(-1, 1))\n\nprint \"FMI: %.5f\" % metrics.fowlkes_mallows_score(true_clones, labels)\nprint \"ARI: %.5f\" % (metrics.adjusted_rand_score(true_clones, labels))\nprint \"AMI: %.5f\" % (metrics.adjusted_mutual_info_score(true_clones, labels))\nprint \"NMI: %.5f\" % (metrics.normalized_mutual_info_score(true_clones, labels))\nprint \"Hom: %.5f\" % (metrics.homogeneity_score(true_clones, labels))\nprint \"Com: %.5f\" % (metrics.completeness_score(true_clones, labels))\nprint \"Vsc: %.5f\" % (metrics.v_measure_score(true_clones, labels))", "This should be the same (or worse). We reduced the computational workload while maintaining or improving the result we would obtain without the step 1 and 2." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
pdh21/XID_plus
docs/notebooks/examples/XID+example_run_script.ipynb
mit
[ "XID+ Example Run Script\n(This is based on a Jupyter notebook, available in the XID+ package and can be interactively run and edited)\nXID+ is a probababilistic deblender for confusion dominated maps. It is designed to:\n\nUse a MCMC based approach to get FULL posterior probability distribution on flux\nProvide a natural framework to introduce additional prior information\nAllows more representative estimation of source flux density uncertainties\nProvides a platform for doing science with the maps (e.g XID+ Hierarchical stacking, Luminosity function from the map etc)\n\nCross-identification tends to be done with catalogues, then science with the matched catalogues.\nXID+ takes a different philosophy. Catalogues are a form of data compression. OK in some cases, not so much in others, i.e. confused images: catalogue compression loses correlation information. Ideally, science should be done without compression.\nXID+ provides a framework to cross identify galaxies we know about in different maps, with the idea that it can be extended to do science with the maps!!\nPhilosophy: \n\nbuild a probabilistic generative model for the SPIRE maps\nInfer model on SPIRE maps\n\nBayes Theorem\n$p(\\mathbf{f}|\\mathbf{d}) \\propto p(\\mathbf{d}|\\mathbf{f}) \\times p(\\mathbf{f})$\nIn order to carry out Bayesian inference, we need a model to carry out inference on.\nFor the SPIRE maps, our model is quite simple, with likelihood defined as:\n $L = p(\\mathbf{d}|\\mathbf{f}) \\propto |\\mathbf{N_d}|^{-1/2} \\exp\\big{ -\\frac{1}{2}(\\mathbf{d}-\\mathbf{Af})^T\\mathbf{N_d}^{-1}(\\mathbf{d}-\\mathbf{Af})\\big}$\nwhere:\n $\\mathbf{N_{d,ii}} =\\sigma_{inst.,ii}^2+\\sigma_{conf.}^2$\nSimplest model for XID+ assumes following:\n\nAll sources are known and have positive flux (fi)\nA global background (B) contributes to all pixels \nPRF is fixed and known\nConfusion noise is constant and not correlated across pixels\n\n\nBecause we are getting the joint probability distribution, our model is generative i.e. given parameters, we generate data and vica-versa\nCompared to discriminative model (i.e. neural network), which only obtains conditional probability distribution i.e. Neural network, give inputs, get output. Can't go other way'\nGenerative model is full probabilistic model. Allows more complex relationships between observed and target variables\nXID+ SPIRE\nXID+ applied to GALFORM simulation of COSMOS field\n\nSAM simulation (with dust) ran through SMAP pipeline_ similar depth and size as COSMOS\nUse galaxies with an observed 100 micron flux of gt. $50\\mathbf{\\mu Jy}$. 
Gives 64823 sources\nUninformative prior: uniform $0 - 10{^3} \\mathbf{mJy}$\n\nImport required modules", "from astropy.io import ascii, fits\nimport pylab as plt\n%matplotlib inline\nfrom astropy import wcs\n\n\nimport numpy as np\nimport xidplus\nfrom xidplus import moc_routines\nimport pickle", "Set image and catalogue filenames", "xidplus.__path__[0]\n\n#Folder containing maps\nimfolder=xidplus.__path__[0]+'/../test_files/'\n\npswfits=imfolder+'cosmos_itermap_lacey_07012015_simulated_observation_w_noise_PSW_hipe.fits.gz'#SPIRE 250 map\npmwfits=imfolder+'cosmos_itermap_lacey_07012015_simulated_observation_w_noise_PMW_hipe.fits.gz'#SPIRE 350 map\nplwfits=imfolder+'cosmos_itermap_lacey_07012015_simulated_observation_w_noise_PLW_hipe.fits.gz'#SPIRE 500 map\n\n\n#Folder containing prior input catalogue\ncatfolder=xidplus.__path__[0]+'/../test_files/'\n#prior catalogue\nprior_cat='lacey_07012015_MillGas.ALLVOLS_cat_PSW_COSMOS_test.fits'\n\n\n#output folder\noutput_folder='./'", "Load in images, noise maps, header info and WCS information", "#-----250-------------\nhdulist = fits.open(pswfits)\nim250phdu=hdulist[0].header\nim250hdu=hdulist[1].header\n\nim250=hdulist[1].data*1.0E3 #convert to mJy\nnim250=hdulist[2].data*1.0E3 #convert to mJy\nw_250 = wcs.WCS(hdulist[1].header)\npixsize250=3600.0*w_250.wcs.cd[1,1] #pixel size (in arcseconds)\nhdulist.close()\n#-----350-------------\nhdulist = fits.open(pmwfits)\nim350phdu=hdulist[0].header\nim350hdu=hdulist[1].header\n\nim350=hdulist[1].data*1.0E3 #convert to mJy\nnim350=hdulist[2].data*1.0E3 #convert to mJy\nw_350 = wcs.WCS(hdulist[1].header)\npixsize350=3600.0*w_350.wcs.cd[1,1] #pixel size (in arcseconds)\nhdulist.close()\n#-----500-------------\nhdulist = fits.open(plwfits)\nim500phdu=hdulist[0].header\nim500hdu=hdulist[1].header \nim500=hdulist[1].data*1.0E3 #convert to mJy\nnim500=hdulist[2].data*1.0E3 #convert to mJy\nw_500 = wcs.WCS(hdulist[1].header)\npixsize500=3600.0*w_500.wcs.cd[1,1] #pixel size (in arcseconds)\nhdulist.close()", "Load in catalogue you want to fit (and make any cuts)", "hdulist = fits.open(catfolder+prior_cat)\nfcat=hdulist[1].data\nhdulist.close()\ninra=fcat['RA']\nindec=fcat['DEC']\n# select only sources with 100micron flux greater than 50 microJy\nsgood=fcat['S100']>0.050\ninra=inra[sgood]\nindec=indec[sgood]", "XID+ uses Multi Order Coverage (MOC) maps for cutting down maps and catalogues so they cover the same area. It can also take in MOCs as selection functions to carry out additional cuts. Lets use the python module pymoc to create a MOC, centered on a specific position we are interested in. We will use a HEALPix order of 15 (the resolution: higher order means higher resolution), have a radius of 100 arcseconds centered around an R.A. of 150.74 degrees and Declination of 2.03 degrees.", "from astropy.coordinates import SkyCoord\nfrom astropy import units as u\nc = SkyCoord(ra=[150.74]*u.degree, dec=[2.03]*u.degree) \nimport pymoc\nmoc=pymoc.util.catalog.catalog_to_moc(c,100,15)", "XID+ is built around two python classes. A prior and posterior class. There should be a prior class for each map being fitted. It is initiated with a map, noise map, primary header and map header and can be set with a MOC. 
It also requires an input prior catalogue and point spread function.", "#---prior250--------\nprior250=xidplus.prior(im250,nim250,im250phdu,im250hdu, moc=moc)#Initialise with map, uncertianty map, wcs info and primary header\nprior250.prior_cat(inra,indec,prior_cat)#Set input catalogue\nprior250.prior_bkg(-5.0,5)#Set prior on background (assumes Gaussian pdf with mu and sigma)\n#---prior350--------\nprior350=xidplus.prior(im350,nim350,im350phdu,im350hdu, moc=moc)\nprior350.prior_cat(inra,indec,prior_cat)\nprior350.prior_bkg(-5.0,5)\n\n#---prior500--------\nprior500=xidplus.prior(im500,nim500,im500phdu,im500hdu, moc=moc)\nprior500.prior_cat(inra,indec,prior_cat)\nprior500.prior_bkg(-5.0,5)", "Set PRF. For SPIRE, the PRF can be assumed to be Gaussian with a FWHM of 18.15, 25.15, 36.3 '' for 250, 350 and 500 $\\mathrm{\\mu m}$ respectively. Lets use the astropy module to construct a Gaussian PRF and assign it to the three XID+ prior classes.", "#pixsize array (size of pixels in arcseconds)\npixsize=np.array([pixsize250,pixsize350,pixsize500])\n#point response function for the three bands\nprfsize=np.array([18.15,25.15,36.3])\n#use Gaussian2DKernel to create prf (requires stddev rather than fwhm hence pfwhm/2.355)\nfrom astropy.convolution import Gaussian2DKernel\n\n##---------fit using Gaussian beam-----------------------\nprf250=Gaussian2DKernel(prfsize[0]/2.355,x_size=101,y_size=101)\nprf250.normalize(mode='peak')\nprf350=Gaussian2DKernel(prfsize[1]/2.355,x_size=101,y_size=101)\nprf350.normalize(mode='peak')\nprf500=Gaussian2DKernel(prfsize[2]/2.355,x_size=101,y_size=101)\nprf500.normalize(mode='peak')\n\npind250=np.arange(0,101,1)*1.0/pixsize[0] #get 250 scale in terms of pixel scale of map\npind350=np.arange(0,101,1)*1.0/pixsize[1] #get 350 scale in terms of pixel scale of map\npind500=np.arange(0,101,1)*1.0/pixsize[2] #get 500 scale in terms of pixel scale of map\n\nprior250.set_prf(prf250.array,pind250,pind250)#requires PRF as 2d grid, and x and y bins for grid (in pixel scale)\nprior350.set_prf(prf350.array,pind350,pind350)\nprior500.set_prf(prf500.array,pind500,pind500)\n\nprint('fitting '+ str(prior250.nsrc)+' sources \\n')\nprint('using ' + str(prior250.snpix)+', '+ str(prior250.snpix)+' and '+ str(prior500.snpix)+' pixels')\n", "Before fitting, the prior classes need to take the PRF and calculate how much each source contributes to each pixel. This process provides what we call a pointing matrix. Lets calculate the pointing matrix for each prior class", "prior250.get_pointing_matrix()\nprior350.get_pointing_matrix()\nprior500.get_pointing_matrix()\n", "Default prior on flux is a uniform distribution, with a minimum and maximum of 0.00 and 1000.0 $\\mathrm{mJy}$ respectively for each source. 
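Putting the pieces together, here is a deliberately tiny sketch of the generative model being fitted: a pointing matrix mapping source fluxes to pixels, a flat background, Gaussian pixel noise with an instrumental and a confusion term, and a uniform flux prior. Every number is made up and none of this is XID+'s internal code; it is only meant to show where the flux prior's upper limit enters.

```python
import numpy as np

rng = np.random.RandomState(7)
n_pix, n_src = 400, 25

# toy "pointing matrix": entry (i, j) plays the role of the PRF value of source j at pixel i,
# here just sparse random numbers for illustration
A = rng.rand(n_pix, n_src) * (rng.rand(n_pix, n_src) < 0.05)

flux_upper = 50.0                                   # upper limit of the uniform flux prior (mJy)
f_true = rng.uniform(0.0, flux_upper, size=n_src)   # source fluxes drawn from that prior
background = -2.0                                   # global background level
sigma_inst, sigma_conf = 2.0, 1.0                   # instrumental and confusion noise

# simulated map: d = A f + B + noise
noise_var = sigma_inst ** 2 + sigma_conf ** 2
d = A.dot(f_true) + background + np.sqrt(noise_var) * rng.randn(n_pix)

def log_likelihood(f, bkg):
    residual = d - (A.dot(f) + bkg)
    return -0.5 * np.sum(residual ** 2 / noise_var)

print("log-likelihood at the true fluxes:          %.1f" % log_likelihood(f_true, background))
print("log-likelihood with all fluxes at the upper limit: %.1f"
      % log_likelihood(np.full(n_src, flux_upper), background))
```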
Running the function upper_lim_map resets the upper limit for each source to the maximum flux value (plus a 5-sigma background contribution) found in the map pixels to which that source contributes.", "prior250.upper_lim_map()\nprior350.upper_lim_map()\nprior500.upper_lim_map()", "Now fit using the XID+ interface to pystan", "%%time\nfrom xidplus.stan_fit import SPIRE\nfit=SPIRE.all_bands(prior250,prior350,prior500,iter=1000)\n", "Initialise the posterior class with the fit object from pystan, and save alongside the prior classes", "posterior=xidplus.posterior_stan(fit,[prior250,prior350,prior500])\nxidplus.save([prior250,prior350,prior500],posterior,'test')", "Alternatively, you can fit with the pyro backend.", "%%time\nfrom xidplus.pyro_fit import SPIRE\nfit_pyro=SPIRE.all_bands([prior250,prior350,prior500],n_steps=10000,lr=0.001,sub=0.1)\n\nposterior_pyro=xidplus.posterior_pyro(fit_pyro,[prior250,prior350,prior500])\nxidplus.save([prior250,prior350,prior500],posterior_pyro,'test_pyro')\n\nplt.semilogy(posterior_pyro.loss_history)", "You can also fit with the numpyro backend.", "%%time\nfrom xidplus.numpyro_fit import SPIRE\nfit_numpyro=SPIRE.all_bands([prior250,prior350,prior500])\n\nposterior_numpyro=xidplus.posterior_numpyro(fit_numpyro,[prior250,prior350,prior500])\nxidplus.save([prior250,prior350,prior500],posterior_numpyro,'test_numpyro')\n\nprior250.bkg", "CPU times: user 9min 17s, sys: 4.08 s, total: 9min 21s\nWall time: 3min 33s\nCPU times: user 59.6 s, sys: 511 ms, total: 1min\nWall time: 27.3 s" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mercybenzaquen/foundations-homework
databases_hw/db05/Homework_5_Graded.ipynb
mit
[ "graded = 10/10\nHomework #5\nThis homework presents a sophisticated scenario in which you must design a SQL schema, insert data into it, and issue queries against it.\nThe scenario\nIn the year 20XX, I have won the lottery and decided to leave my programming days behind me in order to pursue my true calling as a cat cafe tycoon. This webpage lists the locations of my cat cafes and all the cats that are currently in residence at these cafes.\nI'm interested in doing more detailed analysis of my cat cafe holdings and the cats that are currently being cared for by my cafes. For this reason, I've hired you to convert this HTML page into a workable SQL database. (Why don't I just do it myself? Because I am far too busy hanging out with adorable cats in all of my beautiful, beautiful cat cafes.)\nSpecifically, I want to know the answers to the following questions:\n\nWhat's the name of the youngest cat at any location?\nIn which zip codes can I find a lilac-colored tabby?\nWhat's the average weight of cats currently residing at any location (grouped by location)?\nWhich location has the most cats with tortoiseshell coats?\n\nBecause I'm not paying you very much, and because I am a merciful person who has considerable experience in these matters, I've decided to write the queries for you. (See below.) Your job is just to scrape the data from the web page, create the appropriate tables in PostgreSQL, and insert the data into those tables.\nBefore you continue, scroll down to \"The Queries\" below to examine the queries as I wrote them.\nProblem set #1: Scraping the data\nYour first goal is to create two data structures, both lists of dictionaries: one for the list of locations and one for the list of cats. You'll get these from scraping two &lt;table&gt; tags in the HTML: the first table has a class of cafe-list, the second has a class of cat-list.\nBefore you do anything else, though, execute the following cell to import Beautiful Soup and create a BeautifulSoup object with the content of the web page:", "from bs4 import BeautifulSoup\nfrom urllib.request import urlopen\nhtml = urlopen(\"http://static.decontextualize.com/cats.html\").read()\ndocument = BeautifulSoup(html, \"html.parser\")", "Let's tackle the list of cafes first. In the cell below, write some code that creates a list of dictionaries with information about each cafe, assigning it to the variable cafe_list. I've written some of the code for you; you just need to fill in the rest. The list should end up looking like this:\n[{'name': 'Hang In There', 'zip': '11237'},\n {'name': 'Independent Claws', 'zip': '11201'},\n {'name': 'Paws and Play', 'zip': '11215'},\n {'name': 'Tall Tails', 'zip': '11222'},\n {'name': 'Cats Meow', 'zip': '11231'}]", "cafe_list = list()\ncafe_table = document.find('table', {'class': 'cafe-list'})\ntbody = cafe_table.find('tbody')\nfor tr_tag in tbody.find_all('tr'):\n name_zip_dic= {}\n cat_name_tag = tr_tag.find ('td', {'class': 'name'})\n name_zip_dic['name']= cat_name_tag.string\n location_zipcode_tag = tr_tag.find ('td', {'class': 'zip'})\n name_zip_dic['zip'] = location_zipcode_tag.string\n cafe_list.append(name_zip_dic)\ncafe_list", "Great! In the following cell, write some code that creates a list of cats from the &lt;table&gt; tag on the page, storing them as a list of dictionaries in a variable called cat_list. Again, I've written a bit of the code for you. 
Expected output:\n[{'birthdate': '2015-05-20',\n 'color': 'black',\n 'locations': ['Paws and Play', 'Independent Claws*'],\n 'name': 'Sylvester',\n 'pattern': 'colorpoint',\n 'weight': 10.46},\n {'birthdate': '2000-01-03',\n 'color': 'cinnamon',\n 'locations': ['Independent Claws*'],\n 'name': 'Jasper',\n 'pattern': 'solid',\n 'weight': 8.06},\n {'birthdate': '2006-02-27',\n 'color': 'brown',\n 'locations': ['Independent Claws*'],\n 'name': 'Luna',\n 'pattern': 'tortoiseshell',\n 'weight': 10.88},\n[...many records omitted for brevity...]\n {'birthdate': '1999-01-09',\n 'color': 'white',\n 'locations': ['Cats Meow*', 'Independent Claws', 'Tall Tails'],\n 'name': 'Lafayette',\n 'pattern': 'tortoiseshell',\n 'weight': 9.3}]\nNote: Observe the data types of the values in each dictionary! Make sure to explicitly convert values retrieved from .string attributes of Beautiful Soup tag objects to strs using the str() function.", "cat_list = list()\ncat_table = document.find('table', {'class': 'cat-list'})\ntbody = cat_table.find('tbody')\nfor tr_tag in tbody.find_all('tr'):\n cat_dict = {}\n name_tag = tr_tag.find('td', {'class': 'name'})\n cat_dict['name']= name_tag.string\n birthdate_tag = tr_tag.find('td', {'class': 'birthdate'})\n cat_dict['birthdate']= birthdate_tag.string\n weight_tag = tr_tag.find('td', {'class': 'weight'})\n cat_dict['weight']= weight_tag.string\n color_tag = tr_tag.find('td', {'class': 'color'})\n cat_dict['color']= color_tag.string\n pattern_tag = tr_tag.find('td', {'class': 'pattern'})\n cat_dict['pattern']= pattern_tag.string\n locations_tag = tr_tag.find('td', {'class': 'locations'})\n cat_dict['locations']= locations_tag.string\n cat_list.append(cat_dict)\n\ncat_list", "Problem set #2: Designing the schema\nBefore you do anything else, use psql to create a new database for this homework assignment using the following command:\nCREATE DATABASE catcafes;\n\nIn the following cell, connect to the database using pg8000. (You may need to provide additional arguments to the .connect() method, depending on the distribution of PostgreSQL you're using.)", "import pg8000\nconn = pg8000.connect(database=\"catcafes\")", "Here's a cell you can run if something goes wrong and you need to rollback the current query session:", "conn.rollback()", "In the cell below, you're going to create three tables, necessary to represent the data you scraped above. I've given the basic framework of the Python code and SQL statements to create these tables. I've given the entire CREATE TABLE statement for the cafe table, but for the other two, you'll need to supply the field names and the data types for each column. If you're unsure what to call the fields, or what fields should be in the tables, consult the queries in \"The Queries\" below. Hints:\n\nMany of these fields will be varchars. Don't worry too much about how many characters you need—it's okay just to eyeball it.\nFeel free to use a varchar type to store the birthdate field. No need to dig too deep into PostgreSQL's date types for this particular homework assignment.\nCats and locations are in a many-to-many relationship. You'll need to create a linking table to represent this relationship. 
(That's why there's space for you to create three tables.)\nThe linking table will need a field to keep track of whether or not a particular cafe is the \"current\" cafe for a given cat.", "cursor = conn.cursor()\ncursor.execute(\"\"\"\nCREATE TABLE cafe (\n id serial,\n name varchar(40),\n zip varchar(5)\n)\n\"\"\")\n\ncursor.execute(\"\"\"\nCREATE TABLE cat (\n id serial,\n name varchar(60),\n birthdate varchar(40),\n color varchar(40),\n pattern varchar(40),\n weight numeric\n)\n\"\"\")\n\ncursor.execute(\"\"\"\nCREATE TABLE cat_cafe (\n cat_id integer,\n cafe_id integer,\n active boolean\n)\n\"\"\")\nconn.commit()", "After executing the above cell, issuing a \\d command in psql should yield something that looks like the following:\nList of relations\n Schema | Name | Type | Owner \n--------+-------------+----------+---------\n public | cafe | table | allison\n public | cafe_id_seq | sequence | allison\n public | cat | table | allison\n public | cat_cafe | table | allison\n public | cat_id_seq | sequence | allison\n(5 rows)\nIf something doesn't look right, you can always use the DROP TABLE command to drop the tables and start again. (You can also issue a DROP DATABASE catcafes command to drop the database altogether.) Don't worry if it takes a few tries to get it right—happens to the best and most expert among us. You'll probably have to drop the database and start again from scratch several times while completing this homework.\n\nNote: If you try to issue a DROP TABLE or DROP DATABASE command and psql seems to hang forever, it could be that PostgreSQL is waiting for current connections to close before proceeding with your command. To fix this, create a cell with the code conn.close() in your notebook and execute it. After the DROP commands have completed, make sure to run the cell containing the pg8000.connect() call again.\n\nProblem set #3: Inserting the data\nIn the cell below, I've written the code to insert the cafes into the cafe table, using data from the cafe_list variable that we made earlier. If the code you wrote to create that table was correct, the following cell should execute without error or incident. Execute it before you continue.", "cafe_name_id_map = {}\nfor item in cafe_list:\n cursor.execute(\"INSERT INTO cafe (name, zip) VALUES (%s, %s) RETURNING id\",\n [str(item['name']), str(item['zip'])])\n cafe_rowid = cursor.fetchone()[0]\n cafe_name_id_map[str(item['name'])] = cafe_rowid\nconn.commit()", "Issuing SELECT * FROM cafe in the psql client should yield something that looks like this:\nid | name | zip \n----+-------------------+-------\n 1 | Hang In There | 11237\n 2 | Independent Claws | 11201\n 3 | Paws and Play | 11215\n 4 | Tall Tails | 11222\n 5 | Cats Meow | 11231\n(5 rows)\n(The id values may be different depending on how many times you've cleaned the table out with DELETE.)\nNote that the code in the cell above created a dictionary called cafe_name_id_map. What's in it? Let's see:", "cafe_name_id_map", "The dictionary maps the name of the cat cafe to its ID in the database. You'll need these values later when you're adding records to the linking table (cat_cafe).\nNow the tricky part. (Yes, believe it or not, this is the tricky part. The other stuff has all been easy by comparison.) In the cell below, write the Python code to insert each cat's data from the cat_list variable (created in Problem Set #1) into the cat table. The code should also insert the relevant data into the cat_cafe table. 
Hints:\n\nYou'll need to get the id of each cat record using the RETURNING clause of the INSERT statement and the .fetchone() method of the cursor object.\nHow do you know whether or not the current location is the \"active\" location for a particular cat? The page itself contains some explanatory text that might be helpful here. You might need to use some string checking and manipulation functions in order to make this determination and transform the string as needed.\nThe linking table stores an ID only for both the cat and the cafe. Use the cafe_name_id_map dictionary to get the id of the cafes inserted earlier.", "import re\n\ncat_name_id_map = {}\nfor cat in cat_list:\n cursor.execute(\"INSERT INTO cat (name, birthdate, weight, color, pattern) VALUES (%s, %s, %s, %s, %s) RETURNING id\",\n [str(cat['name']), str(cat['birthdate']) , str(cat['weight']), str(cat['color']), str(cat['pattern'])])\n cat_rowid = cursor.fetchone()[0]\n cat_name_id_map[str(cat['name'])] = cat_rowid \nconn.commit()\n\n\n\ncat_name_id_map = {}\nfor cat in cat_list:\n cursor.execute(\"INSERT INTO cat (name, birthdate, weight, color, pattern) VALUES (%s, %s, %s, %s, %s) RETURNING id\",\n [str(cat['name']), str(cat['birthdate']) , str(cat['weight']), str(cat['color']), str(cat['pattern'])])\n cat_rowid = cursor.fetchone()[0]\n cat_name_id_map[str(cat['name'])] = cat_rowid \n\n cat_id = cat_name_id_map[str(cat['name'])]\n \n \n locations_str = cat['locations']\n locations_list = locations_str.split(',')\n \n for item in locations_list: \n match = re.search((r\"[*]\"), item)\n \n if match: \n active = 't'\n else:\n active = 'f'\n \n cafe_id = cafe_name_id_map[item.replace(\"*\", \"\").strip()]\n\n cursor.execute(\"INSERT INTO cat_cafe (cat_id, cafe_id, active) VALUES (%s, %s, %s)\",[cat_id, cafe_id, active])\nconn.commit()", "Issuing a SELECT * FROM cat LIMIT 10 in psql should yield something that looks like this:\nid | name | birthdate | weight | color | pattern \n----+-----------+------------+--------+----------+---------------\n 1 | Sylvester | 2015-05-20 | 10.46 | black | colorpoint\n 2 | Jasper | 2000-01-03 | 8.06 | cinnamon | solid\n 3 | Luna | 2006-02-27 | 10.88 | brown | tortoiseshell\n 4 | Georges | 2015-08-13 | 9.40 | white | tabby\n 5 | Millie | 2003-09-13 | 9.27 | red | bicolor\n 6 | Lisa | 2009-07-30 | 8.84 | cream | colorpoint\n 7 | Oscar | 2011-12-15 | 8.44 | cream | solid\n 8 | Scaredy | 2015-12-30 | 8.83 | lilac | tabby\n 9 | Charlotte | 2013-10-16 | 9.54 | blue | tabby\n 10 | Whiskers | 2011-02-07 | 9.47 | white | colorpoint\n(10 rows)\nAnd a SELECT * FROM cat_cafe LIMIT 10 in psql should look like this:\ncat_id | cafe_id | active \n--------+---------+--------\n 1 | 3 | f\n 1 | 2 | t\n 2 | 2 | t\n 3 | 2 | t\n 4 | 4 | t\n 4 | 1 | f\n 5 | 3 | t\n 6 | 1 | t\n 7 | 1 | t\n 7 | 5 | f\n(10 rows)\nAgain, the exact values for the ID columns might be different, depending on how many times you've deleted and dropped the tables.\nThe Queries\nOkay. To verify your work, run the following queries and check their output. If you've correctly scraped the data and imported it into SQL, running the cells should produce exactly the expected output, as indicated. If not, then you performed one of the steps above incorrectly; check your work and try again. (Note: Don't modify these cells, just run them! 
This homework was about scraping and inserting data, not querying it.)\nWhat's the name of the youngest cat at any location?\nExpected output: Scaredy", "cursor.execute(\"SELECT max(birthdate) FROM cat\")\nbirthdate = cursor.fetchone()[0]\ncursor.execute(\"SELECT name FROM cat WHERE birthdate = %s\", [birthdate])\nprint(cursor.fetchone()[0])", "In which zip codes can I find a lilac-colored tabby?\nExpected output: 11237, 11215", "cursor.execute(\"\"\"SELECT DISTINCT(cafe.zip)\n FROM cat\n JOIN cat_cafe ON cat.id = cat_cafe.cat_id\n JOIN cafe ON cafe.id = cat_cafe.cafe_id\n WHERE cat.color = 'lilac' AND cat.pattern = 'tabby' AND cat_cafe.active = true\n\"\"\")\nprint(', '.join([x[0] for x in cursor.fetchall()]))", "What's the average weight of cats currently residing at all locations?\nExpected output:\nIndependent Claws: 9.33\nPaws and Play: 9.28\nTall Tails: 9.82\nHang In There: 9.25\nCats Meow: 9.76", "cursor.execute(\"\"\"\n SELECT cafe.name, avg(cat.weight)\n FROM cat\n JOIN cat_cafe ON cat.id = cat_cafe.cat_id\n JOIN cafe ON cafe.id = cat_cafe.cafe_id\n WHERE cat_cafe.active = true\n GROUP BY cafe.name\n \"\"\")\nfor rec in cursor.fetchall():\n print(rec[0]+\":\", \"%0.2f\" % rec[1])", "Which location has the most cats with tortoiseshell coats?\nExpected output: Independent Claws", "cursor.execute(\"\"\"\n SELECT cafe.name\n FROM cat\n JOIN cat_cafe ON cat.id = cat_cafe.cat_id\n JOIN cafe ON cafe.id = cat_cafe.cafe_id\n WHERE cat_cafe.active = true AND cat.pattern = 'tortoiseshell'\n GROUP BY cafe.name\n ORDER BY count(cat.name) DESC\n LIMIT 1\n\"\"\")\nprint(cursor.fetchone()[0])", "Did they all work? Great job! You're done." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
NYUDataBootcamp/Materials
Code/notebooks/bootcamp_format_plotting.ipynb
mit
[ ".format() and matplotlib styles\nIntro to when we would like to use format in printing", "# Import Snap's share price from Google\n# google 'pandas remote data access' --> pandas-datareader.readthedocs...\n\n%matplotlib inline\nimport pandas as pd\nimport pandas_datareader.data as web\nimport matplotlib.pyplot as plt\n\nimport datetime", "We will want to run the notebook in the future with updated values. How can we do this? Make the dates automatically updated.", "start = datetime.datetime(2017, 3, 2) # the day Snap went public\nend = datetime.date.today() # datetime.date.today\n\nsnap = web.DataReader(\"SNAP\", 'google', start, end)\n\nsnap\n\nsnap.index.tolist()", ".format()\nWe want to print something with systematic changes in the text.\nSuppose we want to print out the following information:\n'On day X Snap closed at VALUE Y and the volume was Z.'", "# How did we do this before?\n\nfor index in snap.index:\n print('On day', index, 'Snap closed at', snap['Close'][index], 'and the volume was', snap['Volume'][index], '.')\n", "This looks aweful. We want to cut the day and express the volume in millions.", "# express Volume in millions\nsnap['Volume'] = snap['Volume']/10**6\n\nsnap", "The .format() method\nwhat is format and how does it work? Google and find a good link", "print('Today is {}.'.format(datetime.date.today()))\n\nfor index in snap.index:\n print('On {} Snap closed at ${} and the volume was {} million.'.format(index, snap['Close'][index], snap['Volume'][index]))\n\n\nfor index in snap.index:\n print('On {:.10} Snap closed at ${} and the volume was {:.1f} million.'.format(str(index), snap['Close'][index], snap['Volume'][index]))\n", "Check Olson's blog and style recommendation", "fig, ax = plt.subplots() #figsize=(8,5))\n\nsnap['Close'].plot(ax=ax, grid=True, style='o', alpha=.6)\nax.set_xlim([snap.index[0]-datetime.timedelta(days=1), snap.index[-1]+datetime.timedelta(days=1)])\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.yaxis.set_ticks_position('left')\nax.xaxis.set_ticks_position('bottom')\nax.vlines(snap.index, snap['Low'], snap['High'], alpha=.2, lw=.9)\nax.set_ylabel('SNAP share price', fontsize=14)\nax.set_xlabel('Date', fontsize=14)\nplt.show()\n\nstart_w = datetime.datetime(2008, 6, 8)\noilwater = web.DataReader(['BP', 'AWK'], 'google', start_w, end)\n\noilwater.describe\n\ntype(oilwater[:,:,'AWK'])\n\nwater = oilwater[:, :, 'AWK']\noil = oilwater[:, :, 'BP']\n\n#import seaborn as sns\n#import matplotlib as mpl\n#mpl.rcParams.update(mpl.rcParamsDefault)\n\nplt.style.use('seaborn-notebook')\nplt.rc('font', family='serif')\n\ndeepwater = datetime.datetime(2010, 4, 20)\n\nfig, ax = plt.subplots(figsize=(8, 5))\nwater['Close'].plot(ax=ax, label='AWK', lw=.7) #grid=True,\noil['Close'].plot(ax=ax, label='BP', lw=.7) #grid=True, \nax.yaxis.grid(True)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.yaxis.set_ticks_position('left')\nax.xaxis.set_ticks_position('bottom')\nax.vlines(deepwater, 0, 100, linestyles='dashed', alpha=.6)\nax.text(deepwater, 70, 'Deepwater catastrophe', horizontalalignment='center')\nax.set_ylim([0, 100])\nax.legend(bbox_to_anchor=(1.2, .9), frameon=False)\nplt.show()\n\nprint(plt.style.available)\n\nfig, ax = plt.subplots(figsize=(8, 5))\nwater['AWK_pct_ch'] = water['Close'].diff().cumsum()/water['Close'].iloc[0]\noil['BP_pct_ch'] = oil['Close'].diff().cumsum()/oil['Close'].iloc[0]\n#water['Close'].pct_change().cumsum().plot(ax=ax, label='AWK')\nwater['AWK_pct_ch'].plot(ax=ax, 
label='AWK', lw=.7)\n#oil['Close'].pct_change().cumsum().plot(ax=ax, label='BP')\noil['BP_pct_ch'].plot(ax=ax, label='BP', lw=.7)\nax.yaxis.grid(True)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.yaxis.set_ticks_position('left')\nax.xaxis.set_ticks_position('bottom')\nax.vlines(deepwater, -1, 3, linestyles='dashed', alpha=.6)\nax.text(deepwater, 1.2, 'Deepwater catastrophe', horizontalalignment='center')\nax.set_ylim([-1, 3])\nax.legend(bbox_to_anchor=(1.2, .9), frameon=False)\nax.set_title('Percentage change relative to {:.10}\\n'.format(str(start_w)), fontsize=14, loc='left')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
SheffieldML/notebook
GPy/config.ipynb
bsd-3-clause
[ "GPy Configuration Settings\nThe GPy default configuration settings are stored in a file in the main GPy directory called defaults.cfg. These settings should not be changed in this file. The file gives you an overview of what the configuration settings are for the install.", "# This is the default configuration file for GPy\n\n# Do note edit this file.\n\n# For machine specific changes (i.e. those specific to a given installation) edit GPy/installation.cfg\n\n# For user specific changes edit $HOME/.gpy_user.cfg\n[parallel]\n# Enable openmp support. This speeds up some computations, depending on the number\n# of cores available. Setting up a compiler with openmp support can be difficult on\n# some platforms, hence by default it is off.\nopenmp=False\n\n[anaconda]\n# if you have an anaconda python installation please specify it here.\ninstalled = False\nlocation = None\nMKL = False # set this to true if you have the MKL optimizations installed", "Machine Dependent Options\nEach installation of GPy also creates an installation.cfg file. This file should include any installation specific settings for your GPy installation. For example, if a particular machine is set up to run OpenMP then the installation.cfg file should contain", "# This is the local installation configuration file for GPy\n[parallel]\nopenmp=True", "User Dependent Options\nOptions that are user dependent should be installed in the user's home directory in .gpy_user.cfg." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
gaufung/Data_Analytics_Learning_Note
python-statatics-tutorial/basic-theme/python-language/Function.ipynb
mit
[ "函数\n1 默认参数\n函数的参数中如果有默认参数,那么函数在定义的时候将被计算而不是等到函数被调用的时候", "bigx = 10\ndef double_times(x = bigx):\n return x * 2\nbigx = 1000\ndouble_times()", "在可变的集合类型中(list和dictionary)中,如果默认参数为该类型,那么所有的操作调用该函数的操作将会发生变化", "def foo(values, x=[]):\n for value in values:\n x.append(value)\n return x\nfoo([0,1,2])\n\nfoo([4,5])\n\ndef foo_fix(values, x=[]):\n if len(x) != 0:\n x = []\n for value in values:\n x.append(value)\n return x\nfoo_fix([0,1,2])\n\nfoo_fix([4,5])", "2 global 参数", "x = 5 \ndef set_x(y):\n x = y\n print 'inner x is {}'.format(x)\nset_x(10)\nprint 'global x is {}'.format(x)", "x = 5 表明为global变量,但是在set_x函数内部中,出现了x,但是其为局部变量,因此全局变量x并没有发生改变。", "def set_global_x(y):\n global x\n x = y \n print 'global x is {}'.format(x)\nset_global_x(10)\nprint 'global x now is {}'.format(x)", "通过添加global关键字,使得global变量x发生了改变。\n3 Exercise\nFibonacci sequence\n$F_{n+1}=F_{n}+F_{n-1}$ 其中 $F_{0}=0,F_{1}=1,F_{2}=1,F_{3}=2 \\cdots$\n\n递归版本\n算法时间时间复杂度高达 $T(n)=n^2$", "def fib_recursive(n):\n if n == 0 or n == 1:\n return n\n else:\n return fib_recursive(n-1) + fib_recursive(n-2)\nfib_recursive(10)", "迭代版本\n算法时间复杂度为$T(n)=n$", "def fib_iterator(n):\n g = 0\n h = 1\n i = 0\n while i < n:\n h = g + h \n g = h - g\n i += 1\n return g\nfib_iterator(10)", "迭代器版本\n使用 yield 关键字可以实现迭代器", "def fib_iter(n):\n g = 0\n h = 1 \n i = 0\n while i < n:\n h = g + h\n g = h -g\n i += 1\n yield g\nfor value in fib_iter(10):\n print value,", "矩阵求解法\n$$\\begin{bmatrix}F_{n+1}\\F_{n}\\end{bmatrix}=\\begin{bmatrix}1&1\\1&0\\end{bmatrix}\\begin{bmatrix}F_{n}\\F_{n-1}\\end{bmatrix}$$ \n令$u_{n+1}=Au_{n}$ 其中 $u_{n+1}=\\begin{bmatrix}F_{n+1}\\F_{n}\\end{bmatrix}$\n通过矩阵的迭代求解\n$u_{n+1}=A^{n}u_{0}$,其中 $u_{0}=\\begin{bmatrix}1 \\0 \\end{bmatrix}$,对于$A^n$ 可以通过 $(A^{n/2})^{2}$ 方式求解,使得算法时间复杂度达到 $log(n)$", "import numpy as np\na = np.array([[1,1],[1,0]])\ndef pow_n(n):\n if n == 1:\n return a \n elif n % 2 == 0:\n half = pow_n(n/2)\n return half.dot(half)\n else:\n half = pow_n((n-1)/2)\n return a.dot(half).dot(half)\ndef fib_pow(n):\n a_n = pow_n(n)\n u_0 = np.array([1,0])\n return a_n.dot(u_0)[1]\nfib_pow(10)", "Quick Sort", "def quick_sort(array):\n if len(array) < 2:\n return array\n else:\n pivot = array[0]\n left = [item for item in array[1:] if item < pivot]\n right = [item for item in array[1:] if item >= pivot]\n return quick_sort(left)+[pivot]+quick_sort(right)\nquick_sort([10,11,3,21,9,22])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
beangoben/HistoriaDatos_Higgs
Dia1/.ipynb_checkpoints/2_PandasIntro-checkpoint.ipynb
gpl-2.0
[ "Hola Pandas!\nPandas = Manejo de informacion facil!\nQue es pandas?\n\nPandas es un libreria de alto rendimiento, facil de usar para manejar estructuras de datos y analizarlas.\nChecate mas en :\n\nPagina oficial : http://pandas.pydata.org/\nUn tour de diez minutos con Pandas: http://vimeo.com/59324550\n\nPara usar pandas, solo tiene que importar el modulo ..tambien te conviene importar numpy y matplotlib..juega\nn muy bien con pandas", "import pandas as pd\nimport numpy as np # modulo de computo numerico\nimport matplotlib.pyplot as plt # modulo de graficas\n# esta linea hace que las graficas salgan en el notebook\n%matplotlib inline", "Y yo para que quiero eso? De que sirve pandas?\nPandas te sirve si quieres:\n\nTrabajar con datos de manera facil.\nExplorar un conjunto de datos de manera rapida, enterder los datos que tienes.\nFacilmente manipular informacion, por ejemplo sacar estadisticas.\nGraficas patrones y distribuciones de datos.\nTrabajar con Exceles, base de datos, sin tener que suar esas herramientas.\n\nY mucho mas...\nEl DataFrame en Pandas\nUna estructura de datos en Pandas se llama un DataFrame, con el manejamos todos los datos y aplicamos tranformaciones.\nAsi creamos un DataFrame vacio:", "df.head()", "No nos sirve nada vacio, entonces agreguemos le informacion!\nLLenando informacion con un Dataframe\n\nSituacion:\nSuponte que eres un taquero y quieres hacer un dataframe de cuantos tacos vendes en una semana igual y para ver que tacos son mas populares y echarle mas ganas en ellos, \nAsumiremos:\n\nQue vende tacos de Pastor, Tripa y Chorizo\nHay 7 dias en una semana de Lunes a Domingo (obvio)\nCrearemos el numero de tacos como numeros enteros aleatorios (np.random.randint)\n\n Ojo! Si ponemos la variable de un dataframe al final de una celda no saldra una tabla con los datos, eah!", "df['Pastor']=np.random.randint(100, size=7)\ndf['Tripas']=np.random.randint(100, size=7)\ndf['Chorizo']=np.random.randint(100, size=7)\n\ndf.index=['Lunes','Martes','Miercoles','Jueves','Viernes','Sabado','Domingo']\ndf.", "Jugando con el Dataframe!\nEstadisticas\nYa con teniendo un dataframe podemos hacer muchas cosas, por ejemplo sacar estadisticas, de medias, desviaciones estandares, cuantos elementos:", "df.describe()", "pero talvez solo queramos estadisticas de Pastor, entonces seria:", "df['Chorizo'].describe()", "o talvez solo nos interese del Lunes:\n Ojo! Tenemos que usar .ix para seleccionar un renglon", "df.ix['Lunes']", "Grafica de cajas 'Boxplot'\nUn boxplot nos da mucha informacion:\n* Cada caja esta centrada en la mediana\n* Tiene de alto la desviacion estandar entonces nos dice donde se encuentra el 60% de los datos.\n* Tiene los minimos y maximos de cada dato.", "df.boxplot()\nplt.title(\"Boxplot\")\nplt.show()", "Combinando columnas\nQue tal si queremos saber cuantos tacos vendimos en total?\nPues solamente sumamos las columnas:", "df['Tacos Total']=df['Pastor']+df['Tripas']+df['Chorizo']\ndf", "Borrando columnas\nAveces simplemente queremos reducir el numero de datos, entonces podemos usar el drop:", "df=df.drop(\"Chorizo\",axis=1)\ndf", "Exportando a otro formato\nAveces queremos guardar nuestros datos (excel, csv, sql database, pickle, json) , por ejemplo a un excel:", "df.to_csv(\"Tacos.csv\")", "Leyendo un DataFrame de otro formato\nO alrevez, queremos crear un dataframe apartir de un archivo:", "df=pd.read_csv(\"Tacos.csv\")", "Ejercicios:\n\nInventa un Dataframe de tu propio negocio, que quieres vender? 
Usa numeros aleatorios.\nSaca estadisticas de cada columna.\nCrea una nueva columna que toma en cuenta la otra informacion. (Promedio, etc..)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
babraham123/script-runner
notebooks/pandas_tutorial.ipynb
mit
[ "{\n \"nb_display_name\": \"Intro to Pandas\",\n \"nb_description\": \"A detailed tutorial on Pandas, a Python numerical library\",\n \"nb_filename\": \"pandas_tutorial.ipynb\",\n \"params\":[\n {\n \"name\":\"testnum\",\n \"display_name\":\"Test num\",\n \"description\":\"\",\n \"input_type\":\"integer\"\n },\n {\n \"name\":\"teststr\",\n \"display_name\":\"Test str\",\n \"description\":\"\",\n \"input_type\":\"string\"\n }\n ]\n}", "Lesson 1\nCreate Data - We begin by creating our own data set for analysis. This prevents the end user reading this tutorial from having to download any files to replicate the results below. We will export this data set to a text file so that you can get some experience pulling data from a text file.\nGet Data - We will learn how to read in the text file. The data consist of baby names and the number of baby names born in the year 1880.\nPrepare Data - Here we will simply take a look at the data and make sure it is clean. By clean I mean we will take a look inside the contents of the text file and look for any anomalities. These can include missing data, inconsistencies in the data, or any other data that seems out of place. If any are found we will then have to make decisions on what to do with these records.\nAnalyze Data - We will simply find the most popular name in a specific year.\nPresent Data - Through tabular data and a graph, clearly show the end user what is the most popular name in a specific year. \nThe pandas library is used for all the data analysis excluding a small piece of the data presentation section. The matplotlib library will only be needed for the data presentation section. Importing the libraries is the first step we will take in the lesson.", "# Import all libraries needed for the tutorial\n\n# General syntax to import specific functions in a library: \n##from (library) import (specific library function)\nfrom pandas import DataFrame, read_csv\n\n# General syntax to import a library but no functions: \n##import (library) as (give the library a nickname/alias)\nimport matplotlib.pyplot as plt\nimport pandas as pd #this is how I usually import pandas\nimport sys #only needed to determine Python version number\n\n# Enable inline plotting\n%matplotlib inline\n\nprint 'Python version ' + sys.version\nprint 'Pandas version ' + pd.__version__", "Create Data\nThe data set will consist of 5 baby names and the number of births recorded for that year (1880).", "# The inital set of baby names and bith rates\nnames = ['Bob','Jessica','Mary','John','Mel']\nbirths = [968, 155, 77, 578, 973]\n", "To merge these two lists together we will use the zip function.", "zip?\n\nBabyDataSet = zip(names,births)\nBabyDataSet", "We are basically done creating the data set. We now will use the pandas library to export this data set into a csv file. \ndf will be a DataFrame object. You can think of this object holding the contents of the BabyDataSet in a format similar to a sql table or an excel spreadsheet. Lets take a look below at the contents inside df.", "df = pd.DataFrame(data = BabyDataSet, columns=['Names', 'Births'])\ndf", "Export the dataframe to a csv file. We can name the file births1880.csv. The function to_csv will be used to export the file. The file will be saved in the same location of the notebook unless specified otherwise.", "df.to_csv?", "The only parameters we will use is index and header. Setting these parameters to True will prevent the index and header names from being exported. 
Change the values of these parameters to get a better understanding of their use.", "df.to_csv('births1880.csv',index=False,header=False)", "Get Data\nTo pull in the csv file, we will use the pandas function read_csv. Let us take a look at this function and what inputs it takes.", "read_csv?", "Even though this function has many parameters, we will simply pass it the location of the text file. \nLocation = C:\\Users\\ENTER_USER_NAME.xy\\startups\\births1880.csv \nNote: Depending on where you save your notebooks, you may need to modify the location above.", "Location = r'C:\\Users\\david\\notebooks\\pandas\\births1880.csv'\ndf = pd.read_csv(Location)", "Notice the r before the string. Since backslashes are special characters, prefixing the string with an r marks it as a raw string, so the path is not interpreted as escape sequences.", "df", "This brings us to our first problem of the exercise. The read_csv function treated the first record in the csv file as the header names. This is obviously not correct since the text file did not provide us with header names. \nTo correct this we will pass the header parameter to the read_csv function and set it to None (which means null in Python).", "df = pd.read_csv(Location, header=None)\ndf", "If we wanted to give the columns specific names, we would have to pass another parameter called names. We can also omit the header parameter.", "df = pd.read_csv(Location, names=['Names','Births'])\ndf", "You can think of the numbers [0,1,2,3,4] as the row numbers in an Excel file. In pandas these are part of the index of the dataframe. You can think of the index as the primary key of a sql table with the exception that an index is allowed to have duplicates. \n[Names, Births] can be thought of as column headers similar to the ones found in an Excel spreadsheet or sql database.\nDelete the csv file now that we are done using it.", "import os\nos.remove(Location)", "Prepare Data\nThe data we have consists of baby names and the number of births in the year 1880. We already know that we have 5 records and none of the records are missing (non-null values). \nThe Names column at this point is of no concern since it most likely is just composed of alpha numeric strings (baby names). There is a chance of bad data in this column but we will not worry about that at this point of the analysis. The Births column should just contain integers representing the number of babies born in a specific year with a specific name. We can check if all of the data is of the data type integer. It would not make sense to have this column have a data type of float. I would not worry about any possible outliers at this point of the analysis. \nRealize that aside from the check we did on the \"Names\" column, briefly looking at the data inside the dataframe should be as far as we need to go at this stage of the game. As we continue in the data analysis life cycle we will have plenty of opportunities to find any issues with the data set.", "# Check data type of the columns\ndf.dtypes\n\n# Check data type of Births column\ndf.Births.dtype", "As you can see the Births column is of type int64, thus no floats (decimal numbers) or alpha numeric characters will be present in this column.\nAnalyze Data\nTo find the most popular name or the baby name with the highest birth rate, we can do one of the following. 
\n\nSort the dataframe and select the top row\nUse the max() attribute to find the maximum value", "# Method 1:\nSorted = df.sort(['Births'], ascending=False)\nSorted.head(1)\n\n# Method 2:\ndf['Births'].max()", "Present Data\nHere we can plot the Births column and label the graph to show the end user the highest point on the graph. In conjunction with the table, the end user has a clear picture that Mel is the most popular baby name in the data set. \nplot() is a convinient attribute where pandas lets you painlessly plot the data in your dataframe. We learned how to find the maximum value of the Births column in the previous section. Now to find the actual baby name of the 973 value looks a bit tricky, so lets go over it. \nExplain the pieces:\ndf['Names'] - This is the entire list of baby names, the entire Names column\ndf['Births'] - This is the entire list of Births in the year 1880, the entire Births column\ndf['Births'].max() - This is the maximum value found in the Births column \n[df['Births'] == df['Births'].max()] IS EQUAL TO [Find all of the records in the Births column where it is equal to 973]\ndf['Names'][df['Births'] == df['Births'].max()] IS EQUAL TO Select all of the records in the Names column WHERE [The Births column is equal to 973] \nAn alternative way could have been to use the Sorted dataframe:\nSorted['Names'].head(1).value \nThe str() function simply converts an object into a string.", "# Create graph\ndf['Births'].plot()\n\n# Maximum value in the data set\nMaxValue = df['Births'].max()\n\n# Name associated with the maximum value\nMaxName = df['Names'][df['Births'] == df['Births'].max()].values\n\n# Text to display on graph\nText = str(MaxValue) + \" - \" + MaxName\n\n# Add text to graph\nplt.annotate(Text, xy=(1, MaxValue), xytext=(8, 0), \n xycoords=('axes fraction', 'data'), textcoords='offset points')\n\nprint \"The most popular name\"\ndf[df['Births'] == df['Births'].max()]\n#Sorted.head(1) can also be used", "Author: David Rojas" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mhallett/MeDaReDa
demos/demo1/00_DEMO_01.ipynb
mit
[ "00 DEMO\npre demo start office with 4 workers\nstart\n* sever monitoring\n* worker monitoring\n* table counts\n* ccy output\nClear tables", "import demo\nimport process\n\ndemo.clear_tables()", "Single computer\nget data (5) # usdgbp", "demo.get_price('GBPUSD')\n\nprocess.processSinglePrice()\n\ndemo.get_price('USDEUR')\nprocess.processSinglePrice()\n\ndemo.get_price('EURGBP')\nprocess.processSinglePrice()\n\ndemo.get_prices(1)\nprocess.processPrices(3)\n", "limitations\nMulti computer (cloud)\nset workers to work", "while True:\n process.processSinglePrice()\n #break" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
dm-wyncode/zipped-code
content/posts/coding/recursion_looping_relationship.ipynb
mit
[ "Functional Programming Basics\nRecursion and looping are related.\nThis is my attempt to grok how recursion and looping are related.\nI was already reading Functional Programming in Python when I discovered Grokking Algorithms: An illustrated guide for programmers and other curious people.\nI was trying to make sense of this passage from Functional Programming in Python:\n\n29.0 / 73 Functional Programming in Python by David Mertz \n\n\nEliminating Recursion\nAs with the simple factorial example given above, sometimes we can perform \"recursion without recursion\" by using functools.reduce() or other folding operations. A recursion is often simply a way of combining something simpler with an accumulated intermediate result, and that is exactly what reduce() does at heart. \n\nAnd then I was NOT surprised to read this passage after reading about recursion in Grokking Algorithms: An illustrated guide for programmers and other curious people.\n\n64.3 / 229 Grokking Algorithms: An illustrated guide for programmers and other curious people by Aditya Y. Bhargava\n\n\n“Why would I do this recursively if I can do it easily with a loop?” you may be thinking. Well, this is a sneak peek into functional programming! Functional programming languages like Haskell don’t have loops, so you have to use recursion to write functions like this. If you have a good understanding of recursion, functional languages will be easier to learn.\n\nAnd then I decided to search for some discussion about recursion from a JavaScript point of view and found this.\n\nAll in the Family: Filter, Map, Reduce, Recur\n\n\nWe loop over everything in the list and apply a reduction. You can create an add function and a list of numbers and try it for yourself. Anyway, if you really want to do this functionally, you would pull out our good friend, recursion. Recursion is a more mathematical means for looping over a set of values, and dates back to some of the earliest prototypes for computing back in the 1930’s.\n\nresources\n\n\nFunctional Programming in Python by David Mertz Copyright Get the free book here.\n\n\nGrokking Algorithms: An illustrated guide for programmers and other curious people by Aditya Y. Bhargava\n\n\nAll in the Family: Filter, Map, Reduce, Recur: http://www.chrisstead.com/archives/841/all-in-the-family-filter-map-reduce-recur/\n\nPython docs: https://docs.python.org/3.4/library/functools.html\n\nI am most comfortable with Python when it comes to exploring programming concepts, so I turned to the Python docs for more discussion.\n\nPython docs: functools.reduce(function, iterable[, initializer])\n\n\nApply function of two arguments cumulatively to the items of sequence, from left to right, so as to reduce the sequence to a single value. For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates ((((1+2)+3)+4)+5). The left argument, x, is the accumulated value and the right argument, y, is the update value from the sequence. If the optional initializer is present, it is placed before the items of the sequence in the calculation, and serves as a default when the sequence is empty. If initializer is not given and sequence contains only one item, the first item is returned.\n\nfunctools.reduce imitated here. 
Taken from the Python docs.", "def reduce(function, iterable, initializer=None):\n it = iter(iterable)\n if initializer is None:\n try:\n initializer = next(it)\n except StopIteration:\n raise TypeError('reduce() of empty sequence with no initial value')\n accum_value = initializer\n # it exhausted if initializer not given and sequence only has one item\n # so this loop does not run with sequence of length 1 and initializer is None\n for x in it: \n accum_value = function(accum_value, x)\n return accum_value", "Trying reduce without and with an initializer.", "from operator import add\n\nfor result in (reduce(add, [42]), reduce(add, [42], 10)):\n print(result)", "My rewrite of functools.reduce using recursion.\nFor the sake of demonstration only.", "def first(value_list):\n return value_list[0]\n \ndef rest(value_list):\n return value_list[1:]\n \ndef is_undefined(value):\n return value is None\n \ndef recursive_reduce(function, iterable, initializer=None):\n if is_undefined(initializer):\n initializer = accum_value = first(iterable)\n else:\n accum_value = function(initializer, first(iterable))\n if len(iterable) == 1: # base case\n return accum_value\n return recursive_reduce(function, rest(iterable), accum_value)", "Test.\nTest if the two functions return the sum of a list of random numbers.", "from random import choice\nfrom operator import add\n\nLINE = ''.join(('-', ) * 20)\nprint(LINE)\nfor _ in range(5):\n # create a tuple of random numbers of length 2 to 10\n test_values = tuple(choice(range(101)) for _ in range(choice(range(2, 11))))\n print('Testing these values: {}'.format(test_values))\n # use sum for canonical value\n expected = sum(test_values)\n print('The expected result: {}\\n'.format(expected))\n test_answers = ((f.__name__, f(add, test_values)) \n for f \n in (reduce, recursive_reduce))\n \n test_results = ((f_name, test_answer == expected, ) \n for f_name, test_answer in test_answers)\n for f_name, answer in test_results:\n try:\n assert answer\n print('`{}` passed: {}'.format(f_name, answer))\n except AssertionError:\n print('`{}` failed: {}'.format(f_name, not answer))\n print(LINE)\n\nfrom recursion_looping_relationship_meta import tweets" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kinshuk4/MoocX
misc/machinelearningbootcamp/day2/neural_nets.ipynb
mit
[ "Neural Networks\nThe Perceptron\nTo get an intuitive idea about Neural Networks, let us code an elementary perceptron. In this example we will illustrate some of the concepts we have seen, build a small perceptron and make a link between Perceptron and linear classification.\nLearning Activity 1: Generating some data\nBefore working with the MNIST dataset, you'll first test your perceptron implementation on a \"toy\" dataset with just a few data points. This allows you to test your implementations with data you can easily inspect and visualise without getting lost in the complexities of the dataset itself.\nStart by loading two basic libraries: matplotlib for plotting graphs (http://matplotlib.org/contents.html) and numpy for numerical computing with vectors, matrices, etc. (http://docs.scipy.org/doc/).", "# Load the libraries\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n%matplotlib inline ", "Then let us generate some points in 2-D that will form our dataset:", "# Create some data points", "Let's visualise these points in a scatterplot using the plot function from matplotlib", "# Visualise the points in a scatterplot", "Here, imagine that the purpose is to build a classifier that for a given new point will return whether it belongs to the crosses (class 1) or circles (class 0).\nLearning Activity 2: Computing the output of a Perceptron\nLet’s now define a function which returns the output of a Perceptron for a single input point.", "# Now let's build a perceptron for our points\n\ndef outPerceptron(x,w,b):\n innerProd = np.dot(x,w) # computes the weighted sum of input\n output = 0\n if innerProd > b:\n output = 1\n return output", "It’s useful to define a function which returns the sequence of outputs of the Perceptron for a sequence\nof input points:", "# Define a function which returns the sequence of outputs for a sequence of input points\n\ndef multiOutPerceptron(X,w,b):\n nInstances = X.shape[0]\n outputs = np.zeros(nInstances)\n for i in range(0,nInstances):\n outputs[i] = outPerceptron(X[i,:],w,b)\n return outputs", "Bonus Activity: Efficient coding of multiOutPerceptron\nIn the above implementation, the simple outPerceptron function is called for every single instance. It\nis cleaner and more efficient to code everything in one function using matrices:", "# Optimise the multiOutPerceptron function", "In the above implementation, the simple outPerceptron function is called for every single instance. It is cleaner and more efficient to code everything in one function using matrices.\nLearning Activity 4: Playing with weights and thresholds\nLet’s try some weights and thresholds, and see what happens:", "# Try some initial weights and thresholds", "So this is clearly not great! it classifies the first point as in one category and all the others in the other one. Let's try something else (an educated guess this time).", "# Try an \"educated guess\"", "This is much better! To obtain these values, we found a separating hyperplane (here a line) between the points. The equation of the line is \ny = 0.5x-0.2\nQuiz\n- Can you explain why this line corresponds to the weights and bias we used?\n- Is this separating line unique? 
what does it mean?\nCan you check that the perceptron will indeed classify any point above the red line as a 1 (cross) and every point below as a 0 (circle)?\nLearning Activity 5: Illustration of the output of the Perceptron and the separating line", "# Visualise the separating line", "Now try adding new points to see how they are classified:", "# Add new points and test ", "Visualise the new test points in the graph and plot the separating lines.", "# Visualise the new points and line", "Note here that the two sets of parameters classify the squares identically but not the triangle. You can now ask yourself, which one of the two sets of parameters makes more sense? How would you classify that triangle? These type of points are frequent in realistic datasets and the question of how to classify them \"accurately\" is often very hard to answer...\nGradient Descent\nLearning Activity 6: Coding a simple gradient descent\nDefinition of a function and it's gradient\n$f(x) = \\exp(-\\sin(x))x^2$\n$f'(x) = -x \\exp(-\\sin(x)) (x\\cos(x)-2)$\nIt is convenient to define python functions which return the value of the function and its gradient at an arbitrary point $x$", "def function(x):\n return np.exp(-np.sin(x))*(x**2)\n\ndef gradient(x):\n return -x*np.exp(-np.sin(x))*(x*np.cos(x)-2) # use wolfram alpha!", "Let's see what the function looks like", "# Visualise the function ", "Now let us implement a simple Gradient Descent that uses constant stepsizes. We define two functions, the first one is the most simple version which doesn't store the intermediate steps that are taken. The second one does store the steps which is useful to visualize what is going on and explain some of the typical behaviour of GD.", "def simpleGD(x0,stepsize,nsteps):\n x = x0\n for k in range(0,nsteps):\n x -= stepsize*gradient(x)\n return x\n\ndef simpleGD2(x0,stepsize,nsteps):\n x = np.zeros(nsteps+1)\n x[0] = x0\n for k in range(0,nsteps):\n x[k+1] = x[k]-stepsize*gradient(x[k])\n return x", "Let's see what it looks like. Let's start from $x_0 = 3$, use a (constant) stepsize of $\\delta=0.1$ and let's go for 100 steps.", "# Try the first given values ", "Simple inspection of the figure above shows that that is close enough to the actual true minimum ($x^\\star=0$)\nA few standard situations:", "# Try the second given values ", "Ok! so that's still alright", "# Try the third given values ", "That's not... Visual inspection of the figure above shows that we got stuck in a local optimum.\nBelow we define a simple visualization function to show where the GD algorithm brings us. 
It can be overlooked.", "def viz(x,a=-10,b=10):\n xx = np.linspace(a,b,100)\n yy = function(xx)\n ygd = function(x)\n plt.plot(xx,yy)\n plt.plot(x,ygd,color='red')\n plt.plot(x[0],ygd[0],marker='o',color='green',markersize=10)\n plt.plot(x[len(x)-1],ygd[len(x)-1],marker='o',color='red',markersize=10)\n plt.show()\n", "Let's show the steps that were taken in the various cases that we considered above", "# Visualise the steps taken in the previous cases ", "To summarise these three cases: \n- In the first case, we start from a sensible point (not far from the optimal value $x^\\star = 0$ and on a slope that leads directly to it) and we get to a very satisfactory point.\n- In the second case, we start from a less sensible point (on a slope that does not lead directly to it) and yet the algorithm still gets us to a very satisfactory point.\n- In the third case, we also start from a bad location but this time the algorithm gets stuck in a local minima.\nAttacking MNIST\nLearning Activity 7: Loading the Python libraries\nImport statements for KERAS library", "from keras.datasets import mnist \nfrom keras.models import Sequential \nfrom keras.layers.core import Dense, Activation\nfrom keras.optimizers import SGD, RMSprop\nfrom keras.utils import np_utils\n\n# Some generic parameters for the learning process\nbatch_size = 100 # number of instances each noisy gradient will be evaluated upon\nnb_classes = 10 # 10 classes 0-1-...-9\nnb_epoch = 10 # computational budget: 10 passes through the whole dataset", "Learning Activity 8: Loading the MNIST dataset\nKeras does the loading of the data itself and shuffles the data randomly. This is useful since the difficulty\nof the examples in the dataset is not uniform (the last examples are harder than the first ones)", "# Load the MNIST data", "You can also depict a sample from either the training or the test set using the imshow() function:", "# Display the first image", "Ok the label 5 does indeed seem to correspond to that number!\nLet's check the dimension of the dataset\nLearning Activity 9: Reshaping the dataset\nEach image in MNIST has 28 by 28 pixels, which results in a $28\\times 28$ array. As a next step, and prior to feeding the data into our NN classifier, we needd to flatten each array into a $28\\times 28$=784 dimensional vector. Each component of the vector holds an integer value between 0 (black) and 255 (white), which we need to normalise to the range 0 and 1.", "# Reshaping of vectors in a format that works with the way the layers are coded", "Remember, it is always good practice to check the dimensionality of your train and test data using the shape command prior to constructing any classification model:", "# Check the dimensionality of train and test", "So we have 60,000 training samples, 10,000 test samples and the dimension of the samples (instances) are 28x28 arrays. We need to reshape these instances as vectors (of 784=28x28 components). For storage efficiency, the values of the components are stored as Uint8, we need to cast that as float32 so that Keras can deal with them. Finally we normalize the values to the 0-1 range.\nThe labels are stored as integer values from 0 to 9. We need to tell Keras that these form the output categories via the function to_categorical.", "# Set y categorical ", "Learning Activity 10: Building a NN classifier\nA neural network model consists of artificial neurons arranged in a sequence of layers. Each layer receives a vector of inputs and converts these into some output. 
The interconnection pattern is \"dense\" meaning it is fully connected to the previous layer. Note that the first hidden layer needs to specify the size of the input which amounts to implicitly having an input layer.", "# First, declare a model with a sequential architecture\n\n# Then add a first layer with 500 nodes and 784 inputs (the pixels of the image)\n\n# Define the activation function to use on the nodes of that first layer\n\n# Second hidden layer with 300 nodes\n\n# Output layer with 10 categories (+using softmax)", "Learning Activity 11: Training and testing of the model\nHere we define a somewhat standard optimizer for NN. It is based on Stochastic Gradient Descent with some standard choice for the annealing.", "# Definition of the optimizer.", "Finding the right arguments here is non trivial but the choice suggested here will work well. The only parameter we can explain here is the first one which can be understood as an initial scaling of the gradients. \nAt this stage, launch the learning (fit the model). The model.fit function takes all the necessary arguments and trains the model. We describe below what these arguments are:\n\nthe training set (points and labels)\nglobal parameters for the learning (batch size and number of epochs)\nwhether or not we want to show output during the learning\nthe test set (points and labels)", "# Fit the model", "Obviously we care far more about the results on the validation set since it is the data that the NN has not used for its training. Good results on the test set means the model is robust.", "# Display the results, the accuracy (over the test set) should be in the 98%", "Bonus: Does it work?", "def whatAmI(img):\n score = model.predict(img,batch_size=1,verbose=0)\n for s in range(0,10):\n print ('Am I a ', s, '? -- score: ', np.around(score[0][s]*100,3))\n\nindex = 1004 # here use anything between 0 and 9999\ntest = np.reshape(images_train[index,],(1,784))\nplt.imshow(np.reshape(test,(28,28)), cmap=\"gray\")\nwhatAmI(test)\n", "Does it work? (experimental Pt2)", "from scipy import misc\n\ntest = misc.imread('data/ex7.jpg')\ntest = np.reshape(test,(1,784))\ntest = test.astype('float32')\ntest /= 255.\nplt.imshow(np.reshape(test,(28,28)), cmap=\"gray\")\nwhatAmI(test)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
CivicKnowledge/metatab-packages
ipums.org/income_home_value/Income and Home Value Distributions.ipynb
mit
[ "Senior Income and Home Value Distributions For San Diego County\nThis package extracts the home value and household income for households in San DIego county with one or more household members aged 65 or older. . The base data is from the 2015 5 year PUMS sample, from IPUMS<sup>1</sup>. The dataset variables used are: HHINCOME and VALUEH. \nThis extract is intended for analysis of senior issues in San Diego County, so the record used are further restricted with these filters: \n\nWHERE AGE > = 65\nHHINCOME < 9999999\nVALUEH < 9999999 \nSTATEFIP = 6 \nCOUNTYFIPS = 73\n\nThe limits on the HHINCOME and VALUEH variables eliminate top coding. \nThis analysis used the IPUMS (ipums) data", "%matplotlib inline\n%load_ext metatab\n\n%load_ext autoreload\n%autoreload 2\n\n%mt_lib_dir lib\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np \nimport metatab as mt\nimport seaborn as sns; sns.set(color_codes=True)\nimport sqlite3\n\nimport statsmodels as sm\nfrom statsmodels.nonparametric.kde import KDEUnivariate\nfrom scipy import integrate, stats\n\nfrom incomedist import * \nfrom multikde import MultiKde \n\nplt.rcParams['figure.figsize']=(6,6)\n\n%%metatab \nOrigin: ipums.org\nDataset: income_homevalue\nIdentifier: b407e5af-cc23-431d-a431-15c202ec0c3b\nName: ipums.org-income_homevalue-4\nVersion: 4\n\nSection: Contacts\nWrangler: Eric Busboom\nWrangler.Email: eric@civicknowledge.com\n \nSection: Bibliography\nCitation: ipums\nCitation.Type: dataset\nCitation.Author: Steven Ruggles; Katie Genadek; Ronald Goeken; Josiah Grover; Matthew Sobek\nCitation.Title: Integrated Public Use Microdata Series\nCitation.Year: 2017\nCitation.Publisher: University of Minnesota\nCitation.Version: 7.0 \nCitation.AccessDate: 20170718\nCitation.Url: https://usa.ipums.org/usa/index.shtml\nCitation.Doi: https://doi.org/10.18128/D010.V7.0\n \nCitation: bordley\nCitation.Type: article\nCitation.Author: Robert F. Bordley; James B. McDonald; Anand Mantrala\nCitation.Title: Something New, Something Old: Parametric Models for the Size of Distribution of Income\nCitation.Year: 1997\nCitation.Month: June\nCitation.Journal: Journal of Income Distribution\nCitation.Volume: 6 \nCitation.Number: 1\nCitation.Pages: 5-5\nCitation.Url: https://ideas.repec.org/a/jid/journl/y1997v06i1p5-5.html\n \nCitation: mcdonald\nCitation.Type: article \nCitation.Author: McDonald, James B.; Mantrala, Anand\nCitation.Title: The distribution of personal income: Revisited\nCitation.Journal: Journal of Applied Econometrics\nCitation.Volume: 10\nCitation.Number: 2\nCitation.Publisher: Wiley Subscription Services, Inc., A Wiley Company\nCitation.Issn: 1099-1255\nCitation.Doi: 10.1002/jae.3950100208\nCitation.Pages: 201--204,\nCitation.Year: 1995\n \nCitation: majumder\nCitation.Type: article \nCitation.Author: Majumder, Amita; Chakravarty, Satya Ranjan\nCitation.Title: Distribution of personal income: Development of a new model and its application to U.S. income data\nCitation.Journal: Journal of Applied Econometrics\nCitation.Volume: 5\nCitation.Number: 2\nCitation.Publisher: Wiley Subscription Services, Inc., A Wiley Company\nCitation.Issn: 1099-1255\nCitation.Doi: 10.1002/jae.3950050206\nCitation.Pages: 189--196\nCitation.Year: 1990\n\n%%bash\n# Create a sample of a SQL database, so we can edit the schema. \n# Run the cell once to create the schema, then edit the schema and run it \n# again to build the database. \n\nfn='/Volumes/Storage/Downloads/usa_00005.csv'\nif [ ! 
-e schema.sql ]\nthen\n head -100 $fn > sample.csv\n sqlite3 --csv ipums.sqlite '.import sample.csv ipums'\n sqlite3 ipums.sqlite .schema > schema-orig.sql\n sqlite3 -header ipums.sqlite \"select * from ipums limit 2\" > sample.sql # Show a sample of data\n rm ipums.sqlite\nfi \n\nif [ -e schema.sql -a \\( ! -e ipums.sqlite \\) ]\nthen\n sqlite3 ipums.sqlite < schema.sql\n sqlite3 --csv ipums.sqlite \".import $fn ipums\"\n # Create some indexes to speed up queries\n sqlite3 ipums.sqlite \"CREATE INDEX IF NOT EXISTS state_idx ON ipums (STATEFIP)\"\n sqlite3 ipums.sqlite \"CREATE INDEX IF NOT EXISTS county_idx ON ipums (COUNTYFIPS)\"\n sqlite3 ipums.sqlite \"CREATE INDEX IF NOT EXISTS state_county_idx ON ipums (STATEFIP, COUNTYFIPS)\"\n \nfi", "Source Data\nThe PUMS data is a sample, so both household and person records have weights. We use those weights to replicate records. We are not adjusting the values for CPI, since we don't have a CPI for 2015, and because the medians for income comes out pretty close to those from the 2015 5Y ACS. \nThe HHINCOME and VALUEH have the typical distributions for income and home values, both of which look like Poisson distributions.", "# Check the weights for the whole file to see if they sum to the number\n# of households and people in the county. They don't, but the sum of the weights for households is close, \n# 126,279,060 vs about 116M housholds\ncon = sqlite3.connect(\"ipums.sqlite\")\nwt = pd.read_sql_query(\"SELECT YEAR, DATANUM, SERIAL, HHWT, PERNUM, PERWT FROM ipums \"\n \"WHERE PERNUM = 1 AND YEAR = 2015\", con)\n\nwt.drop(0, inplace=True)\n\nnd_s = wt.drop_duplicates(['YEAR', 'DATANUM','SERIAL'])\ncountry_hhwt_sum = nd_s[nd_s.PERNUM == 1]['HHWT'].sum()\n\nlen(wt), len(nd_s), country_hhwt_sum\n\nimport sqlite3\n\n# PERNUM = 1 ensures only record for each household \n\ncon = sqlite3.connect(\"ipums.sqlite\")\nsenior_hh = pd.read_sql_query(\n \"SELECT DISTINCT SERIAL, HHWT, PERWT, HHINCOME, VALUEH \"\n \"FROM ipums \"\n \"WHERE AGE >= 65 \" \n \"AND HHINCOME < 9999999 AND VALUEH < 9999999 \"\n \"AND STATEFIP = 6 AND COUNTYFIPS=73 \", con)\n\n# Since we're doing a probabilistic simulation, the easiest way to deal with the weight is just to repeat rows. \n# However, adding the weights doesn't change the statistics much, so they are turned off now, for speed. \n\ndef generate_data():\n \n for index, row in senior_hh.drop_duplicates('SERIAL').iterrows():\n #for i in range(row.HHWT):\n yield (row.HHINCOME, row.VALUEH)\n \nincv = pd.DataFrame(list(generate_data()), columns=['HHINCOME', 'VALUEH'])\n\n\nsns.jointplot(x=\"HHINCOME\", y=\"VALUEH\", marker='.', scatter_kws={'alpha': 0.1}, data=incv, kind='reg');", "Procedure\nAfter extracting the data for HHINCOME and VALUEH, we rank both values and then quantize the rankings into 10 groups, 0 through 9, hhincome_group and valueh_group. The HHINCOME variable correlates with VALUEH at .36, and the quantized rankings hhincome_group and valueh_group correlate at .38.\nInitial attempts were made to fit curves to the income and home value distributions, but it is very difficult to find well defined models that fit real income distributions. Bordley (bordley) analyzes the fit for 15 different distributions, reporting success with variations of the generalized beta distribution, gamma and Weibull. Majumder (majumder) proposes a four parameter model with variations for special cases. 
None of these models were considered well established enough to fit within the time contraints for the project, so this analysis will use empirical distributions that can be scale to fit alternate parameters.", "\nincv['valueh_rank'] = incv.rank()['VALUEH']\nincv['valueh_group'] = pd.qcut(incv.valueh_rank, 10, labels=False )\nincv['hhincome_rank'] = incv.rank()['HHINCOME']\nincv['hhincome_group'] = pd.qcut(incv.hhincome_rank, 10, labels=False )\nincv[['HHINCOME', 'VALUEH', 'hhincome_group', 'valueh_group']] .corr()\n\nfrom metatab.pands import MetatabDataFrame\nodf = MetatabDataFrame(incv)\nodf.name = 'income_homeval'\nodf.title = 'Income and Home Value Records for San Diego County'\nodf.HHINCOME.description = 'Household income'\nodf.VALUEH.description = 'Home value'\nodf.valueh_rank.description = 'Rank of the VALUEH value'\nodf.valueh_group.description = 'The valueh_rank value quantized into 10 bins, from 0 to 9'\nodf.hhincome_rank.description = 'Rank of the HHINCOME value'\nodf.hhincome_group.description = 'The hhincome_rank value quantized into 10 bins, from 0 to 9'\n\n%mt_add_dataframe odf --materialize", "Then, we group the dataset by valueh_group and collect all of the income values for each group. These groups have different distributions, with the lower numbered group shewing to the left and the higher numbered group skewing to the right. \nTo use these groups in a simulation, the user would select a group for a subject's home value, then randomly select an income in that group. When this is done many times, the original VALUEH correlates to the new distribution ( here, as t_income ) at .33, reasonably similar to the original correlations.", "import matplotlib.pyplot as plt\nimport numpy as np\n\nmk = MultiKde(odf, 'valueh_group', 'HHINCOME')\n\nfig,AX = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(15,15))\n\nincomes = [30000,\n 40000,\n 50000,\n 60000,\n 70000,\n 80000,\n 90000,\n 100000,\n 110000]\n\nfor mi, ax in zip(incomes, AX.flatten()):\n s, d, icdf, g = mk.make_kde(mi)\n syn_d = mk.syn_dist(mi, 10000)\n \n syn_d.plot.hist(ax=ax, bins=40, title='Median Income ${:0,.0f}'.format(mi), normed=True, label='Generated')\n\n ax.plot(s,d, lw=2, label='KDE')\n \nfig.suptitle('Income Distributions By Median Income\\nKDE and Generated Distribution')\nplt.legend(loc='upper left')\nplt.show()", "A scatter matrix show similar structure for VALUEH and t_income.", "t = incv.copy()\nt['t_income'] = mk.syn_dist(t.HHINCOME.median(), len(t))\nt[['HHINCOME','VALUEH','t_income']].corr()\n\nsns.pairplot(t[['VALUEH','HHINCOME','t_income']]);", "The simulated incomes also have similar statistics to the original incomes. However, the median income is high. In San Diego county, the median household income for householders 65 and older in the 2015 5 year ACS about \\$51K, versus \\$56K here. For home values, the mean home value for 65+ old homeowners is \\$468K in the 5 year ACS, vs \\$510K here.", "from IPython.display import display_html, HTML\ndisplay(HTML(\"<h3>Descriptive Stats</h3>\"))\nt[['VALUEH','HHINCOME','t_income']].describe()\n\ndisplay(HTML(\"<h3>Correlations</h3>\"))\nt[['VALUEH','HHINCOME','t_income']].corr()", "Bibliography", "%mt_bibliography\n\n# Tests", "Create a new KDE distribution, based on the home values, including only home values ( actually KDE supports ) between $130,000 and $1.5M.", "s,d = make_prototype(incv.VALUEH.astype(float), 130_000, 1_500_000)\n\nplt.plot(s,d)", "Overlay the prior plot with the histogram of the original values. 
We're using np.histogram to make the histograph, so it appears as a line chart.", "v = incv.VALUEH.astype(float).sort_values()\n#v = v[ ( v > 60000 ) & ( v < 1500000 )]\n\nhist, bin_edges = np.histogram(v, bins=100, density=True)\n\nbin_middles = 0.5*(bin_edges[1:] + bin_edges[:-1])\n\nbin_width = bin_middles[1] - bin_middles[0]\n\nassert np.isclose(sum(hist*bin_width),1) # == 1 b/c density==True\n\nhist, bin_edges = np.histogram(v, bins=100) # Now, without 'density'\n\n# And, get back to the counts, but now on the KDE\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nax.plot(s,d * sum(hist*bin_width));\n\nax.plot(bin_middles, hist);\n", "Show an a home value curve, interpolated to the same values as the distribution. The two curves should be co-incident.", "def plot_compare_curves(p25, p50, p75):\n fig = plt.figure(figsize = (8,3))\n ax = fig.add_subplot(111)\n\n sp, dp = interpolate_curve(s, d, p25, p50, p75)\n\n ax.plot(pd.Series(s), d, color='black');\n ax.plot(pd.Series(sp), dp, color='red');\n\n# Re-input the quantiles for the KDE\n# Curves should be co-incident\nplot_compare_curves(2.800000e+05,4.060000e+05,5.800000e+05)\n", "Now, interpolate to the values for the county, which shifts the curve right.", "# Values for SD County home values\nplot_compare_curves(349100.0,485900.0,703200.0)\n", "Here is an example of creating an interpolated distribution, then generating a synthetic distribution from it.", "sp, dp = interpolate_curve(s, d, 349100.0,485900.0,703200.0)\nv = syn_dist(sp, dp, 10000)\n\nplt.hist(v, bins=100); \npd.Series(v).describe()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
desihub/desitarget
doc/nb/connecting-spectra-to-mocks.ipynb
bsd-3-clause
[ "Connecting Spectra to Mocks\nThe purpose of this notebook is to demonstrate how to generate spectra and apply target selection cuts for various mock catalogs and target types. Here we generate spectra for targets in a single healpixel with no constraints on the target density (relative to the expected target density) or contaminants.\nFor code to generate large numbers of spectra over significant patches of sky and to create a representative DESI dataset (with parallelism), see desitarget/bin/select_mock_targets (as well as its MPI-parallelized cousin, desitarget/bin/select_mock_targets) and desitarget.mock.build.targets_truth.\nFinally, note that the various python Classes instantiated here (documented in desitarget.mock.mockmaker) are easily extensible to other mock catalogs and galaxy/QSO/stellar physics. Please contact @desi-data if you have specific suggestions, requirements, or desired features.\nJohn Moustakas\nSiena College\n2018 September", "import os\nimport sys\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom desiutil.log import get_logger, DEBUG\nlog = get_logger()\n\nimport seaborn as sns\nsns.set(style='white', font_scale=1.1, palette='Set2')\n\n%matplotlib inline", "To keep the calculations below manageable we specify a single nside=64 healpixel in an arbitrary location of the DESI footprint.", "healpixel = 26030\nnside = 64", "Specifying the random seed makes our calculations reproducible.", "seed = 555\nrand = np.random.RandomState(seed)", "Define a couple wrapper routines we will use below several times.", "def plot_subset(wave, flux, truth, objtruth, nplot=16, ncol=4, these=None, \n xlim=None, loc='right', targname='', objtype=''):\n \"\"\"Plot a random sampling of spectra.\"\"\"\n \n nspec, npix = flux.shape\n if nspec < nplot:\n nplot = nspec\n \n nrow = np.ceil(nplot / ncol).astype('int')\n\n if loc == 'left':\n xtxt, ytxt, ha = 0.05, 0.93, 'left'\n else:\n xtxt, ytxt, ha = 0.93, 0.93, 'right'\n \n if these is None:\n these = rand.choice(nspec, nplot, replace=False)\n these = np.sort(these)\n \n ww = (wave > 5500) * (wave < 5550)\n\n fig, ax = plt.subplots(nrow, ncol, figsize=(2.5*ncol, 2*nrow), sharey=False, sharex=True)\n for thisax, indx in zip(ax.flat, these):\n thisax.plot(wave, flux[indx, :] / np.median(flux[indx, ww]))\n if objtype == 'STAR' or objtype == 'WD':\n thisax.text(xtxt, ytxt, r'$T_{{eff}}$={:.0f} K'.format(objtruth['TEFF'][indx]), \n ha=ha, va='top', transform=thisax.transAxes, fontsize=13)\n else:\n thisax.text(xtxt, ytxt, 'z={:.3f}'.format(truth['TRUEZ'][indx]), \n ha=ha, va='top', transform=thisax.transAxes, fontsize=13)\n \n thisax.xaxis.set_major_locator(plt.MaxNLocator(3))\n if xlim:\n thisax.set_xlim(xlim)\n for thisax in ax.flat:\n thisax.yaxis.set_ticks([])\n thisax.margins(0.2)\n \n fig.suptitle(targname)\n fig.subplots_adjust(wspace=0.05, hspace=0.05, top=0.93)", "Tracer QSOs\nBoth tracer and Lya QSO spectra contain an underlying QSO spectrum, but the Lya QSOs (which we demonstrate below) also include the Lya forest (here, based on the v2.0 of the \"London\" mocks).\nEvery target class has its own dedicated \"Maker\" class.", "from desitarget.mock.mockmaker import QSOMaker\n\nQSO = QSOMaker(seed=seed)", "The various read methods return a dictionary with (hopefully self-explanatory) target- and mock-specific quantities.\nBecause most mock catalogs only come with (cosmologically accurate) 3D positions (RA, Dec, redshift), we use Gaussian mixture models trained on real data to assign other quantities like shapes, magnitudes, 
and colors, depending on the target class. For more details see the gmm-dr7.pynb Python notebook.", "dir(QSOMaker)\n\ndata = QSO.read(healpixels=healpixel, nside=nside)\n\nfor key in sorted(list(data.keys())):\n print('{:>20}'.format(key))", "Now we can generate the spectra as well as the targeting catalogs (targets) and corresponding truth table.", "%time flux, wave, targets, truth, objtruth = QSO.make_spectra(data)\n\nprint(flux.shape, wave.shape)", "The truth catalog contains the target-type-agnostic, known properties of each object (including the noiseless photometry), while the objtruth catalog contains different information depending on the type of target.", "truth\n\nobjtruth", "Next, let's run target selection, after which point the targets catalog should look just like an imaging targeting catalog (here, using the DR7 data model).", "QSO.select_targets(targets, truth)\n\ntargets", "And indeed, we can see that only a subset of the QSOs were identified as targets (the rest scattered out of the QSO color selection boxes).", "from desitarget.targetmask import desi_mask\nisqso = (targets['DESI_TARGET'] & desi_mask.QSO) != 0\nprint('Identified {} / {} QSO targets.'.format(np.count_nonzero(isqso), len(targets)))", "Finally, let's plot some example spectra.", "plot_subset(wave, flux, truth, objtruth, targname='QSO')", "Generating QSO spectra with cosmological Lya skewers proceeds along similar lines.\nHere, we also include BALs with 25% probability.", "from desitarget.mock.mockmaker import LYAMaker\n\nmockfile='/project/projectdirs/desi/mocks/lya_forest/london/v9.0/v9.0.0/master.fits'\n\nLYA = LYAMaker(seed=seed, balprob=0.25)\n\nlyadata = LYA.read(mockfile=mockfile,healpixels=healpixel, nside=nside)\n\n%time lyaflux, lyawave, lyatargets, lyatruth, lyaobjtruth = LYA.make_spectra(lyadata)\n\nlyaobjtruth\n\nplot_subset(lyawave, lyaflux, lyatruth, lyaobjtruth, xlim=(3500, 5500), targname='LYA')\n\n#Now lets generate the same spectra but including the different features and the new continum model.\n#For this we need to reload the desitarget module, for some reason it seems not be enough with defining a diferen variable for the LYAMaker\ndel sys.modules['desitarget.mock.mockmaker']\n\nfrom desitarget.mock.mockmaker import LYAMaker\n\nLYA = LYAMaker(seed=seed,sqmodel='lya_simqso_model_develop',balprob=0.25)\n\nlyadata_continum = LYA.read(mockfile=mockfile,healpixels=healpixel, nside=nside)\n\n%time lyaflux_cont, lyawave_cont, lyatargets_cont, lyatruth_cont, lyaobjtruth_cont = LYA.make_spectra(lyadata_continum)", "Lets plot together some of the spectra with the old and new continum model", "plt.figure(figsize=(20, 10))\nindx=rand.choice(len(lyaflux),9)\nfor i in range(9):\n plt.subplot(3, 3, i+1)\n plt.plot(lyawave,lyaflux[indx[i]],label=\"Old Continum\")\n plt.plot(lyawave_cont,lyaflux_cont[indx[i]],label=\"New Continum\")\nplt.legend()\n\n", "And finally we compare the colors, for the two runs with the new and old continum", "plt.plot(lyatruth[\"FLUX_W1\"],lyatruth_cont[\"FLUX_W1\"]/lyatruth[\"FLUX_W1\"]-1,'.')\nplt.xlabel(\"FLUX_W1\")\nplt.ylabel(r\"FLUX_W1$^{new}$/FLUX_W1-1\")\n\n\nplt.plot(lyatruth[\"FLUX_W2\"],lyatruth_cont[\"FLUX_W2\"]/lyatruth[\"FLUX_W2\"]-1,'.')\nplt.xlabel(\"FLUX_W2\")\nplt.ylabel(r\"(FLUX_W2$^{new}$/FLUX_W2)-1\")\n\nplt.hist(lyatruth[\"FLUX_W1\"],bins=100,label=\"Old Continum\",alpha=0.7)\nplt.hist(lyatruth_cont[\"FLUX_W1\"],bins=100,label=\"New Continum\",histtype='step',linestyle='--')\nplt.xlim(0,100) #Limiting to 100 to see it 
better.\nplt.xlabel(\"FLUX_W1\")\nplt.legend()\n\nplt.hist(lyatruth[\"FLUX_W2\"],bins=100,label=\"Old Continum\",alpha=0.7)\nplt.hist(lyatruth_cont[\"FLUX_W2\"],bins=100,label=\"New Continum\",histtype='step',linestyle='--')\nplt.xlim(0,100) #Limiting to 100 to see it better.\nplt.xlabel(\"FLUX_W2\")\nplt.legend()", "Conclusion: Colors are slightly affected by changing the continum model.\nTo Finalize the LYA section, lets generate another set of spectra now including DLAs, metals, LYB, etc.", "del sys.modules['desitarget.mock.mockmaker']\nfrom desitarget.mock.mockmaker import LYAMaker ##Done in order to reload the desitarget, it doesn't seem to be enough with initiating a diferent variable for the LYAMaker class. \n\nLYA = LYAMaker(seed=seed,sqmodel='lya_simqso_model',balprob=0.25,add_dla=True,add_metals=\"all\",add_lyb=True)\nlyadata_all= LYA.read(mockfile=mockfile,healpixels=healpixel, nside=nside)\n%time lyaflux_all, lyawave_all, lyatargets_all, lyatruth_all, lyaobjtruth_all = LYA.make_spectra(lyadata_all)\n\nplot_subset(lyawave_all, lyaflux_all, lyatruth_all, lyaobjtruth_all, xlim=(3500, 5500), targname='LYA')", "Demonstrate the other extragalactic target classes: LRG, ELG, and BGS.\nFor simplicity let's write a little wrapper script that does all the key steps.", "def demo_mockmaker(Maker, seed=None, nrand=16, loc='right'):\n\n TARGET = Maker(seed=seed)\n \n log.info('Reading the mock catalog for {}s'.format(TARGET.objtype))\n tdata = TARGET.read(healpixels=healpixel, nside=nside)\n \n log.info('Generating {} random spectra.'.format(nrand))\n indx = rand.choice(len(tdata['RA']), np.min( (nrand, len(tdata['RA'])) ) )\n tflux, twave, ttargets, ttruth, tobjtruth = TARGET.make_spectra(tdata, indx=indx)\n \n log.info('Selecting targets')\n TARGET.select_targets(ttargets, ttruth)\n \n plot_subset(twave, tflux, ttruth, tobjtruth, loc=loc, \n targname=tdata['TARGET_NAME'], objtype=TARGET.objtype)", "LRGs", "from desitarget.mock.mockmaker import LRGMaker\n\n%time demo_mockmaker(LRGMaker, seed=seed, loc='left')", "ELGs", "from desitarget.mock.mockmaker import ELGMaker\n\n%time demo_mockmaker(ELGMaker, seed=seed, loc='left')", "BGS", "from desitarget.mock.mockmaker import BGSMaker\n\n%time demo_mockmaker(BGSMaker, seed=seed)", "Next, demonstrate how to generate spectra of stars...\nMWS_MAIN", "from desitarget.mock.mockmaker import MWS_MAINMaker\n\n%time demo_mockmaker(MWS_MAINMaker, seed=seed, loc='left')", "MWS_NEARBY", "from desitarget.mock.mockmaker import MWS_NEARBYMaker\n\n%time demo_mockmaker(MWS_NEARBYMaker, seed=seed, loc='left')", "White dwarfs (WDs)", "from desitarget.mock.mockmaker import WDMaker\n\n%time demo_mockmaker(WDMaker, seed=seed, loc='right')", "Finally demonstrate how to generate (empyt) SKY spectra.", "from desitarget.mock.mockmaker import SKYMaker\n\nSKY = SKYMaker(seed=seed)\n\nskydata = SKY.read(healpixels=healpixel, nside=nside)\n\nskyflux, skywave, skytargets, skytruth, objtruth = SKY.make_spectra(skydata)\n\nSKY.select_targets(skytargets, skytruth)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
albahnsen/PracticalMachineLearningClass
exercises/P2-MovieGenrePrediction.ipynb
mit
[ "Project 2\nMovie Genre Classification\nClassify a movie genre based on its plot.\n<img src=\"moviegenre.png\"\n style=\"float: left; margin-right: 10px;\" />\nhttps://www.kaggle.com/c/miia4200-20191-p2-moviegenreclassification/overview\nData\nInput:\n- movie plot\nOutput:\nProbability of the movie belong to each genre\nEvaluation\n\n20% API\n30% Create a solution using with a Machine Learning algorithm - Presentation (5 slides)\n50% Performance in the Kaggle competition (Normalized acording to class performance in the private leaderboard)\n\nAcknowledgements\nWe thank Professor Fabio Gonzalez, Ph.D. and his student John Arevalo for providing this dataset.\nSee https://arxiv.org/abs/1702.01992\nSample Submission", "import pandas as pd\nimport os\nimport numpy as np\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.preprocessing import MultiLabelBinarizer\nfrom sklearn.multiclass import OneVsRestClassifier\nfrom sklearn.ensemble import RandomForestRegressor, RandomForestClassifier\nfrom sklearn.metrics import r2_score, roc_auc_score\nfrom sklearn.model_selection import train_test_split\n\ndataTraining = pd.read_csv('https://github.com/albahnsen/PracticalMachineLearningClass/raw/master/datasets/dataTraining.zip', encoding='UTF-8', index_col=0)\ndataTesting = pd.read_csv('https://github.com/albahnsen/PracticalMachineLearningClass/raw/master/datasets/dataTesting.zip', encoding='UTF-8', index_col=0)\n\ndataTraining.head()\n\ndataTesting.head()", "Create count vectorizer", "vect = CountVectorizer(max_features=1000)\nX_dtm = vect.fit_transform(dataTraining['plot'])\nX_dtm.shape\n\nprint(vect.get_feature_names()[:50])", "Create y", "dataTraining['genres'] = dataTraining['genres'].map(lambda x: eval(x))\n\nle = MultiLabelBinarizer()\ny_genres = le.fit_transform(dataTraining['genres'])\n\ny_genres\n\nX_train, X_test, y_train_genres, y_test_genres = train_test_split(X_dtm, y_genres, test_size=0.33, random_state=42)", "Train multi-class multi-label model", "clf = OneVsRestClassifier(RandomForestClassifier(n_jobs=-1, n_estimators=100, max_depth=10, random_state=42))\n\nclf.fit(X_train, y_train_genres)\n\ny_pred_genres = clf.predict_proba(X_test)\n\nroc_auc_score(y_test_genres, y_pred_genres, average='macro')", "Predict the testing dataset", "X_test_dtm = vect.transform(dataTesting['plot'])\n\ncols = ['p_Action', 'p_Adventure', 'p_Animation', 'p_Biography', 'p_Comedy', 'p_Crime', 'p_Documentary', 'p_Drama', 'p_Family',\n 'p_Fantasy', 'p_Film-Noir', 'p_History', 'p_Horror', 'p_Music', 'p_Musical', 'p_Mystery', 'p_News', 'p_Romance',\n 'p_Sci-Fi', 'p_Short', 'p_Sport', 'p_Thriller', 'p_War', 'p_Western']\n\ny_pred_test_genres = clf.predict_proba(X_test_dtm)\n\n\nres = pd.DataFrame(y_pred_test_genres, index=dataTesting.index, columns=cols)\n\nres.head()\n\nres.to_csv('pred_genres_text_RF.csv', index_label='ID')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
astroumd/GradMap
notebooks/Lectures2017/Lecture2/Lecture_2_Inst_copy.ipynb
gpl-3.0
[ "Lecture 2 - Logic, Loops, and Arrays\nThis iPython notebook covers some of the most important aspects of the Python language that is used daily by real Astronomers and Physicists. Topics will include:\n\nThe logic of Python, including while loops and if/else statements\nFunction definitions and how to make your own\nA review of numpy arrays and a discussion of their usefulness in solving real problems\nReading in data from text and numpy file formats, along with creating your own outputs to be used later\n\nA. Logic, If/Else, and Loops\nYou can make conditional (logical) statements in Python, which return either \"True\" or \"False\", also known as \"Booleans.\"\nA basic logic statement is something that we've used already: x < y. Here is this one again, and a few more.", "#Example conditional statements\nx = 1\ny = 2\nx<y #x is less than y\n\n#x is greater than y\nx>y\n\n#x is less-than or equal to y\nx<=y\n\n#x is greater-than or equal to y\nx>=y", "If you let a and b be conditional statements (like the above statements, e.g. a = x < y), then you can combine the two together using logical operators, which can be thought of as functions for conditional statements.\nThere are three logical operators that are handy to know:\n\nAnd operator: a and b\nOr operator: a or b\nNot operator: not(a)", "#Example of and operator\n(1<2)and(2<3)\n\n#Example of or operator\n(1<2)or(2>3)\n\n#Example of not operator\nnot(1>2)", "Now, these might not seem especially useful at first, but they're the bread and butter of programming. Even more importantly, they are used when we are doing if/else statements or loops, which we will now cover.\nAn if/else statement (or simply an if statement) are segments of code that have a conditional statement built into it, such that the code within that segment doesn't activate unless the conditional statement is true.\nHere's an example. Play around with the variables x and y to see what happens.", "x = 1\ny = 2\nif (x < y):\n print(\"Yup, totally true!\")\nelse:\n print(\"Nope, completely wrong!\")", "The idea here is that Python checks to see if the statement (in this case \"x < y\") is True. If it is, then it will do what is below the if statement. The else statement tells Python what to do if the condition is False.\nNote that Python requires you to indent these segments of code, and WILL NOT like it if you don't. Some languages don't require it, but Python is very particular when it comes to this point. (The parentheses around the conditional statement, however, are optional.)\nYou also do not need an \"else\" segment, which effectively means that if the condition isn't True, then that segment of code doesn't do anything, and Python will just continue on past the if statement.\nHere is an example of such a case. Play around with it to see what happens when you change the values of x and y.", "x = 1\ny = 2\nif (x>y):\n print(\"The condition is True!\")\nx+y", "While-loops are similar to if statements, in the sense that they also have a conditional statement that is built into it and it executes when the conditional is True. However, the only difference is, it will KEEP executing that segment of code until the conditional statement becomes False.\nThis might seem a bit strange, but you can get the hang of it!\nFor example, let's say we want Python to count from 1 to 10.", "x = 1\nwhile (x <= 10):\n print(x)\n x = x+1", "Note here that we tell Python to print the number x (x starts at 1) and then redefining x as itself +1 (so, x=1 gets redefined to x = x+1 = 1+1 = 2). 
Python then executes the loop again, but now x has been incremented by 1. We continue this process from x = 1 to x = 10, printing out x every time. Thus, with a fairly compact bit of code, you get 10 lines of output.\nIt is sometimes handy to define what is known as a DUMMY VARIABLE, whose only job is to count the number of times the loop has been executed. Let's call this dummy variable i.", "x = 2\ni = 0 #dummy variable\nwhile (i<10):\n x = 2*x\n print(x)\n i = i+1 \n #another way to write this is i+=1, but it's idiosyncratic and we won't use it here", "But...what if the conditional statement is always true?\nB. Defining Your Own Functions\nSo far, we have really focused on using built-in functions (such as from numpy and matplotlib), but what about defining our own? This is easy to do, and can be a way to not only clean up your code, but also allows you to apply the same set of operations to multiple variables without having to explicitly write it out every time.\nFor example, let's say we want to define a function that takes the square root of a number. It's probably a good idea to check if the number is positive first, otherwise we'll end up with an imaginary answer.", "#Defining a square root function\ndef sqrt(x):\n if (x < 0):\n print(\"Your input is not positive!\")\n else: \n return x**(1/2)\n\nsqrt(4)\n\nsqrt(-4)", "So the outline for a function is\npython\ndef &lt;function name&gt; (&lt;input variable&gt;):\n &lt;some code here&gt;\n return &lt;output variable&gt;\nIn general, many common mathematical functions like sqrt, log, exp, sin, cos can be found in the math module. So we don't have to write our own - phew!", "import math\nprint(math.sqrt(25))\nprint(math.sin(math.pi/2))\nprint(math.exp(math.pi)-math.pi)", "When defining your own functions, you can also use multiple input variables. For example, if we want to find the greatest common divisor (gcd) of two numbers, we could apply something called Euclid's algorithm.\nWe define gcd(a,0) = a. Then we note that the gcd(a,b) = gcd(b,r), where r is the remainderwhen a is divided by b. So we can repeat this process until we end up with zero remainder. Then, we return whatever number is left in a as the greatest common divisor.\nA command you might not have encountered yet is %. The expression x % y returns the remainder when x is divided by y.", "def gcd(a, b):\n \"\"\"Calculate the Greatest Common Divisor of a and b.\n \n Unless b==0, the result will have the same sign as b (so that when\n b is divided by it, the result comes out positive).\n \"\"\"\n while b > 0:\n a, b = b, a%b\n return a\n\nprint(gcd(120,16))", "Challenge 1 - Fibonacci Sequence and the Golden Ratio\nThe Fibonacci sequence is defined by $f_{n+1} =f_n + f_{n-1}$ and the initial values $f_0 = 0$ and $f_1 = 1$. \nThe first few elements of the sequence are: $0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377 ...$\nUsing what you just learned about functions, define a function fib, which calculates the $n$-th element in the Fibonacci sequence. It should have one input variable, $n$ and its return value should be the $n$-th element in the Fibonacci sequence.", "# Answer\ndef fib(n):\n \"\"\"Return nth element of the Fibonacci sequence.\"\"\"\n # Create the base case\n n0 = 0\n n1 = 1\n\n # Loop n times. Just ignore the variable i.\n for i in range(n):\n n_new = n0 + n1\n n0 = n1\n n1 = n_new\n return n0", "The ratio of successive elements in the Fibonacci sequence converges to\n$$\\phi = (1 + \\sqrt{5})/2 ≈ 1.61803\\dots$$\nwhich is the famous golden ratio. 
Your task is to approximate $\\phi$.\nDefine a function phi_approx that calculates the approximate value of $\\phi$ obtained by the ratio of the $n$-th and $(n−1)$-st elements, $$f_n /f_{n-1} \\approx \\phi$$\nphi_approx should have one variable, $n$. Its return value should be the $n$-th order approximation of $\\phi$.", "#Answer:\nphi_approx_output_format = \\\n\"\"\"Approximation order: {:d}\n fib_n: {:g}\n fib_(n-1): {:g}\n phi: {:.25f}\"\"\"\n\ndef phi_approx(n, show_output=True):\n \"\"\"Return the nth-order Fibonacci approximation to the golden ratio.\"\"\"\n fib_n = fib(n)\n fib_nm1 = fib(n - 1)\n phi = fib_n/fib_nm1\n if show_output:\n print(phi_approx_output_format.format(n, fib_n, fib_nm1, phi))\n return phi", "C. Numpy Arrays - Review of Basics and Some More Advanced Topics\nRecall in the first lecture that we introduced a python module known as numpy and type of variable known as a numpy array. For review, we will call numpy to be imported into this notebook so we can use its contents.", "import numpy as np", "Here we are calling in the contents of numpy and giving it the shorthand name 'np' for convenience.\nTo create an array variable (let's call it 'x'), we simply assign 'x' to be equal to the output of the np.array() function, using a list as an input. You can then verify its contents by using the print() function.", "x = np.array([1,2,3,4,5])\nprint(x)", "As we learned in Lecture 1, numpy arrays are convenient because they allow us to do math across the whole array and not just individual numbers.\nFor example, let's say we want to make a new variable 'y' such that y = $x^{2}$, then this is done simply as", "y = x**2\nprint(y)", "The documentation of possible functions that can be applied to integers and floats (i.e. single numbers), as well as numpy arrays, can be found here: https://docs.scipy.org/doc/numpy/reference/routines.math.html\nAs discussed previously, there are numerous ways to create arrays beyond np.numpy(). These include:\n* np.arange()\n* np.linspace()\nThese create arrays of numbers within a range with a specific step-size between each consecutive number in the array.\nIt is sometimes convenient to have Python create other arrays for you, depending on the problem that you are going to solve. For example, sometimes it is handy to create an array of all zeros, which can then be replaced later with data. This can be done by using np.zeros().\nGoing back to the Fibonacci example, let's say we want to store the first 10 elements of the Fibonacci sequence in an array for easy access in the future. To ready such an array, you simply do the following.", "data = np.zeros(10)\nprint(data)", "Now how do we assign a new value to an element of the array? We use the following \"square bracket\" notation:\n\narray_name[index_number] = value\n\nIn this, the array (with the name \"array_name\" or whatever it is you have named it) will have \"value\" replace whatever is in the position corresponding to \"index_number.\" \nArrays are numbered starting from 0, such that\n\nFirst position = 0\nSecond position = 1\nThird position = 2\netc.\n\nIt is a bit confusing, but after a bit of time, this becomes quite natural. Let's practice with the Fibonacci example. \nFirst, let's store the first Fibonacci number in our array. We use the brackets to store that value in the first position (0 index number) in the data array we made above.", "data[0] = fib(0)\nprint(data[0])", "Now you try it. 
Store the second Fibonacci number in the second position of your array and use a print statement to verify that you have done so.", "#Your code goes here", "Python array indexing is fairly straightforward once you get the hang of it.\nLet's say you wanted the last element of the array, but you don't quite recall the size of the array. One of the easiest ways to access that element is to use negative indexing.\nNegative indexing is the same as normal indexing, but backward, in the sense that you start with the last element of the array and count forward. More explicitly, for any array:\n\narray[-1] = last element of array\narray[-2] = second to last element of the array\narray[-3] = third to last element of the array\netc\n\nNow then, let's create an array using np.arange() with 10 elements, and see if you can access the last element and the second to last element using negative indexing. Print out these values.", "#Your code goes here", "Now, sometimes its useful to access more than one element of an array. Let's say that we have an array with 100 elements in the range [0,10] (including endpoints). If you recall, this can be done via the np.linspace() function.", "x = np.linspace(0,10,100)", "Now then, in order to get a range of elements rather than simply a single one, we use the notation:\n\nx[i_start,i_end+1]\n\nFor example, let's say you want the 1st, 2nd, and 3rd element, then you'd have to do\n\nx[0:3]\n\nIn this notation, \":\" represents you want everything between 0 and 3, and including 0. Let's test this.", "x[0:3]", "If you want everything passed a certain point of the array (including that point), then you would just eliminate the right number, for example\n\nx[90:]\n\nwould give you everything after (and including) the 90 index element. Similarly, if you want everything before a certain index\n\nx[:90]\n\nwould give you everything before the 90 index element.\nSo, let's say that you would want everything up to and including the tenth element of the array $x$. How would you do that?\n(Remember, the tenth element has an index of 9)", "#Your code goes here", "Finally, simply using the \":\" gives you all the elements in the array.\nChallenge 2 - Projectile Motion\nIn this challenge problem, you will be building what is known as a NUMERICAL INTEGRATOR in order to predict the projectiles trajectory through a gravitational field (i.e. what happens when you throw a ball through the air)\nLet's say that you have a projectile (let's say a ball) in a world with 2 spatial dimensions (dimensions x and y). This world has a constant acceleration due to gravity (call it simply g) that points in the -y direction and has a surface at y = 0.\nCan we calculate the motion of the projectile in the x-y plane after the projectile is given some initial velocity vector v? In particular, can we predict where the ball will land? With loops, yes we can!\nLet's first define all of the relevant variables so far. Let g = -9.8 (units of m/s, so an Earth-like world), the initial velocity vector being an array v = [3.,3.], and an initial position vector (call it r) in the x-y plane of r = [0.,1.]. 
For ease, let's use numpy arrays for the vectors.", "#Your code here\n#Answers\ng = -9.8\nv = np.array([3.,3.])\nr = np.array([0.,0.])", "Now then, remember that,\n$a = \\frac{dv}{dt}$ and thus, $dv = a\\ dt$\nSo, the change of an objects velocity ($dv$) is equal to the acceleration ($a = g$ in this case) multiplied by the change in time ($dt$)\nLikewise:\n$v_{x} = \\frac{dx}{dt}$ and $v_{y} = \\frac{dy}{dt}$, or\n$v_{x}\\ dt = dx$ and $v_{y}\\ dt = dy$\nNow, in this case, since there is only downward acceleration, the change of $v_{x}$ is 0 until the projective hits the ground.\nNow, we're going to define two functions, one that will calculate the velocity vector components and the other, the position vector components, and returning a new vector with the new components. I'll give you the first one.", "def intV(v,g,dt):\n deltaVy = g*dt\n vXnew = v[0]\n vYnew = v[1]+deltaVy\n return np.array([vXnew,vYnew])", "Now that we've defined intV (short for \"integrate v\"), let's use it real quick, just to test it out. Let dt = 0.1 (meaning, your taking a step forward in time by 0.1 seconds).", "dt = 0.1\nintV(v,g,dt)", "As you can see, $V_{x}$ hasn't changed, but $V_{y}$ has decreased, representing the projectile slowing down as it's going upward. \nI'll let you define the function now for the position vector. Call it intR, and it should be a function of (r,v,dt), and remember that now both $r_{x}$ and $r_{y}$ are changing. Remember to return an array.", "#Your code here.\n#Answer\ndef intR(r,v,dt):\n rXnew = r[0]+(v[0]*dt)\n rYnew = r[1]+(v[1]*dt)\n return np.array([rXnew,rYnew])", "Now we have the functions that calculate the changes in the position and velocity vectors. We're almost there!\nNow, we will need a while-loop in order to step the projectile through its trajectory. What would the condition be?\nWell, we know that the projectile stops when it hits the ground. So, one way we can do this is to have the condition being (r[1] &gt;= 0), since the ground is defined at y = 0.\nSo, having your intV and intR functions, along with a while-loop and a dt = 0.1 (known as the \"step-size\"), can you use Python to predict where the projectile will land?", "#Your code here.\n#Answer\ndt = 0.01\nwhile (r[1] >= 0.):\n v = intV(v,g,dt)\n r = intR(r,v,dt)\nprint(r)", "Now, note that we've defined the while-loop such that it doesn't stop exactly at 0. Firstly, this was strategic, since the initial y = 0, and thus the while-loop wouldn't initialize to begin with (you can try to change it). One way you can overcome this issue is to decrease dt, meaning that you're letting less time pass between each step. Ideally, you'd want dt to be infinitely small, but we don't have that convenience in reality. Re-run the cells, but with dt = 0.01 and we will get much closer to the correct answer.\nSo, we know where it's going to land...can we plot the full trajectory? Yes, but this is a bit complicated, and requires one last function: np.append(). https://docs.scipy.org/doc/numpy/reference/generated/numpy.append.html\nThe idea is to use np.append() to make an array that keeps track of where the ball has been.\nLet's define two empty arrays that will store our information (x and y). 
This is an odd idea, defining an array variable without any elements, so instead think of it as a basket without anything inside of it yet, and we will np.append() to fill it.", "x = np.array([]) #defining an empty array that will store x position\ny = np.array([]) #defining an empty array that will store y position", "Now, all you have to do, is each time the while-loop executes, you use np.append() for the x and y arrays, adding the new values to the end of them.\nHow do you do that? Well, looking at the np.append() documentation, for x, you do\nx = np.append(x,[r[0]])\nThe same syntax is used for the y array.\nAfter that, you simply use plt.plot(x,y,'o') to plot the trajectory of the ball (the last 'o' is used to change the plotting from a line to points).\nGood luck! Also, don't forget to reset your v and r arrays (otherwise, this will not work)", "#Your code goes here\n\n#Answer\nv = np.array([3.,3.])\nr = np.array([0.,0.])\n\ndt = 0.01\nwhile (r[1] >= 0.):\n v = intV(v,g,dt)\n r = intR(r,v,dt)\n x = np.append(x,r[0])\n y = np.append(y,r[1])\nprint(r)\n\nplt.plot(x,y,'o')\nplt.show()", "If everything turns out alright, you should get the characteristic parabola.\nAlso, if you're going to experiment with changing the intial position and velocity, remember to re-run the cell where we define the x and y arrays in order to clear the plot.\nNow you've learned how to do numerical integration. This technique is used all throughout Physics and Astronomy, and while there are more advanced ways to do it in order to increase accuracy, the heart of the idea is the same.\nHere is a figure made by Corbin Taylor (head of the Python team) that used numerical integration to track the position of a ray of light as it falls into a spinning black hole.\n<img src=\"./raytrace_picture.jpg\">\nD. Loading And Saving Data Arrays\nSo, we have learned a lot about data arrays and how we can manipulate them, either through mathematics or indexing. However, up until this point, all we've done is use arrays that we ourselves created. But what happens if we have data from elsewhere? Can Python use that?\nThe answer is of course yes, and while there are ways to import data that are a bit complicated at times, we're going to teach you some of the most basic, and most useful, ways.\nFor this section, we will be using plotting to visualize the data as you're working with it, and as such, we will be loading in the package \"matplotlib.pyplot\" which you used in Lecture 1", "%matplotlib inline\nimport matplotlib.pyplot as plt", "Now then, let's say we are doing a timing experiment, where we look at the brightness of an object as a function of time. This is actually a very common type of measurement that you may do in research, such as looking for dips in the brightness of stars as a way to detect planets.\nThis data is stored in a text file named timeseries_data.txt in the directory lecture2_data. Let's load it in.", "timeseriesData = np.loadtxt(\"./lecture2_data/timeseries_data.txt\")", "Now we have the data loaded into Python as a numpy array, and one handy thing you can do is to use Python to find the dimensions of the array. This is done by using \".shape\" as so.", "timeseriesData.shape", "In this format, we know that this is a 2x1000 array (two rows, 1000 columns). 
Another way you can think about this is that you have two 1000-element arrays contained within another array, where each of those arrays are elements (think of it as an array of arrays).\nThe first row is the time stamps when each measurement was taken, while the second row is that of the value of the measurement itself.\nFor ease of handling this data, one can in principle take each of these rows and create new arrays out of them. Let's do just that.", "t = timeseriesData[0,:]\nsignal = timeseriesData[1,:]", "Here, you have 2 dimensions with the array timeseriesData, and as such much specify the row first and then the column. So,\n- array_name[n,:] is the n-th row, and all columns within that row.\n- array_name[:,n] is the n-th column, and all rows within that particular column.\nNow then, let's see what the data looks like using the plot() function that you learned last time. Do you remember how to do it? Why don't you try! Plot t as your x-axis and signal as your y-axis. Don't forget to show your plot.", "#Your code here\n\n#Answer\nplt.plot(t,signal)\nplt.show()", "Looking at our data, you see clear spikes that jump well above most of the signal. (I've added this to the data to represent outliers that may sometimes appear when you're messing with raw data, and those must be dealt with). In astronomy, you sometimes have relativistic charged particles, not from your source, that hit the detector known as cosmic rays, and we often have to remove these.\nThere are some very complex codes that handle cosmic rays, but for our purposes (keeping it easy), we're going to just set a hard cut off of, let's say 15.\nIn order to do this, we can use conditional indexing in place of normal indices. This involves taking a conditional statement (more on those later) and testing whether it evaluates to True on each element in the array.\nThis gives an array of Booleans, which we can use as logical indices to select only the entries for which the logical statement is True.", "cutOff = 15.\nsignalFix = signal[signal < cutOff]", "In this case, the conditional statement that we have used is signal &lt; cutOff. \nHere, conditional indexing keeps the data that we have deemed \"good\" by this criteria. We can also do the same for the corresponding time stamps, since t and signal have the same length.", "tFix = t[signal < cutOff]", "Now let's plot it. You try.", "#Your code goes here\nplt.plot(tFix,signalFix)\nplt.show()", "Now that you have your data all cleaned up, it would be nice if we could save it for later and not have to go through the process of cleaning it up every time. Fear not! Python has you covered.\nThere are two formats that we are going to cover, one that is Python-specific, and the other a simple text format.\nFirst, we must package our two cleaned up arrays into one again. This can be done simply with the np.array() function.", "dataFix = np.array([tFix,signalFix])", "Then, we can use either the np.save() function or the np.savetxt function, the first saving the array into a '.npy' file and the other, into a '.txt' file. The syntax is pretty much the same for each.", "np.save('./lecture2_data/dataFix.npy',dataFix)\nnp.savetxt('./lecture2_data/dataFix.txt',dataFix)", "Now that your data files are saved, you can load them up again, using np.loadtxt() and np.load() for .txt and .npy files respectively. We used np.loadtxt() above, and np.load works the same way. 
So, let's load in the .npy file and see if our data was saved correctly.", "data = np.load('./lecture2_data/dataFix.npy')\nt = data[0,:]\nsignal = data[1,:]\nplt.plot(t,signal)\nplt.show()", "Now, let's see if you can do the same thing, but with the .txt file that we saved.", "#Your code goes here", "So, to summarize, not only can you manipulate arrays, but now you can save them and load them. In a way, those are some of the most important skills in scientific computing. Almost everything you'll be doing requires you know this, and now that you've mastered it, you're well on your way to being an expert in computational physics and astronomy!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Leguark/pygeomod
notebooks_GeoPyMC/PyMC for Geology Tutorial/PyMC geomod-2.ipynb
mit
[ "PyMC geomod 2: Incorporating the Geological model to the Bayesian Model in PyMC\nThis notebook explains how we can generate the whole model at every step of our Bayesian Inference. This will allow us to create much more complex constrains to reduce the uncertainty.\nImporting", "%matplotlib inline\nfrom IPython.core.display import Image\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sys, os\nimport shutil\n#import geobayes_simple as gs\n\nimport pymc as pm # PyMC 2\nfrom pymc.Matplot import plot\nfrom pymc import graph as gr\nimport numpy as np\n#import daft\nfrom IPython.core.pylabtools import figsize\nfigsize(12.5, 10)\n\n# as we have our model and pygeomod in different paths, let's change the pygeomod path to the default path.\nsys.path.append(\"C:\\Users\\Miguel\\workspace\\pygeomod\\pygeomod\")\n#sys.path.append(r'/home/jni/git/tmp/pygeomod_tmp')\nimport geogrid\nimport geomodeller_xml_obj as gxml\nreload(gxml)\n", "Coping our Model in a new folder", "try:\n shutil.copytree('C:/Users/Miguel/workspace/Thesis/Geomodeller/Basic_case/3_horizontal_layers', 'Temp_test/')\nexcept:\n print \"The folder is already created\"", "Simplest case: three horizontal layers, with depth unknow\nLoading pre-made Geomodeller model\nYou have to be very careful with the path, and all the bars to the RIGHT", "hor_lay = 'Temp_test/horizontal_layers.xml'#C:\\Users\\Miguel\\workspace\\Thesis\\Thesis\\Temp3\nprint hor_lay\n\nreload(geogrid)\nG1 = geogrid.GeoGrid()\n\n# Using G1, we can read the dimensions of our Murci geomodel\nG1.get_dimensions_from_geomodeller_xml_project(hor_lay)\n\n#G1.set_dimensions(dim=(0,23000,0,16000,-8000,1000))\nnx = 400\nny = 2\nnz = 400\nG1.define_regular_grid(nx,ny,nz)\n\nG1.update_from_geomodeller_project(hor_lay)", "Tha axis here represent the number of cells not the real values of geomodeller", "G1.plot_section('y',cell_pos=1,colorbar = True, cmap='RdBu', figsize=(6,6),interpolation= 'nearest' ,ve = 1, geomod_coord= True)", "Setting Bayes Model", "Image(\"Nice Notebooks\\THL_no_thickness.png\")\n\nalpha = pm.Normal(\"alpha\", -350, 0.05)\nalpha\n\nalpha = pm.Normal(\"alpha\", -350, 0.05)# value= -250)\n\n#Thickness of the layers\nthickness_layer1 = pm.Normal(\"thickness_layer1\", -150, 0.005) # a lot of uncertainty so the constrains are necessary\nthickness_layer2 = pm.Normal(\"thickness_layer2\", -150, 0.005)\n\n\n@pm.deterministic\ndef beta(alpha = alpha, thickness_layer1 = thickness_layer1):\n return alpha + thickness_layer1\n\n@pm.deterministic\ndef gamma(beta = beta, thickness_layer2 = thickness_layer2):\n return beta + thickness_layer2\n\n\n@pm.deterministic\ndef section(alpha = alpha, beta = beta, gamma = gamma):\n # Create the array we will use to modify the xml\n samples = [alpha,beta, gamma,alpha,beta, gamma]\n \n # Load the xml to be modify\n hor_lay = 'Temp_test\\horizontal_layers.xml'\n \n #Create the instance to modify the xml\n # Loading stuff\n reload(gxml)\n gmod_obj = gxml.GeomodellerClass()\n gmod_obj.load_geomodeller_file(hor_lay)\n \n # Create a dictionary so we can acces the section through the name\n section_dict = gmod_obj.create_sections_dict()\n \n # ## Get the points of all formation for a given section: Dictionary\n contact_points = gmod_obj.get_formation_point_data(section_dict['Section1'])\n \n #Perform the position Change\n for i, point in enumerate(contact_points):\n gmod_obj.change_formation_point_pos(point, y_coord = [samples[i],samples[i]])\n \n # Check the new position of points\n #points_changed = 
gmod_obj.get_point_coordinates(contact_points)\n #print \"Points coordinates\", points_changed\n \n # Write the new xml\n gmod_obj.write_xml(\"Temp_test/new.xml\")\n \n \n \n # Read the new xml\n hor_lay_new = 'Temp_test/new.xml'\n G1 = geogrid.GeoGrid()\n \n # Getting dimensions and definning grid\n \n G1.get_dimensions_from_geomodeller_xml_project(hor_lay_new)\n \n # Resolution!\n nx = 2\n ny = 2\n nz = 400\n G1.define_regular_grid(nx,ny,nz)\n \n # Updating project\n G1.update_from_geomodeller_project(hor_lay_new)\n \n # Printing new model\n # G1.plot_section('y',cell_pos=1,colorbar = True, cmap='RdBu', figsize=(6,6),interpolation= 'nearest' ,ve = 1, geomod_coord= True)\n return G1\n\n#MODEL!!\nmodel = pm.Model([alpha, beta, gamma, section, thickness_layer1, thickness_layer2])\n\nM = pm.MCMC(model)\nM.sample(iter=100)", "Extracting Posterior Traces to Arrays", "n_samples = 20\n\nalpha_samples, alpha_samples_all = M.trace('alpha')[-n_samples:], M.trace(\"alpha\")[:]\nbeta_samples, beta_samples_all = M.trace('beta')[-n_samples:], M.trace(\"beta\")[:]\ngamma_samples, gamma_samples_all = M.trace('gamma')[-n_samples:], M.trace('gamma')[:]\nsection_samples, section_samples_all = M.trace('section')[-n_samples:], M.trace('section')[:]\n\n#print section_samples", "Plotting the results", "fig, ax = plt.subplots(1, 2, figsize=(15, 5))\nax[0].hist(alpha_samples_all, histtype='stepfilled', bins=30, alpha=1,\n label=\"Upper most layer\", normed=True)\nax[0].hist(beta_samples_all, histtype='stepfilled', bins=30, alpha=1,\n label=\"Middle layer\", normed=True, color = \"g\")\nax[0].hist(gamma_samples_all, histtype='stepfilled', bins=30, alpha=1,\n label=\"Bottom most layer\", normed=True, color = \"r\")\n\n\nax[0].invert_xaxis()\nax[0].legend()\nax[0].set_title(r\"\"\"Posterior distributions of the layers\"\"\")\nax[0].set_xlabel(\"Depth(m)\")\n\n\nax[1].set_title(\"Representation\")\n\n\nfor i in section_samples:\n i.plot_section('y',cell_pos=1,colorbar = True, ax = ax[1], alpha = 0.3, figsize=(6,6),interpolation= 'nearest' ,ve = 1, geomod_coord= True, contour = True)\n\nplot(M)\n\nImage(\"Nice Notebooks\\THL_no_thickness.png\")\n\nplot(M)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
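The deterministic `section` in the notebook above rewrites a Geomodeller XML project stored on the author's machine, so it cannot be run elsewhere as-is. A minimal sketch of the same prior → deterministic → MCMC pattern, assuming only PyMC 2 and NumPy and replacing the Geomodeller call with a toy discretization of a single vertical column (the grid size and prior values are illustrative, not calibrated), could look like this:

```python
# Minimal sketch, assuming PyMC 2 and NumPy only; the toy `column`
# deterministic stands in for the Geomodeller section used above.
import numpy as np
import pymc as pm

# Priors: depth of the top contact and thicknesses of the two layers (metres)
alpha = pm.Normal("alpha", -350, 0.05)
thickness_layer1 = pm.Normal("thickness_layer1", -150, 0.005)
thickness_layer2 = pm.Normal("thickness_layer2", -150, 0.005)

@pm.deterministic
def beta(alpha=alpha, thickness_layer1=thickness_layer1):
    return alpha + thickness_layer1

@pm.deterministic
def gamma(beta=beta, thickness_layer2=thickness_layer2):
    return beta + thickness_layer2

@pm.deterministic
def column(alpha=alpha, beta=beta, gamma=gamma):
    # Discretize one vertical column from -1000 m to 0 m: count the contacts
    # lying below each cell to get a crude formation index per cell.
    z = np.linspace(-1000., 0., 400)
    return (z > alpha).astype(int) + (z > beta) + (z > gamma)

model = pm.Model([alpha, beta, gamma, thickness_layer1, thickness_layer2, column])
M = pm.MCMC(model)
M.sample(iter=100)

alpha_samples = M.trace("alpha")[:]    # posterior draws of the top contact
column_samples = M.trace("column")[:]  # one model realization per draw
```

Geological constraints (for example borehole observations of which formation sits at a known depth) would then enter as observed stochastics defined on top of `column`, which is what makes rebuilding the full model at every inference step worthwhile.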
ES-DOC/esdoc-jupyterhub
notebooks/niwa/cmip6/models/sandbox-3/aerosol.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Aerosol\nMIP Era: CMIP6\nInstitute: NIWA\nSource ID: SANDBOX-3\nTopic: Aerosol\nSub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model. \nProperties: 69 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:30\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'niwa', 'sandbox-3', 'aerosol')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Meteorological Forcings\n5. Key Properties --&gt; Resolution\n6. Key Properties --&gt; Tuning Applied\n7. Transport\n8. Emissions\n9. Concentrations\n10. Optical Radiative Properties\n11. Optical Radiative Properties --&gt; Absorption\n12. Optical Radiative Properties --&gt; Mixtures\n13. Optical Radiative Properties --&gt; Impact Of H2o\n14. Optical Radiative Properties --&gt; Radiative Scheme\n15. Optical Radiative Properties --&gt; Cloud Interactions\n16. Model \n1. Key Properties\nKey properties of the aerosol model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of aerosol model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nPrognostic variables in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/volume ratio for aerosols\" \n# \"3D number concenttration for aerosols\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of tracers in the aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre aerosol calculations generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nPhysical properties of seawater in ocean\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the time evolution of the prognostic variables", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses atmospheric chemistry time stepping\" \n# \"Specific timestepping (operator splitting)\" \n# \"Specific timestepping (integrated)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for aerosol physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the aerosol model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Meteorological Forcings\n**\n4.1. Variables 3D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nThree dimensionsal forcing variables, e.g. U, V, W, T, Q, P, conventive mass flux", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Variables 2D\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTwo dimensionsal forcing variables, e.g. land-sea mask definition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Frequency\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nFrequency with which meteological forcings are applied (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Resolution\nResolution in the aersosol model grid\n5.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. 
Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for aerosol model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Transport\nAerosol transport\n7.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of transport in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMethod for aerosol transport modeling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Specific transport scheme (eulerian)\" \n# \"Specific transport scheme (semi-lagrangian)\" \n# \"Specific transport scheme (eulerian and semi-lagrangian)\" \n# \"Specific transport scheme (lagrangian)\" \n# TODO - please enter value(s)\n", "7.3. Mass Conservation Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to ensure mass conservation.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Mass adjustment\" \n# \"Concentrations positivity\" \n# \"Gradients monotonicity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.4. Convention\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTransport by convention", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.transport.convention') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Uses Atmospheric chemistry transport scheme\" \n# \"Convective fluxes connected to tracers\" \n# \"Vertical velocities connected to tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8. Emissions\nAtmospheric aerosol emissions\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of emissions in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nMethod used to define aerosol species (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Prescribed (climatology)\" \n# \"Prescribed CMIP6\" \n# \"Prescribed above surface\" \n# \"Interactive\" \n# \"Interactive above surface\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the aerosol species are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Volcanos\" \n# \"Bare ground\" \n# \"Sea surface\" \n# \"Lightning\" \n# \"Fires\" \n# \"Aircraft\" \n# \"Anthropogenic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Prescribed Climatology\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify the climatology type for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Interannual\" \n# \"Annual\" \n# \"Monthly\" \n# \"Daily\" \n# TODO - please enter value(s)\n", "8.5. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed via a climatology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of aerosol species emitted and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Other Method Characteristics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCharacteristics of the &quot;other method&quot; used for aerosol emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.emissions.other_method_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Concentrations\nAtmospheric aerosol concentrations\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of concentrations in atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as mass mixing ratios.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Prescribed Fields Mmr\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed as AOD plus CCNs.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Optical Radiative Properties\nAerosol optical and radiative properties\n10.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of optical and radiative properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Optical Radiative Properties --&gt; Absorption\nAbsortion properties in aerosol scheme\n11.1. Black Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.2. Dust\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of dust at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Organics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAbsorption mass coefficient of organics at 550nm (if non-absorbing enter 0)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12. Optical Radiative Properties --&gt; Mixtures\n**\n12.1. External\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there external mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. 
Internal\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there internal mixing with respect to chemical composition?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.3. Mixing Rule\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf there is internal mixing with respect to chemical composition then indicate the mixinrg rule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Optical Radiative Properties --&gt; Impact Of H2o\n**\n13.1. Size\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact size?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.2. Internal Mixture\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes H2O impact internal mixture?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Optical Radiative Properties --&gt; Radiative Scheme\nRadiative scheme for aerosol\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Shortwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of shortwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Optical Radiative Properties --&gt; Cloud Interactions\nAerosol-cloud interactions\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of aerosol-cloud interactions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Twomey\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the Twomey effect included?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.3. Twomey Minimum Ccn\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf the Twomey effect is included, then what is the minimum CCN number?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Drizzle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect drizzle?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.5. Cloud Lifetime\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the scheme affect cloud lifetime?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Longwave Bands\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of longwave bands", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Model\nAerosol model\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmosperic aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProcesses included in the Aerosol model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dry deposition\" \n# \"Sedimentation\" \n# \"Wet deposition (impaction scavenging)\" \n# \"Wet deposition (nucleation scavenging)\" \n# \"Coagulation\" \n# \"Oxidation (gas phase)\" \n# \"Oxidation (in cloud)\" \n# \"Condensation\" \n# \"Ageing\" \n# \"Advection (horizontal)\" \n# \"Advection (vertical)\" \n# \"Heterogeneous chemistry\" \n# \"Nucleation\" \n# TODO - please enter value(s)\n", "16.3. Coupling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nOther model components coupled to the Aerosol model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Radiation\" \n# \"Land surface\" \n# \"Heterogeneous chemistry\" \n# \"Clouds\" \n# \"Ocean\" \n# \"Cryosphere\" \n# \"Gas phase chemistry\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.4. 
Gas Phase Precursors\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of gas phase aerosol precursors.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.gas_phase_precursors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"DMS\" \n# \"SO2\" \n# \"Ammonia\" \n# \"Iodine\" \n# \"Terpene\" \n# \"Isoprene\" \n# \"VOC\" \n# \"NOx\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.5. Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nType(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bulk\" \n# \"Modal\" \n# \"Bin\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.6. Bulk Scheme Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of species covered by the bulk scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.aerosol.model.bulk_scheme_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon / soot\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
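The ES-DOC notebook above is a fill-in template: each cell selects a property with `DOC.set_id(...)` and leaves a TODO where `DOC.set_value(...)` should be called. As a purely illustrative sketch, a completed fragment might look like the following; the author, email and scientific answers are invented placeholders (not NIWA's actual SANDBOX-3 description), while the property ids and the ENUM choice are taken verbatim from the cells above.

```python
# Illustrative placeholders only -- values are invented, not NIWA's actual
# SANDBOX-3 aerosol description; ids and choices come from the notebook above.
from pyesdoc.ipython.model_topic import NotebookOutput

DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-3', 'aerosol')
DOC.set_author("Jane Doe", "jane.doe@example.org")

DOC.set_id('cmip6.aerosol.key_properties.model_name')
DOC.set_value("Example aerosol scheme v1.0")         # STRING, cardinality 1.1

DOC.set_id('cmip6.aerosol.key_properties.family_approach')
DOC.set_value(False)                                  # BOOLEAN, cardinality 1.1

DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
DOC.set_value("Specific timestepping (operator splitting)")  # one of the listed ENUM choices
```

Publication of the filled-in description is still controlled by the `DOC.set_publication_status` cell near the top of the notebook (0 = do not publish, 1 = publish).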
kialio/gsfcpyboot
Day_01/01_Pandas/4. Statistical Data Modeling.ipynb
mit
[ "Statistical Data Modeling\nSome or most of you have probably taken some undergraduate- or graduate-level statistics courses. Unfortunately, the curricula for most introductory statisics courses are mostly focused on conducting statistical hypothesis tests as the primary means for interest: t-tests, chi-squared tests, analysis of variance, etc. Such tests seek to esimate whether groups or effects are \"statistically significant\", a concept that is poorly understood, and hence often misused, by most practioners. Even when interpreted correctly, statistical significance is a questionable goal for statistical inference, as it is of limited utility.\nA far more powerful approach to statistical analysis involves building flexible models with the overarching aim of estimating quantities of interest. This section of the tutorial illustrates how to use Python to build statistical models of low to moderate difficulty from scratch, and use them to extract estimates and associated measures of uncertainty.", "import numpy as np\nimport pandas as pd\n\n# Set some Pandas options\npd.set_option('display.notebook_repr_html', False)\npd.set_option('display.max_columns', 20)\npd.set_option('display.max_rows', 25)", "Estimation\nAn recurring statistical problem is finding estimates of the relevant parameters that correspond to the distribution that best represents our data.\nIn parametric inference, we specify a priori a suitable distribution, then choose the parameters that best fit the data.\n\ne.g. $\\mu$ and $\\sigma^2$ in the case of the normal distribution", "x = array([ 1.00201077, 1.58251956, 0.94515919, 6.48778002, 1.47764604,\n 5.18847071, 4.21988095, 2.85971522, 3.40044437, 3.74907745,\n 1.18065796, 3.74748775, 3.27328568, 3.19374927, 8.0726155 ,\n 0.90326139, 2.34460034, 2.14199217, 3.27446744, 3.58872357,\n 1.20611533, 2.16594393, 5.56610242, 4.66479977, 2.3573932 ])\n_ = hist(x, bins=8)", "Fitting data to probability distributions\nWe start with the problem of finding values for the parameters that provide the best fit between the model and the data, called point estimates. First, we need to define what we mean by ‘best fit’. There are two commonly used criteria:\n\nMethod of moments chooses the parameters so that the sample moments (typically the sample mean and variance) match the theoretical moments of our chosen distribution.\nMaximum likelihood chooses the parameters to maximize the likelihood, which measures how likely it is to observe our given sample.\n\nDiscrete Random Variables\n$$X = {0,1}$$\n$$Y = {\\ldots,-2,-1,0,1,2,\\ldots}$$\nProbability Mass Function: \nFor discrete $X$,\n$$Pr(X=x) = f(x|\\theta)$$\n\ne.g. Poisson distribution\nThe Poisson distribution models unbounded counts:\n<div style=\"font-size: 150%;\"> \n$$Pr(X=x)=\\frac{e^{-\\lambda}\\lambda^x}{x!}$$\n\n* $X=\\{0,1,2,\\ldots\\}$\n* $\\lambda > 0$\n\n$$E(X) = \\text{Var}(X) = \\lambda$$\n\n### Continuous Random Variables\n\n$$X \\in [0,1]$$\n\n$$Y \\in (-\\infty, \\infty)$$\n\n**Probability Density Function**: \n\nFor continuous $X$,\n\n$$Pr(x \\le X \\le x + dx) = f(x|\\theta)dx \\, \\text{ as } \\, dx \\rightarrow 0$$\n\n![Continuous variable](http://upload.wikimedia.org/wikipedia/commons/e/ec/Exponential_pdf.svg)\n\n***e.g. 
normal distribution***\n\n<div style=\"font-size: 150%;\"> \n$$f(x) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}}\\exp\\left[-\\frac{(x-\\mu)^2}{2\\sigma^2}\\right]$$\n\n* $X \\in \\mathbf{R}$\n* $\\mu \\in \\mathbf{R}$\n* $\\sigma>0$\n\n$$\\begin{align}E(X) &= \\mu \\cr\n\\text{Var}(X) &= \\sigma^2 \\end{align}$$\n\n### Example: Nashville Precipitation\n\nThe dataset `nashville_precip.txt` contains [NOAA precipitation data for Nashville measured since 1871](http://bit.ly/nasvhville_precip_data). The gamma distribution is often a good fit to aggregated rainfall data, and will be our candidate distribution in this case.", "precip = pd.read_table(\"data/nashville_precip.txt\", index_col=0, na_values='NA', delim_whitespace=True)\nprecip.head()\n\n_ = precip.hist(sharex=True, sharey=True, grid=False)\ntight_layout()", "The first step is recognixing what sort of distribution to fit our data to. A couple of observations:\n\nThe data are skewed, with a longer tail to the right than to the left\nThe data are positive-valued, since they are measuring rainfall\nThe data are continuous\n\nThere are a few possible choices, but one suitable alternative is the gamma distribution:\n<div style=\"font-size: 150%;\"> \n$$x \\sim \\text{Gamma}(\\alpha, \\beta) = \\frac{\\beta^{\\alpha}x^{\\alpha-1}e^{-\\beta x}}{\\Gamma(\\alpha)}$$\n</div>\n\n\nThe method of moments simply assigns the empirical mean and variance to their theoretical counterparts, so that we can solve for the parameters.\nSo, for the gamma distribution, the mean and variance are:\n<div style=\"font-size: 150%;\"> \n$$ \\hat{\\mu} = \\bar{X} = \\alpha \\beta $$\n$$ \\hat{\\sigma}^2 = S^2 = \\alpha \\beta^2 $$\n</div>\n\nSo, if we solve for these parameters, we can use a gamma distribution to describe our data:\n<div style=\"font-size: 150%;\"> \n$$ \\alpha = \\frac{\\bar{X}^2}{S^2}, \\, \\beta = \\frac{S^2}{\\bar{X}} $$\n</div>\n\nLet's deal with the missing value in the October data. Given what we are trying to do, it is most sensible to fill in the missing value with the average of the available values.", "precip.fillna(value={'Oct': precip.Oct.mean()}, inplace=True)", "Now, let's calculate the sample moments of interest, the means and variances by month:", "precip_mean = precip.mean()\nprecip_mean\n\nprecip_var = precip.var()\nprecip_var", "We then use these moments to estimate $\\alpha$ and $\\beta$ for each month:", "alpha_mom = precip_mean ** 2 / precip_var\nbeta_mom = precip_var / precip_mean\n\nalpha_mom, beta_mom", "We can use the gamma.pdf function in scipy.stats.distributions to plot the ditribtuions implied by the calculated alphas and betas. 
For example, here is January:", "from scipy.stats.distributions import gamma\n\nhist(precip.Jan, normed=True, bins=20)\nplot(linspace(0, 10), gamma.pdf(linspace(0, 10), alpha_mom[0], beta_mom[0]))", "Looping over all months, we can create a grid of plots for the distribution of rainfall, using the gamma distribution:", "axs = precip.hist(normed=True, figsize=(12, 8), sharex=True, sharey=True, bins=15, grid=False)\n\nfor ax in axs.ravel():\n \n # Get month\n m = ax.get_title()\n \n # Plot fitted distribution\n x = linspace(*ax.get_xlim())\n ax.plot(x, gamma.pdf(x, alpha_mom[m], beta_mom[m]))\n \n # Annotate with parameter estimates\n label = 'alpha = {0:.2f}\\nbeta = {1:.2f}'.format(alpha_mom[m], beta_mom[m])\n ax.annotate(label, xy=(10, 0.2))\n \ntight_layout()", "Maximum Likelihood\nMaximum likelihood (ML) fitting is usually more work than the method of moments, but it is preferred as the resulting estimator is known to have good theoretical properties. \nThere is a ton of theory regarding ML. We will restrict ourselves to the mechanics here.\nSay we have some data $y = y_1,y_2,\\ldots,y_n$ that is distributed according to some distribution:\n<div style=\"font-size: 120%;\"> \n$$Pr(Y_i=y_i | \\theta)$$\n</div>\n\nHere, for example, is a Poisson distribution that describes the distribution of some discrete variables, typically counts:", "y = np.random.poisson(5, size=100)\nplt.hist(y, bins=12, normed=True)\nxlabel('y'); ylabel('Pr(y)')", "The product $\\prod_{i=1}^n Pr(y_i | \\theta)$ gives us a measure of how likely it is to observe values $y_1,\\ldots,y_n$ given the parameters $\\theta$. Maximum likelihood fitting consists of choosing the appropriate function $l= Pr(Y|\\theta)$ to maximize for a given set of observations. We call this function the likelihood function, because it is a measure of how likely the observations are if the model is true.\n\nGiven these data, how likely is this model?\n\nIn the above model, the data were drawn from a Poisson distribution with parameter $\\lambda =5$.\n$$L(y|\\lambda=5) = \\frac{e^{-5} 5^y}{y!}$$\nSo, for any given value of $y$, we can calculate its likelihood:", "poisson_like = lambda x, lam: np.exp(-lam) * (lam**x) / (np.arange(x)+1).prod()\n\nlam = 6\nvalue = 10\npoisson_like(value, lam)\n\nnp.sum(poisson_like(yi, lam) for yi in y)\n\nlam = 8\nnp.sum(poisson_like(yi, lam) for yi in y)", "We can plot the likelihood function for any value of the parameter(s):", "lambdas = np.linspace(0,15)\nx = 5\nplt.plot(lambdas, [poisson_like(x, l) for l in lambdas])\nxlabel('$\\lambda$')\nylabel('L($\\lambda$|x={0})'.format(x))", "How is the likelihood function different than the probability distribution function (PDF)? The likelihood is a function of the parameter(s) given the data, whereas the PDF returns the probability of data given a particular parameter value. Here is the PDF of the Poisson for $\\lambda=5$.", "lam = 5\nxvals = arange(15)\nplt.bar(xvals, [poisson_like(x, lam) for x in xvals])\nxlabel('x')\nylabel('Pr(X|$\\lambda$=5)')", "Why are we interested in the likelihood function? \nA reasonable estimate of the true, unknown value for the parameter is one which maximizes the likelihood function. 
So, inference is reduced to an optimization problem.\nGoing back to the rainfall data, if we are using a gamma distribution we need to maximize:\n$$\begin{align}l(\alpha,\beta) &= \sum_{i=1}^n \log[\beta^{\alpha} x^{\alpha-1} e^{-x/\beta}\Gamma(\alpha)^{-1}] \cr \n&= n[(\alpha-1)\overline{\log(x)} - \bar{x}\beta + \alpha\log(\beta) - \log\Gamma(\alpha)]\end{align}$$\n(It's usually easier to work in the log scale)\nwhere $n = 2012 − 1871 = 141$ and the bar indicates an average over all i. We choose $\alpha$ and $\beta$ to maximize $l(\alpha,\beta)$.\nNotice $l$ is infinite if any $x$ is zero. We do not have any zeros, but we do have an NA value for one of the October data, which we dealt with above.\nFinding the MLE\nTo find the maximum of any function, we typically take the derivative with respect to the variable to be maximized, set it to zero and solve for that variable. \n$$\frac{\partial l(\alpha,\beta)}{\partial \beta} = n\left(\frac{\alpha}{\beta} - \bar{x}\right) = 0$$\nWhich can be solved as $\beta = \alpha/\bar{x}$. However, plugging this into the derivative with respect to $\alpha$ yields:\n$$\frac{\partial l(\alpha,\beta)}{\partial \alpha} = \log(\alpha) + \overline{\log(x)} - \log(\bar{x}) - \frac{\Gamma(\alpha)'}{\Gamma(\alpha)} = 0$$\nThis has no closed form solution. We must use numerical optimization!\nNumerical optimization algorithms take an initial \"guess\" at the solution, and iteratively improve the guess until it gets \"close enough\" to the answer.\nHere, we will use the Newton-Raphson algorithm:\n<div style=\"font-size: 120%;\"> \n$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$\n</div>\n\nWhich is available to us via SciPy:", "from scipy.optimize import newton", "Here is a graphical example of how Newton-Raphson converges on a solution, using an arbitrary function:", "# some function\nfunc = lambda x: 3./(1 + 400*np.exp(-2*x)) - 1\nxvals = np.linspace(0, 6)\nplot(xvals, func(xvals))\ntext(5.3, 2.1, '$f(x)$', fontsize=16)\n# zero line\nplot([0,6], [0,0], 'k-')\n# value at step n\nplot([4,4], [0,func(4)], 'k:')\nplt.text(4, -.2, '$x_n$', fontsize=16)\n# tangent line\ntanline = lambda x: -0.858 + 0.626*x\nplot(xvals, tanline(xvals), 'r--')\n# point at step n+1\nxprime = 0.858/0.626\nplot([xprime, xprime], [tanline(xprime), func(xprime)], 'k:')\nplt.text(xprime+.1, -.2, '$x_{n+1}$', fontsize=16)", "To apply the Newton-Raphson algorithm, we need a function that returns a vector containing the first and second derivatives of the function with respect to the variable of interest. In our case, this is:", "from scipy.special import psi, polygamma\n\ndlgamma = lambda m, log_mean, mean_log: np.log(m) - psi(m) - log_mean + mean_log\ndl2gamma = lambda m, *args: 1./m - polygamma(1, m)", "where log_mean and mean_log are $\log{\bar{x}}$ and $\overline{\log(x)}$, respectively. 
psi and polygamma are complex functions of the Gamma function that result when you take first and second derivatives of that function.", "# Calculate statistics\nlog_mean = precip.mean().apply(log)\nmean_log = precip.apply(log).mean()", "Time to optimize!", "# Alpha MLE for December\nalpha_mle = newton(dlgamma, 2, dl2gamma, args=(log_mean[-1], mean_log[-1]))\nalpha_mle", "And now plug this back into the solution for beta:\n<div style=\"font-size: 120%;\"> \n$$ \\beta = \\frac{\\alpha}{\\bar{X}} $$", "beta_mle = alpha_mle/precip.mean()[-1]\nbeta_mle", "We can compare the fit of the estimates derived from MLE to those from the method of moments:", "dec = precip.Dec\ndec.hist(normed=True, bins=10, grid=False)\nx = linspace(0, dec.max())\nplot(x, gamma.pdf(x, alpha_mom[-1], beta_mom[-1]), 'm-')\nplot(x, gamma.pdf(x, alpha_mle, beta_mle), 'r--')", "For some common distributions, SciPy includes methods for fitting via MLE:", "from scipy.stats import gamma\n\ngamma.fit(precip.Dec)", "This fit is not directly comparable to our estimates, however, because SciPy's gamma.fit method fits an odd 3-parameter version of the gamma distribution.\nExample: truncated distribution\nSuppose that we observe $Y$ truncated below at $a$ (where $a$ is known). If $X$ is the distribution of our observation, then:\n$$ P(X \\le x) = P(Y \\le x|Y \\gt a) = \\frac{P(a \\lt Y \\le x)}{P(Y \\gt a)}$$\n(so, $Y$ is the original variable and $X$ is the truncated variable) \nThen X has the density:\n$$f_X(x) = \\frac{f_Y (x)}{1−F_Y (a)} \\, \\text{for} \\, x \\gt a$$ \nSuppose $Y \\sim N(\\mu, \\sigma^2)$ and $x_1,\\ldots,x_n$ are independent observations of $X$. We can use maximum likelihood to find $\\mu$ and $\\sigma$. \nFirst, we can simulate a truncated distribution using a while statement to eliminate samples that are outside the support of the truncated distribution.", "x = np.random.normal(size=10000)\na = -1\nx_small = x < a\nwhile x_small.sum():\n x[x_small] = np.random.normal(size=x_small.sum())\n x_small = x < a\n \n_ = hist(x, bins=100)", "We can construct a log likelihood for this function using the conditional form:\n$$f_X(x) = \\frac{f_Y (x)}{1−F_Y (a)} \\, \\text{for} \\, x \\gt a$$", "from scipy.stats.distributions import norm\n\ntrunc_norm = lambda theta, a, x: -(np.log(norm.pdf(x, theta[0], theta[1])) - \n np.log(1 - norm.cdf(a, theta[0], theta[1]))).sum()", "For this example, we will use another optimization algorithm, the Nelder-Mead simplex algorithm. It has a couple of advantages: \n\nit does not require derivatives\nit can optimize (minimize) a vector of parameters\n\nSciPy implements this algorithm in its fmin function:", "from scipy.optimize import fmin\n\nfmin(trunc_norm, np.array([1,2]), args=(-1, x))", "In general, simulating data is a terrific way of testing your model before using it with real data.\nKernel density estimates\nIn some instances, we may not be interested in the parameters of a particular distribution of data, but just a smoothed representation of the data at hand. In this case, we can estimate the disribution non-parametrically (i.e. 
making no assumptions about the form of the underlying distribution) using kernel density estimation.", "# Some random data\ny = np.random.random(15) * 10\ny\n\nx = np.linspace(0, 10, 100)\n# Smoothing parameter\ns = 0.4\n# Calculate the kernels\nkernels = np.transpose([norm.pdf(x, yi, s) for yi in y])\nplot(x, kernels, 'k:')\nplot(x, kernels.sum(1))\nplot(y, np.zeros(len(y)), 'ro', ms=10)", "SciPy implements a Gaussian KDE that automatically chooses an appropriate bandwidth. Let's create a bi-modal distribution of data that is not easily summarized by a parametric distribution:", "# Create a bi-modal distribution with a mixture of Normals.\nx1 = np.random.normal(0, 3, 50)\nx2 = np.random.normal(4, 1, 50)\n\n# Append by row\nx = np.r_[x1, x2]\n\nplt.hist(x, bins=8, normed=True)\n\nfrom scipy.stats import kde\n\ndensity = kde.gaussian_kde(x)\nxgrid = np.linspace(x.min(), x.max(), 100)\nplt.hist(x, bins=8, normed=True)\nplt.plot(xgrid, density(xgrid), 'r-')", "Exercise: Cervical dystonia analysis\nRecall the cervical dystonia database, which is a clinical trial of botulinum toxin type B (BotB) for patients with cervical dystonia from nine U.S. sites. The response variable is measurements on the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment). One way to check the efficacy of the treatment is to compare the distribution of TWSTRS for control and treatment patients at the end of the study.\nUse the method of moments or MLE to calculate the mean and variance of TWSTRS at week 16 for one of the treatments and the control group. Assume that the distribution of the twstrs variable is normal:\n$$f(x \\mid \\mu, \\sigma^2) = \\sqrt{\\frac{1}{2\\pi\\sigma^2}} \\exp\\left{ -\\frac{1}{2} \\frac{(x-\\mu)^2}{\\sigma^2} \\right}$$", "cdystonia = pd.read_csv(\"data/cdystonia.csv\")\ncdystonia[cdystonia.obs==6].hist(column='twstrs', by=cdystonia.treat, bins=8)", "Regression models\nA general, primary goal of many statistical data analysis tasks is to relate the influence of one variable on another. For example, we may wish to know how different medical interventions influence the incidence or duration of disease, or perhaps a how baseball player's performance varies as a function of age.", "x = np.array([2.2, 4.3, 5.1, 5.8, 6.4, 8.0])\ny = np.array([0.4, 10.1, 14.0, 10.9, 15.4, 18.5])\nplot(x,y,'ro')", "We can build a model to characterize the relationship between $X$ and $Y$, recognizing that additional factors other than $X$ (the ones we have measured or are interested in) may influence the response variable $Y$.\n<div style=\"font-size: 150%;\"> \n$y_i = f(x_i) + \\epsilon_i$\n</div>\n\nwhere $f$ is some function, for example a linear function:\n<div style=\"font-size: 150%;\"> \n$y_i = \\beta_0 + \\beta_1 x_i + \\epsilon_i$\n</div>\n\nand $\\epsilon_i$ accounts for the difference between the observed response $y_i$ and its prediction from the model $\\hat{y_i} = \\beta_0 + \\beta_1 x_i$. This is sometimes referred to as process uncertainty.\nWe would like to select $\\beta_0, \\beta_1$ so that the difference between the predictions and the observations is zero, but this is not usually possible. 
Instead, we choose a reasonable criterion: the smallest sum of the squared differences between $\\hat{y}$ and $y$.\n<div style=\"font-size: 120%;\"> \n$$R^2 = \\sum_i (y_i - [\\beta_0 + \\beta_1 x_i])^2 = \\sum_i \\epsilon_i^2 $$ \n</div>\n\nSquaring serves two purposes: (1) to prevent positive and negative values from cancelling each other out and (2) to strongly penalize large deviations. Whether the latter is a good thing or not depends on the goals of the analysis.\nIn other words, we will select the parameters that minimize the squared error of the model.", "ss = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x) ** 2)\n\nss([0,1],x,y)\n\nb0,b1 = fmin(ss, [0,1], args=(x,y))\nb0,b1\n\nplot(x, y, 'ro')\nplot([0,10], [b0, b0+b1*10])\n\nplot(x, y, 'ro')\nplot([0,10], [b0, b0+b1*10])\nfor xi, yi in zip(x,y):\n plot([xi]*2, [yi, b0+b1*xi], 'k:')\nxlim(2, 9); ylim(0, 20)", "Minimizing the sum of squares is not the only criterion we can use; it is just a very popular (and successful) one. For example, we can try to minimize the sum of absolute differences:", "sabs = lambda theta, x, y: np.sum(np.abs(y - theta[0] - theta[1]*x))\nb0,b1 = fmin(sabs, [0,1], args=(x,y))\nprint b0,b1\nplot(x, y, 'ro')\nplot([0,10], [b0, b0+b1*10])", "We are not restricted to a straight-line regression model; we can represent a curved relationship between our variables by introducing polynomial terms. For example, a cubic model:\n<div style=\"font-size: 150%;\"> \n$y_i = \\beta_0 + \\beta_1 x_i + \\beta_2 x_i^2 + \\epsilon_i$\n</div>", "ss2 = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2)) ** 2)\nb0,b1,b2 = fmin(ss2, [1,1,-1], args=(x,y))\nprint b0,b1,b2\nplot(x, y, 'ro')\nxvals = np.linspace(0, 10, 100)\nplot(xvals, b0 + b1*xvals + b2*(xvals**2))", "Although polynomial model characterizes a nonlinear relationship, it is a linear problem in terms of estimation. That is, the regression model $f(y | x)$ is linear in the parameters.\nFor some data, it may be reasonable to consider polynomials of order>2. For example, consider the relationship between the number of home runs a baseball player hits and the number of runs batted in (RBI) they accumulate; clearly, the relationship is positive, but we may not expect a linear relationship.", "ss3 = lambda theta, x, y: np.sum((y - theta[0] - theta[1]*x - theta[2]*(x**2) - theta[3]*(x**3)) ** 2)\n\nbb = pd.read_csv(\"data/baseball.csv\", index_col=0)\nplot(bb.hr, bb.rbi, 'r.')\nb0,b1,b2,b3 = fmin(ss3, [0,1,-1,0], args=(bb.hr, bb.rbi))\nxvals = arange(40)\nplot(xvals, b0 + b1*xvals + b2*(xvals**2) + b3*(xvals**3))", "Of course, we need not fit least squares models by hand. The statsmodels package implements least squares models that allow for model fitting in a single line:", "import statsmodels.api as sm\n\nstraight_line = sm.OLS(y, sm.add_constant(x)).fit()\nstraight_line.summary()\n\nfrom statsmodels.formula.api import ols as OLS\n\ndata = pd.DataFrame(dict(x=x, y=y))\ncubic_fit = OLS('y ~ x + I(x**2)', data).fit()\n\ncubic_fit.summary()", "Exercise: Polynomial function\nWrite a function that specified a polynomial of arbitrary degree.\nModel Selection\nHow do we choose among competing models for a given dataset? More parameters are not necessarily better, from the standpoint of model fit. 
For example, fitting a 9-th order polynomial to the sample data from the above example certainly results in an overfit.", "def calc_poly(params, data):\n x = np.c_[[data**i for i in range(len(params))]]\n return np.dot(params, x)\n \nssp = lambda theta, x, y: np.sum((y - calc_poly(theta, x)) ** 2)\nbetas = fmin(ssp, np.zeros(10), args=(x,y), maxiter=1e6)\nplot(x, y, 'ro')\nxvals = np.linspace(0, max(x), 100)\nplot(xvals, calc_poly(betas, xvals))", "One approach is to use an information-theoretic criterion to select the most appropriate model. For example Akaike's Information Criterion (AIC) balances the fit of the model (in terms of the likelihood) with the number of parameters required to achieve that fit. We can easily calculate AIC as:\n$$AIC = n \\log(\\hat{\\sigma}^2) + 2p$$\nwhere $p$ is the number of parameters in the model and $\\hat{\\sigma}^2 = RSS/(n-p-1)$.\nNotice that as the number of parameters increase, the residual sum of squares goes down, but the second term (a penalty) increases.\nTo apply AIC to model selection, we choose the model that has the lowest AIC value.", "n = len(x)\n\naic = lambda rss, p, n: n * np.log(rss/(n-p-1)) + 2*p\n\nRSS1 = ss(fmin(ss, [0,1], args=(x,y)), x, y)\nRSS2 = ss2(fmin(ss2, [1,1,-1], args=(x,y)), x, y)\n\nprint aic(RSS1, 2, n), aic(RSS2, 3, n)", "Hence, we would select the 2-parameter (linear) model.\nLogistic Regression\nFitting a line to the relationship between two variables using the least squares approach is sensible when the variable we are trying to predict is continuous, but what about when the data are dichotomous?\n\nmale/female\npass/fail\ndied/survived\n\nLet's consider the problem of predicting survival in the Titanic disaster, based on our available information. For example, lets say that we want to predict survival as a function of the fare paid for the journey.", "titanic = pd.read_excel(\"data/titanic.xls\", \"titanic\")\ntitanic.name\n\njitter = np.random.normal(scale=0.02, size=len(titanic))\nplt.scatter(np.log(titanic.fare), titanic.survived + jitter, alpha=0.3)\nyticks([0,1])\nylabel(\"survived\")\nxlabel(\"log(fare)\")", "I have added random jitter on the y-axis to help visualize the density of the points, and have plotted fare on the log scale.\nClearly, fitting a line through this data makes little sense, for several reasons. First, for most values of the predictor variable, the line would predict values that are not zero or one. Second, it would seem odd to choose least squares (or similar) as a criterion for selecting the best line.", "x = np.log(titanic.fare[titanic.fare>0])\ny = titanic.survived[titanic.fare>0]\nbetas_titanic = fmin(ss, [1,1], args=(x,y))\n\njitter = np.random.normal(scale=0.02, size=len(titanic))\nplt.scatter(np.log(titanic.fare), titanic.survived + jitter, alpha=0.3)\nyticks([0,1])\nylabel(\"survived\")\nxlabel(\"log(fare)\")\nplt.plot([0,7], [betas_titanic[0], betas_titanic[0] + betas_titanic[1]*7.])", "If we look at this data, we can see that for most values of fare, there are some individuals that survived and some that did not. However, notice that the cloud of points is denser on the \"survived\" (y=1) side for larger values of fare than on the \"died\" (y=0) side.\nStochastic model\nRather than model the binary outcome explicitly, it makes sense instead to model the probability of death or survival in a stochastic model. Probabilities are measured on a continuous [0,1] scale, which may be more amenable for prediction using a regression line. 
We need to consider a different probability model for this exercise, however; let's consider the Bernoulli distribution as a generative model for our data:\n<div style=\"font-size: 120%;\"> \n$$f(y|p) = p^{y} (1-p)^{1-y}$$ \n</div>\n\nwhere $y \\in \\{0,1\\}$ and $p \\in [0,1]$. So, this model predicts whether $y$ is zero or one as a function of the probability $p$. Notice that when $y=1$, the $1-p$ term disappears, and when $y=0$, the $p$ term disappears.\nSo, the model we want to fit should look something like this:\n<div style=\"font-size: 120%;\"> \n$$p_i = \\beta_0 + \\beta_1 x_i + \\epsilon_i$$\n\nHowever, since $p$ is constrained to be between zero and one, it is easy to see where a linear (or polynomial) model might predict values outside of this range. We can modify this model slightly by using a **link function** to transform the probability to have an unbounded range on a new scale. Specifically, we can use a **logit transformation** as our link function:\n\n<div style=\"font-size: 120%;\"> \n$$\\text{logit}(p) = \\log\\left[\\frac{p}{1-p}\\right] = x$$\n\nHere's a plot of $p/(1-p)$", "logit = lambda p: np.log(p/(1.-p))\nunit_interval = np.linspace(0,1)\nplt.plot(unit_interval/(1-unit_interval), unit_interval)", "And here's the logit function:", "plt.plot(logit(unit_interval), unit_interval)", "The inverse of the logit transformation is:\n<div style=\"font-size: 150%;\"> \n$$p = \\frac{1}{1 + \\exp(-x)}$$\n\nSo, now our model is:\n\n<div style=\"font-size: 120%;\"> \n$$\\text{logit}(p_i) = \\beta_0 + \\beta_1 x_i + \\epsilon_i$$\n\nWe can fit this model using maximum likelihood. Our likelihood, again based on the Bernoulli model is:\n\n<div style=\"font-size: 120%;\"> \n$$L(y|p) = \\prod_{i=1}^n p_i^{y_i} (1-p_i)^{1-y_i}$$\n\nwhich, on the log scale is:\n\n<div style=\"font-size: 120%;\"> \n$$l(y|p) = \\sum_{i=1}^n y_i \\log(p_i) + (1-y_i)\\log(1-p_i)$$\n\nWe can easily implement this in Python, keeping in mind that `fmin` minimizes, rather than maximizes, functions:", "invlogit = lambda x: 1. / (1 + np.exp(-x))\n\ndef logistic_like(theta, x, y):\n p = invlogit(theta[0] + theta[1] * x)\n # Return negative of log-likelihood\n return -np.sum(y * np.log(p) + (1-y) * np.log(1 - p))", "Remove null values from variables", "x, y = titanic[titanic.fare.notnull()][['fare', 'survived']].values.T", "... and fit the model.", "b0,b1 = fmin(logistic_like, [0.5,0], args=(x,y))\nb0, b1\n\njitter = np.random.normal(scale=0.01, size=len(x))\nplot(x, y+jitter, 'r.', alpha=0.3)\nyticks([0,.25,.5,.75,1])\nxvals = np.linspace(0, 600)\nplot(xvals, invlogit(b0+b1*xvals))", "As with our least squares model, we can easily fit logistic regression models in statsmodels, in this case using the GLM (generalized linear model) class with a binomial error distribution specified.", "logistic = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()\nlogistic.summary()", "Exercise: multivariate logistic regression\nWhich other variables might be relevant for predicting the probability of surviving the Titanic? 
Generalize the model likelihood to include 2 or 3 other covariates from the dataset.\nBootstrapping\nParametric inference can be non-robust:\n\ninaccurate if parametric assumptions are violated\nif we rely on asymptotic results, we may not achieve an acceptable level of accuracy\n\nParametric inference can be difficult:\n\nderivation of sampling distribution may not be possible\n\nAn alternative is to estimate the sampling distribution of a statistic empirically without making assumptions about the form of the population.\nWe have seen this already with the kernel density estimate.\nNon-parametric Bootstrap\nThe bootstrap is a resampling method discovered by Brad Efron that allows one to approximate the true sampling distribution of a statistic, and thereby obtain estimates of its mean and variance.\nBootstrap sample:\n<div style=\"font-size: 120%;\"> \n$$S_1^* = \\{x_{11}^*, x_{12}^*, \\ldots, x_{1n}^*\\}$$\n</div>\n\n$S_i^*$ is a sample of size $n$, with replacement.\nIn Python, we have already seen the NumPy function permutation that can be used in conjunction with Pandas' take method to generate a random sample of some data without replacement:", "np.random.permutation(titanic.name)[:5]", "Similarly, we can use the random.randint method to generate a sample with replacement, which we can use when bootstrapping.", "random_ind = np.random.randint(0, len(titanic), 5)\ntitanic.name[random_ind]", "We regard S as an \"estimate\" of population P.\n\npopulation : sample :: sample : bootstrap sample\n\nThe idea is to generate replicate bootstrap samples:\n<div style=\"font-size: 120%;\"> \n$$S^* = \\{S_1^*, S_2^*, \\ldots, S_R^*\\}$$\n</div>\n\nCompute statistic $t$ (estimate) for each bootstrap sample:\n<div style=\"font-size: 120%;\"> \n$$T_i^* = t(S_i^*)$$\n</div>", "n = 10\nR = 1000\n# Original sample (n=10)\nx = np.random.normal(size=n)\n# 1000 bootstrap samples of size 10\ns = [x[np.random.randint(0,n,n)].mean() for i in range(R)]\n_ = hist(s, bins=30)", "Bootstrap Estimates\nFrom our bootstrapped samples, we can extract estimates of the expectation and its variance:\n$$\\bar{T}^* = \\hat{E}(T^*) = \\frac{\\sum_i T_i^*}{R}$$\n$$\\hat{\\text{Var}}(T^*) = \\frac{\\sum_i (T_i^* - \\bar{T}^*)^2}{R-1}$$", "boot_mean = np.sum(s)/R\nboot_mean\n\nboot_var = ((np.array(s) - boot_mean) ** 2).sum() / (R-1)\nboot_var", "Since we have estimated the expectation of the bootstrapped statistics, we can estimate the bias of T:\n$$\\hat{B}^* = \\bar{T}^* - T$$", "boot_mean - np.mean(x)", "Bootstrap error\nThere are two sources of error in bootstrap estimates:\n\nSampling error from the selection of $S$.\nBootstrap error from failing to enumerate all possible bootstrap samples.\n\nFor the sake of accuracy, it is prudent to choose at least R=1000.\nBootstrap Percentile Intervals\nAn attractive feature of bootstrap statistics is the ease with which you can obtain an estimate of uncertainty for a given statistic. 
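For instance, a rough sketch of such an interval for the mean, using numpy's percentile function on the bootstrap replicates s generated above (the 95% level is just an example):\n\n```python\n# Sketch: 95% bootstrap percentile interval from the replicates\nimport numpy as np\n\nlower, upper = np.percentile(s, [2.5, 97.5])\nlower, upper\n```\n\nThe same interval can also be read directly off the ordered replicates, which is what the cells below do. 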
We simply use the empirical quantiles of the bootstrapped statistics to obtain percentiles corresponding to a confidence interval of interest.\nThis employs the ordered bootstrap replicates:\n$$T_{(1)}^*, T_{(2)}^*, \\ldots, T_{(R)}^*$$\nSimply extract the $100(\\alpha/2)$ and $100(1-\\alpha/2)$ percentiles:\n$$T_{[(R+1)\\alpha/2]}^* \\lt \\theta \\lt T_{[(R+1)(1-\\alpha/2)]}^*$$", "s_sorted = np.sort(s)\ns_sorted[:10]\n\ns_sorted[-10:]\n\nalpha = 0.05\ns_sorted[[(R+1)*alpha/2, (R+1)*(1-alpha/2)]]", "Exercise: Cervical dystonia bootstrap estimates\nUse bootstrapping to estimate the mean of one of the treatment groups, and calculate percentile intervals for the mean." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Bio204-class/bio204-notebooks
inclass-2016-02-22-Confidence-Intervals.ipynb
cc0-1.0
[ "Standard Error of the Mean Revisited\nLet's return to a topic we first discussed in our introduction to simulation -- the standard error of the mean.\nHere was the scenario we explored:\n\n\nYou want to learn about a variable $X$ in a population of interest. \n\n\nAssume $X \\sim N(\\mu,\\sigma)$. \n\n\nYou take a random sample of size $n$ from the population and estimate the sample mean $\\overline{x}$.\n\n\nYou repeat step 3 a large number of times, calculating a new sample mean each time.\n\n\nWe call the distribution of sample means the sampling distribution of the mean (note that you can also estimate the sampling distribution for any statistic of interest).\n\n\nYou examine the spread of your sample means. You will find that the sampling distribution of the mean is approximately normally distributed with mean $\\sim\\mu$, and with a standard deviation $\\sim\\frac{\\sigma}{\\sqrt{n}}$. \n $$\n\\overline{x} \\sim N \\left( \\mu, \\frac{\\sigma}{\\sqrt{n}}\\ \\right)\n $$\n\n\nWe refer to the standard deviation of a sampling distribution of a statistic as the standard error of that statistic. When the statistic of interest is the mean, this is the standard error of the mean (standard errors of the mean are often just referred to as \"standard errors\" as this is the most common standard error one usually calculates)", "%matplotlib inline\nimport numpy as np\nimport scipy.stats as stats\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib\n\nmatplotlib.style.use(\"bmh\")\n\nnp.random.seed(20160222) # setting seed ensures reproducibility\n\nmu, sigma = 10, 2\npopn = stats.norm(loc=mu, scale=sigma)\n\nssizes = [25, 50, 100, 200, 400]\nsamples = [popn.rvs(size=(sz,100)) for sz in ssizes]\nmeans = [np.mean(sample, axis=0) for sample in samples] \nse = [np.std(mean) for mean in means]", "Explanation of code above\nThe code above contains three list comprehensions for very compactly simulating the sampling distribution of the mean:\n\nCreate a list of sample sizes to simulate (ssizes)\nFor each sample size (sz), generate 100 random samples, and store those samples in a matrix of size sz $\\times$ 100 (i.e. each column is a sample)\nFor each matrix created in step 2, calculate column means (= sample means)\nFor each set of sample means in 3, calculate the standard deviation (= standard error)", "# make a pair of plots\nssmin, ssmax = min(ssizes), max(ssizes)\ntheoryss = np.linspace(ssmin, ssmax, 250)\n\nfig, (ax1, ax2) = plt.subplots(1,2) # 1 x 2 grid of plots\nfig.set_size_inches(12,4)\n\n# plot histograms of sampling distributions\nfor (ss,mean) in zip(ssizes, means):\n ax1.hist(mean, normed=True, histtype='stepfilled', alpha=0.75, label=\"n = %d\" % ss)\n\nax1.set_xlabel(\"X\")\nax1.set_ylabel(\"Density\")\nax1.legend()\nax1.set_title(\"Sampling Distributions of Mean\\nFor Different Sample Sizes\")\n\n# plot simulation SE of mean vs theory SE of mean\nax2.plot(ssizes, se, 'ko', label='simulation')\nax2.plot(theoryss, sigma/np.sqrt(theoryss), color='red', label=\"theory\")\nax2.set_xlim(0, ssmax*1.1)\nax2.set_ylim(0, max(se)*1.1)\nax2.set_xlabel(\"sample size ($n$)\")\nax2.set_ylabel(\"SE of mean\")\nax2.legend()\nax2.set_title(\"Standard Error of Mean\\nTheoretical Expectation vs. Simulation\")\n\npass", "Sample Estimate of the Standard Error of the Mean\nIn real life, we don't have access to the sampling distribution of the mean or the true population parameter $\\sigma$ from which we can calculate the standard error of the mean. 
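In practice we typically observe just a single sample; as a small illustration (drawing one sample of size 50 from the popn object defined above, purely to have something concrete in hand):\n\n```python\n# Sketch: a single observed sample -- all we would have in practice\none_sample = popn.rvs(size=50)\none_sample[:10]\n```\n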
However, we can still use our unbiased sample estimator of the standard deviation, $s$, to estimate the standard error of the mean.\n$$\n{SE}_{\\overline{x}} = \\frac{s}{\\sqrt{n}}\n$$\nConditions for sampling distribution to be nearly normal\nFor the sampling distribution of the mean to be nearly normal with ${SE}_\\overline{x}$ accurate, the following conditions should hold:\n\nSample observations are independent\nSample size is large ($n \\geq 30$ is good rule of thumb)\nPopulation distribution is not strongly skewed\n\nConfidence Intervals for the Mean\nWe know that given a random sample from a population of interest, the mean of $X$ in our random sample is unlikely to be the true population mean of $X$. However, our simulations have taught us a number of things:\n\nAs sample size increases, the sample estimate of the mean is more likely to be close to the true mean\nAs sample size increases, the standard deviation of the sampling distribution of the mean (= standard error of the mean) decreases\n\nWe can use this knowledge to calculate plausible ranges of values for the mean. We call such ranges confidence intervals for the mean (the idea of confidence intervals can apply to other statistics as well). We're going to express our confidence intervals in terms of multiples of the standard error. \nLet's start by using simulation to explore how often our confidence intervals capture the true mean when we base our confidence intervals on different multiples, $z$, of the SE.\n$$\n{CI}\\overline{x} = \\overline{x} \\pm (z \\times {SE}\\overline{x})\n$$\nFor the purposes of this simulation, let's consider samples of size 50, drawn from the same population of interest as before (popn above). We're going to generate a large number of such samples, and for each sample we will calculate the CI of the mean using the formula above. We will then ask, \"for what fraction of the samples did our CI overlap the true population mean\"? This will give us a sense of how well different confidence intervals do in providing a plausible range for the true mean.", "N = 1000\nsamples50 = popn.rvs(size=(50, N)) # N samples of size 50\nmeans50 = np.mean(samples50, axis=0) # sample means\nstd50 = np.std(samples50, axis=0, ddof=1) # sample std devs\nse50 = std50/np.sqrt(50) # sample standard errors\n\nfrac_overlap_mu = []\nzs = np.arange(1,3,step=0.05)\nfor z in zs:\n lowCI = means50 - z*se50\n highCI = means50 + z*se50 \n overlap_mu = np.logical_and(lowCI <= mu, highCI >= mu)\n frac = np.count_nonzero(overlap_mu)/N\n frac_overlap_mu.append(frac)\n \nfrac_overlap_mu = np.array(frac_overlap_mu)\n\nplt.plot(zs, frac_overlap_mu * 100, 'k-', label=\"simulation\")\nplt.ylim(60, 104)\nplt.xlim(1, 3)\nplt.xlabel(\"z in CI = sample mean ± z × SE\")\nplt.ylabel(u\"% of CIs that include popn mean\")\n\n# plot theoretical expectation\nstdnorm = stats.norm(loc=0, scale=1)\nplt.plot(zs, (1 - (2* stdnorm.sf(zs)))*100, 'r-', alpha=0.5, label=\"theory\")\nplt.legend(loc='lower right')\n\npass", "Interpreting our simulation\nHow should we interpret the results above? We found as we increased the scaling of our confidence intervals (larger $z$), the true mean was within sample confidence intervals a greater proportion of the time. 
For example, when $z = 1$ we found that the true mean was within our CIs roughly 67% of the time, while at $z = 2$ the true mean was within our confidence intervals approximately 95% of the time.\nWe call $x \\pm 2 \\times {SE}_\\overline{x}$ the approximate 95% confidence interval of the mean (see below for exact values of z). Given such a CI calculated from a random sample we can say we are \"95% confident\" that we have captured the true mean within the bounds of the CI (subject to the caveats about the sampling distribution above). By this we mean that if we took many samples and built a confidence interval from each sample using the equation above, then about 95% of those intervals would contain the actual mean, μ. Note that this is exactly what we did in our simulation!", "ndraw = 100\nx = means50[:ndraw]\ny = range(0,ndraw)\nplt.errorbar(x, y, xerr=1.96*se50[:ndraw], fmt='o')\nplt.vlines(mu, 0, ndraw, linestyle='dashed', color='#D55E00', linewidth=3, zorder=5)\nplt.ylim(-1,101)\nplt.yticks([])\nplt.title(\"95% CI: mean ± 1.96×SE\\nfor 100 samples of size 50\")\nfig = plt.gcf()\nfig.set_size_inches(4,8)", "Generating a table of CIs and corresponding margins of error\nThe table below gives the percent CI and the corresponding margin of error ($z \\times {SE}$) for that confidence interval.", "perc = np.array([.80, .90, .95, .99, .997])\nzval = stdnorm.ppf(1 - (1 - perc)/2) # account for the two tails of the sampling distn\n\nprint(\"% CI \\tz × SE\")\nprint(\"-----\\t------\")\nfor (i,j) in zip(perc, zval):\n print(\"{:5.1f}\\t{:6.2f}\".format(i*100, j)) \n # see the string docs (https://docs.python.org/3.4/library/string.html)\n # for information on how formatting works \n ", "Interpreting Confidence Intervals\nYou should be careful in interpreting confidence intervals.\nThe correct interpretation is wording like:\n\nWe are XX% confident that the population parameter is between..." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.14/_downloads/plot_creating_data_structures.ipynb
bsd-3-clause
[ "%matplotlib inline", "Creating MNE-Python's data structures from scratch", "from __future__ import print_function\n\nimport mne\nimport numpy as np", "Creating :class:Info &lt;mne.Info&gt; objects\n<div class=\"alert alert-info\"><h4>Note</h4><p>for full documentation on the `Info` object, see\n `tut_info_objects`. See also\n `sphx_glr_auto_examples_io_plot_objects_from_arrays.py`.</p></div>\n\nNormally, :class:mne.Info objects are created by the various\ndata import functions &lt;ch_convert&gt;.\nHowever, if you wish to create one from scratch, you can use the\n:func:mne.create_info function to initialize the minimally required\nfields. Further fields can be assigned later as one would with a regular\ndictionary.\nThe following creates the absolute minimum info structure:", "# Create some dummy metadata\nn_channels = 32\nsampling_rate = 200\ninfo = mne.create_info(n_channels, sampling_rate)\nprint(info)", "You can also supply more extensive metadata:", "# Names for each channel\nchannel_names = ['MEG1', 'MEG2', 'Cz', 'Pz', 'EOG']\n\n# The type (mag, grad, eeg, eog, misc, ...) of each channel\nchannel_types = ['grad', 'grad', 'eeg', 'eeg', 'eog']\n\n# The sampling rate of the recording\nsfreq = 1000 # in Hertz\n\n# The EEG channels use the standard naming strategy.\n# By supplying the 'montage' parameter, approximate locations\n# will be added for them\nmontage = 'standard_1005'\n\n# Initialize required fields\ninfo = mne.create_info(channel_names, sfreq, channel_types, montage)\n\n# Add some more information\ninfo['description'] = 'My custom dataset'\ninfo['bads'] = ['Pz'] # Names of bad channels\n\nprint(info)", "<div class=\"alert alert-info\"><h4>Note</h4><p>When assigning new values to the fields of an\n :class:`mne.Info` object, it is important that the\n fields are consistent:\n\n - The length of the channel information field `chs` must be\n `nchan`.\n - The length of the `ch_names` field must be `nchan`.\n - The `ch_names` field should be consistent with the `name` field\n of the channel information contained in `chs`.</p></div>\n\n\nCreating :class:Raw &lt;mne.io.Raw&gt; objects\nTo create a :class:mne.io.Raw object from scratch, you can use the\n:class:mne.io.RawArray class, which implements raw data that is backed by a\nnumpy array. The correct units for the data are:\n\nV: eeg, eog, seeg, emg, ecg, bio, ecog\nT: mag\nT/m: grad\nM: hbo, hbr\nAm: dipole\nAU: misc\n\nThe :class:mne.io.RawArray constructor simply takes the data matrix and\n:class:mne.Info object:", "# Generate some random data\ndata = np.random.randn(5, 1000)\n\n# Initialize an info structure\ninfo = mne.create_info(\n ch_names=['MEG1', 'MEG2', 'EEG1', 'EEG2', 'EOG'],\n ch_types=['grad', 'grad', 'eeg', 'eeg', 'eog'],\n sfreq=100\n)\n\ncustom_raw = mne.io.RawArray(data, info)\nprint(custom_raw)", "Creating :class:Epochs &lt;mne.Epochs&gt; objects\nTo create an :class:mne.Epochs object from scratch, you can use the\n:class:mne.EpochsArray class, which uses a numpy array directly without\nwrapping a raw object. The array must be of shape(n_epochs, n_chans,\nn_times). The proper units of measure are listed above.", "# Generate some random data: 10 epochs, 5 channels, 2 seconds per epoch\nsfreq = 100\ndata = np.random.randn(10, 5, sfreq * 2)\n\n# Initialize an info structure\ninfo = mne.create_info(\n ch_names=['MEG1', 'MEG2', 'EEG1', 'EEG2', 'EOG'],\n ch_types=['grad', 'grad', 'eeg', 'eeg', 'eog'],\n sfreq=sfreq\n)", "It is necessary to supply an \"events\" array in order to create an Epochs\nobject. 
This is of shape (n_events, 3) where the first column is the sample\nnumber (time) of the event, the second column indicates the value from which\nthe transition is made (only used when the new value is bigger than the\nold one), and the third column is the new event value.", "# Create an event matrix: 10 events with alternating event codes\nevents = np.array([\n [0, 0, 1],\n [1, 0, 2],\n [2, 0, 1],\n [3, 0, 2],\n [4, 0, 1],\n [5, 0, 2],\n [6, 0, 1],\n [7, 0, 2],\n [8, 0, 1],\n [9, 0, 2],\n])", "More information about the event codes: subject was either smiling or\nfrowning", "event_id = dict(smiling=1, frowning=2)", "Finally, we must specify the beginning of an epoch (the end will be inferred\nfrom the sampling frequency and n_samples)", "# Trials were cut from -0.1 to 1.0 seconds\ntmin = -0.1", "Now we can create the :class:mne.EpochsArray object", "custom_epochs = mne.EpochsArray(data, info, events, tmin, event_id)\n\nprint(custom_epochs)\n\n# We can treat the epochs object as we would any other\n_ = custom_epochs['smiling'].average().plot()", "Creating :class:Evoked &lt;mne.Evoked&gt; Objects\nIf you already have data that is collapsed across trials, you may also\ndirectly create an evoked array. Its constructor accepts an array of\nshape (n_chans, n_times) in addition to some bookkeeping parameters.\nThe proper units of measure for the data are listed above.", "# The averaged data\ndata_evoked = data.mean(0)\n\n# The number of epochs that were averaged\nnave = data.shape[0]\n\n# A comment to describe the evoked data (usually the condition name)\ncomment = \"Smiley faces\"\n\n# Create the Evoked object\nevoked_array = mne.EvokedArray(data_evoked, info, tmin,\n comment=comment, nave=nave)\nprint(evoked_array)\n_ = evoked_array.plot()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
zenotech/zPost
ipynb/CARADONNA_TUNG/Cara Tung Rotor.ipynb
bsd-3-clause
[ "Caradonna Tung Hover Rotor\nThis case aims to reproduce the results from an experiment undertaken by F. X. Caradonna and C. Tung in 1981 and published in NASA TM 81232. The rotor airfoil section is a NACA0012 pitched about 25% chord. For this simulation the hub has been ignored.\nThis test case uses a moving (rotating) frame of reference to model the rotor. \nSee the NASA Technical Report below, page 34.", "from IPython.display import FileLink, display \ndisplay(FileLink('data/NASA_TM_81232.pdf')) ", "Define Data Location\nFor remote data the interaction will use ssh to securely interact with the data<br/>\nThis uses the reverse connection capability in paraview so that the paraview server can be submitted to a job scheduler<br/>\nNote: The default paraview server connection will use port 11111", "remote_data = True\nremote_server_auto = True\n\ncase_name = 'caratung-ar-6p0-pitch-8p0'\ndata_dir='/gpfs/thirdparty/zenotech/home/dstandingford/VALIDATION/CARATUNG'\ndata_host='dstandingford@vis03'\nparaview_cmd='mpiexec /gpfs/cfms/apps/zCFD/bin/pvserver'\n\nif not remote_server_auto:\n paraview_cmd=None\n\nif not remote_data:\n data_host='localhost'\n paraview_cmd=None", "Validation and regression", "# Validation for Caradonna Tung Rotor (Mach at Tip - 0.877) from NASA TM 81232, page 34\nvalidate = True\nregression = True\n# Make movie option currently not working - TODO\nmake_movie = False\n\nif (validate):\n valid = True\n validation_tol = 0.0100\n valid_lower_cl_0p50 = 0.2298-validation_tol\n valid_upper_cl_0p50 = 0.2298+validation_tol\n valid_lower_cl_0p68 = 0.2842-validation_tol\n valid_upper_cl_0p68 = 0.2842+validation_tol\n valid_lower_cl_0p80 = 0.2736-validation_tol\n valid_upper_cl_0p80 = 0.2736+validation_tol\n valid_lower_cl_0p89 = 0.2989-validation_tol\n valid_upper_cl_0p89 = 0.2989+validation_tol\n valid_lower_cl_0p96 = 0.3175-validation_tol\n valid_upper_cl_0p96 = 0.3175+validation_tol\n print 'VALIDATING CARADONNA TUNG CASE'\n \nif (regression):\n print 'REGRESSION CARADONNA TUNG CASE'", "Initialise Environment", "%pylab inline\nfrom paraview.simple import *\nparaview.simple._DisableFirstRenderCameraReset()\nimport pylab as pl", "Data Connection\nThis starts paraview server on remote host and connects", "from zutil.post import pvserver_connect\nif remote_data:\n pvserver_connect(data_host=data_host,data_dir=data_dir,paraview_cmd=paraview_cmd)", "Get control dictionary", "from zutil.post import get_case_parameters,print_html_parameters\nparameters=get_case_parameters(case_name,data_host=data_host,data_dir=data_dir)\n# print parameters", "Get status file", "from zutil.post import get_status_dict\nstatus=get_status_dict(case_name,data_host=data_host,data_dir=data_dir)\nnum_procs = str(status['num processor'])", "Define test conditions", "from IPython.display import HTML\nHTML(print_html_parameters(parameters))\naspect_ratio = 6.0\nPitch = 8.0\n\nfrom zutil.post import for_each\nfrom zutil import rotate_vector\nfrom zutil.post import get_csv_data\n\ndef plot_cp_profile(ax,file_root,span_loc,ax2):\n \n wall = PVDReader( FileName=file_root+'_wall.pvd' )\n wall.UpdatePipeline()\n \n point_data = CellDatatoPointData(Input=wall)\n point_data.PassCellData = 0\n point_data.UpdatePipeline()\n\n merged = MergeBlocks(Input=point_data)\n merged.UpdatePipeline()\n \n wall_slice = Slice(Input=merged, SliceType=\"Plane\" )\n wall_slice.SliceType.Normal = [0.0,1.0,0.0]\n wall_slice.SliceType.Origin = [0, span_loc*aspect_ratio, 0]\n wall_slice.UpdatePipeline()\n \n sorted_line = 
PlotOnSortedLines(Input=wall_slice)\n sorted_line.UpdatePipeline()\n\n slice_client = servermanager.Fetch(sorted_line)\n for_each(slice_client,func=plot_array,axis=ax,span_loc=span_loc,axis2=ax2)\n\ndef plot_array(data_array,pts_array,**kwargs):\n ax = kwargs['axis']\n span_loc = kwargs['span_loc']\n ax2 = kwargs['axis2']\n data = []\n pos = []\n pos_y = []\n count = 0\n cp_array = data_array.GetPointData()['cp']\n for p in pts_array.GetPoints()[:,0]:\n cp = float(cp_array[count])\n # transform to local Cp\n cp = cp/(span_loc)**2\n data.append(cp)\n pt_x = pts_array.GetPoints()[count,0]\n pt_z = pts_array.GetPoints()[count,2]\n # rotate by -8 deg\n pt_rot = rotate_vector([pt_x,0.0,pt_z],-8.0,0.0)\n pt = pt_rot[0] + 0.25\n pos.append(pt)\n pos_y.append(pt_rot[2])\n count+=1 \n ax.plot(pos, data , color='g',linestyle='-',marker='None',label='zCFD')\n ax2.plot(pos, pos_y , color='grey',linestyle='-',marker='None',label='profile')\n \ndef plot_experiment(ax, filename):\n header = True\n remote = False\n # Note - this returns a pandas dataframe object\n df = get_csv_data(filename,True,False)\n x = []\n y = []\n for ind in range(0,len(df.index)-1):\n x.append(df[list(df.columns.values)[0]][ind])\n y.append(-df[list(df.columns.values)[1]][ind]) \n ax.scatter(x, y, color='grey', label='Experiment') \n", "Cp Profile", "from zutil.post import get_case_root, cp_profile_wall_from_file_span\nfrom zutil.post import ProgressBar\nfrom collections import OrderedDict\n\nfactor = 0.0\npbar = ProgressBar()\n\nplot_list = OrderedDict([(0.50,{'exp_data_file': 'data/cp-0p50.txt', 'cp_axis':[0.0,1.0,1.2,-1.0]}),\n (0.68,{'exp_data_file': 'data/cp-0p68.txt', 'cp_axis':[0.0,1.0,1.2,-1.5]}),\n (0.80,{'exp_data_file': 'data/cp-0p80.txt', 'cp_axis':[0.0,1.0,1.2,-1.5]}),\n (0.89,{'exp_data_file': 'data/cp-0p89.txt', 'cp_axis':[0.0,1.0,1.2,-1.5]}),\n (0.96,{'exp_data_file': 'data/cp-0p96.txt', 'cp_axis':[0.0,1.0,1.2,-1.5]})])\n\nfig = pl.figure(figsize=(25, 30),dpi=100, facecolor='w', edgecolor='k')\nfig.suptitle('Caradonna Tung Hover Rotor (' + r'$\\mathbf{M_{TIP}}$' + ' = 0.877)', \n fontsize=28, fontweight='normal', color = '#5D5858')\n\npnum=1\ncl = {}\nfor plot in plot_list:\n pbar+=5\n span_loc = plot + factor\n ax = fig.add_subplot(3,2,pnum)\n ax.set_title('$\\mathbf{C_P}$' + ' at ' + '$\\mathbf{r/R}$' + ' = ' + str(span_loc) + '\\n', \n fontsize=24, fontweight='normal', color = '#E48B25')\n ax.grid(True)\n ax.set_xlabel('$\\mathbf{x/c}$', fontsize=24, fontweight='bold', color = '#5D5858')\n ax.set_ylabel('$\\mathbf{C_p}$', fontsize=24, fontweight='bold', color = '#5D5858')\n ax.axis(plot_list[plot]['cp_axis'])\n ax2 = ax.twinx()\n ax2.set_ylabel('$\\mathbf{z/c}$', fontsize=24, fontweight='bold', color = '#5D5858')\n ax2.axis([0,1,-0.5,0.5])\n plot_cp_profile(ax,get_case_root(case_name,num_procs),span_loc,ax2)\n \n normal = [0.0, 1.0, 0.0]\n origin = [0.0, span_loc*aspect_ratio, 0.0]\n # Check this - alpha passed via kwargs to post.py\n # THESE NUMBERS ARE COMPLETELY WRONG - CHECK\n forces = cp_profile_wall_from_file_span(get_case_root(case_name,num_procs), normal, origin, alpha=Pitch)\n cd = forces['friction force'][0] + forces['pressure force'][0]\n cs = forces['friction force'][1] + forces['pressure force'][1]\n cl[plot] = forces['friction force'][2] + forces['pressure force'][2]\n print cd, cs, cl[plot]\n \n plot_experiment(ax,plot_list[plot]['exp_data_file'])\n ax.legend(loc='upper right', shadow=True)\n \n legend = ax.legend(loc='best', scatterpoints=1, numpoints=1, shadow=False, fontsize=16)\n 
legend.get_frame().set_facecolor('white')\n ax.tick_params(axis='x', pad=16)\n \n for tick in ax.xaxis.get_major_ticks():\n tick.label.set_fontsize(18) \n tick.label.set_fontweight('normal') \n tick.label.set_color('#E48B25')\n for tick in ax.yaxis.get_major_ticks():\n tick.label.set_fontsize(18)\n tick.label.set_fontweight('normal') \n tick.label.set_color('#E48B25')\n for tick in ax2.yaxis.get_major_ticks():\n tick.label2.set_fontsize(18)\n tick.label2.set_fontweight('normal') \n tick.label2.set_color('#E48B25') \n \n pnum=pnum+1\n\nfig.subplots_adjust(hspace=0.3) \nfig.subplots_adjust(wspace=0.4) \nfig.savefig(\"images/Caradonna_Tung_CP_profile.png\")\npbar.complete()\nshow()\nfrom IPython.display import FileLink, display \ndisplay(FileLink('images/Caradonna_Tung_CP_profile.png'))", "Convergence", "from zutil.post import residual_plot, get_case_report\nresidual_plot(get_case_report(case_name))\nshow()\n\nif make_movie:\n from zutil.post import get_case_root\n from zutil.post import ProgressBar\n pb = ProgressBar()\n vtu = PVDReader( FileName=[get_case_root(case_name,num_procs)+'.pvd'] )\n vtu.UpdatePipeline()\n pb += 20\n merged = CleantoGrid(Input=vtu)\n merged.UpdatePipeline()\n pb += 20\n point_data = CellDatatoPointData(Input=merged)\n point_data.PassCellData = 0\n point_data.PieceInvariant = 1\n point_data.UpdatePipeline()\n pb.complete()\n\nif make_movie:\n# from paraview.vtk.dataset_adapter import DataSet\n from vtk.numpy_interface.dataset_adapter import DataSet\n stream = StreamTracer(Input=point_data)\n stream.SeedType = \"Point Source\"\n stream.SeedType.Center = [49673.0, 58826.0, 1120.0]\n stream.SeedType.Radius = 1\n stream.SeedType.NumberOfPoints = 1\n stream.Vectors = ['POINTS', 'V']\n stream.MaximumStreamlineLength = 135800.00000000035\n # IntegrationDirection can be FORWARD, BACKWARD, or BOTH\n stream.IntegrationDirection = 'BACKWARD'\n stream.UpdatePipeline()\n stream_client = servermanager.Fetch(stream)\n upstream_data = DataSet(stream_client)\n stream.IntegrationDirection = 'FORWARD'\n stream.UpdatePipeline()\n stream_client = servermanager.Fetch(stream)\n downstream_data = DataSet(stream_client)\n\nif make_movie:\n def vtk_show(renderer, w=100, h=100):\n \"\"\"\n Takes vtkRenderer instance and returns an IPython Image with the rendering.\n \"\"\"\n from vtk import vtkRenderWindow,vtkWindowToImageFilter,vtkPNGWriter\n \n renderWindow = vtkRenderWindow()\n renderWindow.SetOffScreenRendering(1)\n renderWindow.AddRenderer(renderer)\n renderWindow.SetSize(w, h)\n renderWindow.Render()\n \n windowToImageFilter = vtkWindowToImageFilter()\n windowToImageFilter.SetInput(renderWindow)\n windowToImageFilter.Update()\n \n writer = vtkPNGWriter()\n writer.SetWriteToMemory(1)\n writer.SetInputConnection(windowToImageFilter.GetOutputPort())\n writer.Write()\n data = str(buffer(writer.GetResult()))\n \n from IPython.display import Image\n return Image(data)\n\n\n\nif make_movie:\n #print stream_data.GetPoint(0)\n from zutil.post import ProgressBar\n pb = ProgressBar()\n\n wall = PVDReader( FileName=[get_case_root(case_name,num_procs)+'_wall.pvd'] )\n wall.UpdatePipeline()\n\n merged = CleantoGrid(Input=wall)\n merged.UpdatePipeline()\n\n point_data = CellDatatoPointData(Input=merged)\n point_data.PassCellData = 0\n point_data.PieceInvariant = 1\n point_data.UpdatePipeline()\n\n total_pts = 100# stream_data.GetNumberOfPoints()\n\n scene = GetAnimationScene()\n scene.EndTime = total_pts\n scene.PlayMode = 'Snap To TimeSteps'\n scene.AnimationTime = 0\n\n a1_yplus_PVLookupTable = 
GetLookupTableForArray( \"yplus\", 1, RGBPoints=[96.69050598144531, 0.23, 0.299, 0.754, 24391.206581115723, 0.865, 0.865, 0.865, 48685.72265625, 0.706, 0.016, 0.15], VectorMode='Magnitude', NanColor=[0.25, 0.0, 0.0], ColorSpace='Diverging', ScalarRangeInitialized=1.0 )\n a1_yplus_PiecewiseFunction = CreatePiecewiseFunction( Points=[96.69050598144531, 0.0, 0.5, 0.0, 48685.72265625, 1.0, 0.5, 0.0] )\n\n drepr = Show() # GetDisplayProperties( Contour1 )\n drepr.EdgeColor = [0.0, 0.0, 0.5000076295109483]\n drepr.SelectionPointFieldDataArrayName = 'yplus'\n #DataRepresentation4.SelectionCellFieldDataArrayName = 'eddy'\n drepr.ColorArrayName = ('POINT_DATA', 'yplus')\n drepr.LookupTable = a1_yplus_PVLookupTable\n drepr.ScaleFactor = 0.08385616838932038\n drepr.Interpolation = 'Flat'\n drepr.ScalarOpacityFunction = a1_yplus_PiecewiseFunction\n\n view = GetRenderView()\n if not view:\n # When using the ParaView UI, the View will be present, not otherwise.\n view = CreateRenderView()\n\n scene.ViewModules = [view]\n\n view.CameraViewUp = [0.0, 0.0, 1.0]\n view.CameraPosition = list(upstream_data.GetPoint(0))\n view.CameraFocalPoint = list(upstream_data.GetPoint(1))\n view.CameraParallelScale = 0.499418869125992\n view.CenterOfRotation = [49673.0, 58826.0, 1120.0]\n view.CenterAxesVisibility = 0\n view.ViewSize = [3840,2160]\n view.LightSwitch=0\n view.UseLight = 1\n #RenderView2.SetOffScreenRendering(1)\n #Render()\n\n pb+=20\n\n camera = view.GetActiveCamera()\n key_frames = []\n for p in range(total_pts):\n pt = stream_data.GetPoint(p)\n #print pt\n frame = CameraKeyFrame()\n frame.Position = list(pt)\n frame.ViewUp = [0.0, 0.0, 1.0]\n frame.FocalPoint = camera.GetFocalPoint()\n frame.KeyTime = p/total_pts\n key_frames.append(frame)\n\n pb+=20\n \n cue = GetCameraTrack()\n cue.Mode = 'Interpolate Camera'\n cue.AnimatedProxy = view\n cue.KeyFrames = key_frames\n\n TimeAnimationCue4 = GetTimeTrack()\n\n scene.Cues = [cue]\n\n for t in range(total_pts-1):\n print 'Generating: ' + str(t)\n pt = stream_data.GetPoint(t)\n view.CameraPosition = list(pt)\n view.CameraFocalPoint = list(stream_data.GetPoint(t+1))\n #vtk_show(view.GetRenderer())\n Render()\n #scene.AnimationTime = t\n WriteImage('movies/caradonna_'+str(t)+'.png')\n \n pb.complete()", "Check validation and regression¶", "if (validate):\n def validate_data(name, value, valid_lower, valid_upper):\n if ((value < valid_lower) or (value > valid_upper)):\n print 'INVALID: ' + name + ' %.4f '%valid_lower + '%.4f '%value + ' %.4f'%valid_upper\n return False\n else:\n return True \n \n valid = validate_data('C_L[0.50]', cl[0.50], valid_lower_cl_0p50, valid_upper_cl_0p50) and valid\n valid = validate_data('C_L[0.68]', cl[0.68], valid_lower_cl_0p68, valid_upper_cl_0p68) and valid\n valid = validate_data('C_L[0.80]', cl[0.80], valid_lower_cl_0p80, valid_upper_cl_0p80) and valid\n valid = validate_data('C_L[0.89]', cl[0.89], valid_lower_cl_0p89, valid_upper_cl_0p89) and valid\n valid = validate_data('C_L[0.96]', cl[0.96], valid_lower_cl_0p96, valid_upper_cl_0p96) and valid\n \n if (valid):\n print 'VALIDATION = PASS :-)'\n else:\n print 'VALIDATION = FAIL :-(' \n\nif (regression):\n import pandas as pd\n pd.options.display.float_format = '{:,.6f}'.format\n print 'REGRESSION DATA'\n regress = {'version' : ['v0.0', 'v0.1' , 'CURRENT'], \n 'C_L[0.50]' : [2.217000, 2.217000, cl[0.50]], \n 'C_L[0.68]' : [0.497464, 0.498132, cl[0.68]],\n 'C_L[0.80]' : [0.024460, 0.024495, cl[0.80]],\n 'C_L[0.89]' : [0.014094, 0.014099, cl[0.89]],\n 'C_L[0.96]' : [0.010366, 
0.010396, cl[0.96]]}\n regression_table = pd.DataFrame(regress, columns=['version','C_L[0.50]','C_L[0.68]',\n 'C_L[0.80]','C_L[0.89]','C_L[0.96]'])\n print regression_table", "Cleaning up", "if remote_data:\n #print 'Disconnecting from remote paraview server connection'\n Disconnect()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
esa-as/2016-ml-contest
LiamLearn/K-fold_CV_F1_score__MATT.ipynb
apache-2.0
[ "'Grouped' k-fold CV\nA quick demo by Matt\nIn cross-validating, we'd like to drop out one well at a time. LeaveOneGroupOut is good for this:", "import pandas as pd\ntraining_data = pd.read_csv('../training_data.csv')", "Isolate X and y:", "X = training_data.drop(['Formation', 'Well Name', 'Depth','Facies'], axis=1).values\ny = training_data['Facies'].values", "We want the well names to use as groups in the k-fold analysis, so we'll get those too:", "wells = training_data[\"Well Name\"].values", "Now we train as normal, but LeaveOneGroupOut gives us the appropriate indices from X and y to test against one well at a time:", "from sklearn.svm import SVC\nfrom sklearn.model_selection import LeaveOneGroupOut\n\nlogo = LeaveOneGroupOut()\n\nfor train, test in logo.split(X, y, groups=wells):\n well_name = wells[test[0]]\n score = SVC().fit(X[train], y[train]).score(X[test], y[test])\n print(\"{:>20s} {:.3f}\".format(well_name, score))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
taku-y/bmlingam
doc/notebook/expr/20160915/20160902-eval-bml.ipynb
mit
[ "因果の効果が小さい場合のベイズファクター\nこのノートブックでは, 回帰係数が一様分布U(-1.5, 1.5)から生成される場合に, ベイズファクターによって因果の効果が小さい場合を判断できるかどうかを確認します.", "%matplotlib inline\n%autosave 0\nimport sys, os\nsys.path.insert(0, os.path.expanduser('~/work/tmp/20160915-bmlingam/bmlingam'))\n\nfrom copy import deepcopy\nimport hashlib\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport time\n\nfrom bmlingam import do_mcmc_bmlingam, InferParams, MCMCParams, save_pklz, load_pklz, define_hparam_searchspace, find_best_model\nfrom bmlingam.utils.gendata import GenDataParams, gen_artificial_data", "実験条件\n実験条件は以下の通りです. \n\n\n人工データパラメータ\n\nサンプル数 (n_samples): [100]\n総ノイズスケール: $c=0.5, 1.0$. \n交絡因子のスケール: $c/\\sqrt{Q}$\nデータ観測ノイズ分布 (data_noise_type): ['laplace', 'uniform']\n交絡因子数 (n_confs or $Q$): [10]\n観測データノイズスケール: 3に固定\n回帰係数の分布 uniform(-1.5, 1.5)\n\n\n\n推定ハイパーパラメータ\n\n交絡因子相関係数 (L_cov_21s): [[-.9, -.7, -.5, -.3, 0, .3, .5, .7, .9]]\nモデル観測ノイズ分布 (model_noise_type): ['gg']", "conds = [\n {\n 'totalnoise': totalnoise, \n 'L_cov_21s': L_cov_21s, \n 'n_samples': n_samples, \n 'n_confs': n_confs, \n 'data_noise_type': data_noise_type, \n 'model_noise_type': model_noise_type, \n 'b21_dist': b21_dist\n }\n for totalnoise in [0.5, 1.0]\n for L_cov_21s in [[-.9, -.7, -.5, -.3, 0, .3, .5, .7, .9]]\n for n_samples in [100]\n for n_confs in [10] # [1, 3, 5, 10]\n for data_noise_type in ['laplace', 'uniform']\n for model_noise_type in ['gg']\n for b21_dist in ['uniform(-1.5, 1.5)']\n ]\n\nprint('{} conditions'.format(len(conds)))", "人工データの生成\n実験条件に基づいて人工データを生成する関数を定義します.", "def gen_artificial_data_given_cond(ix_trial, cond):\n # 実験条件に基づく人工データ生成パラメータの設定\n n_confs = cond['n_confs']\n gen_data_params = deepcopy(gen_data_params_default)\n gen_data_params.n_samples = cond['n_samples']\n gen_data_params.conf_dist = [['all'] for _ in range(n_confs)]\n gen_data_params.e1_dist = [cond['data_noise_type']]\n gen_data_params.e2_dist = [cond['data_noise_type']]\n gen_data_params.b21_dist = cond['b21_dist']\n\n noise_scale = cond['totalnoise'] / np.sqrt(n_confs)\n gen_data_params.f1_coef = [noise_scale for _ in range(n_confs)]\n gen_data_params.f2_coef = [noise_scale for _ in range(n_confs)]\n\n # 人工データ生成\n gen_data_params.seed = ix_trial\n data = gen_artificial_data(gen_data_params)\n \n return data\n\n# 人工データ生成パラメータの基準値\ngen_data_params_default = GenDataParams(\n n_samples=100, \n b21_dist='r2intervals', \n mu1_dist='randn', \n mu2_dist='randn', \n f1_scale=1.0, \n f2_scale=1.0, \n f1_coef=['r2intervals', 'r2intervals', 'r2intervals'], \n f2_coef=['r2intervals', 'r2intervals', 'r2intervals'], \n conf_dist=[['all'], ['all'], ['all']], \n e1_std=3.0, \n e2_std=3.0, \n e1_dist=['laplace'], \n e2_dist=['laplace'],\n seed=0\n)", "実行例です.", "data = gen_artificial_data_given_cond(0, conds[0])\nxs = data['xs']\nplt.figure(figsize=(3, 3))\nplt.scatter(xs[:, 0], xs[:, 1])\n\ndata = gen_artificial_data_given_cond(0, \n {\n 'totalnoise': 3 * np.sqrt(1), \n 'n_samples': 10000, \n 'n_confs': 1, \n 'data_noise_type': 'laplace',\n 'b21_dist': 'uniform(-1.5, 1.5)'\n }\n)\nxs = data['xs']\nplt.figure(figsize=(3, 3))\nplt.scatter(xs[:, 0], xs[:, 1])", "トライアルの定義\nトライアルとは, 生成された1つの人工データに対する因果推論と精度評価の処理を指します. 
\n一つの実験条件に対し, トライアルを100回実行します.", "n_trials_per_cond = 100", "実験条件パラメータの基準値", "# 因果推論パラメータ\ninfer_params = InferParams(\n seed=0, \n standardize=True, \n subtract_mu_reg=False, \n fix_mu_zero=True, \n prior_var_mu='auto', \n prior_scale='uniform', \n max_c=1.0, \n n_mc_samples=10000, \n dist_noise='laplace', \n df_indvdl=8.0, \n prior_indvdls=['t'], \n cs=[0.4, 0.6, 0.8],\n scale_coeff=2. / 3., \n L_cov_21s=[-0.8, -0.6, -0.4, 0.4, 0.6, 0.8], \n betas_indvdl=None, # [0.25, 0.5, 0.75, 1.], \n betas_noise=[0.25, 0.5, 1.0, 3.0], \n causalities=[[1, 2], [2, 1]], \n sampling_mode='cache_mp4'\n)\n\n# 回帰係数推定パラメータ\nmcmc_params = MCMCParams(\n n_burn=1000, \n n_mcmc_samples=1000\n)", "プログラム\nトライアル識別子の生成\n以下の情報からトライアルに対する識別子を生成します. \n\nトライアルインデックス (ix_trial)\nサンプル数 (n_samples)\n交絡因子数 (n_confs)\n人工データ観測ノイズの種類 (data_noise_type)\n予測モデル観測ノイズの種類 (model_noise_type)\n交絡因子相関係数 (L_cov_21s)\n総ノイズスケール (totalnoise)\n回帰係数分布 (b21_dist)\n\nトライアル識別子は推定結果をデータフレームに格納するときに使用されます.", "def make_id(ix_trial, n_samples, n_confs, data_noise_type, model_noise_type, L_cov_21s, totalnoise, b21_dist):\n L_cov_21s_ = ' '.join([str(v) for v in L_cov_21s])\n \n return hashlib.md5(\n str((L_cov_21s_, ix_trial, n_samples, n_confs, data_noise_type, model_noise_type, totalnoise, b21_dist.replace(' ', ''))).encode('utf-8')\n ).hexdigest()\n\n# テスト\nprint(make_id(55, 100, 12, 'all', 'gg', [1, 2, 3], 0.3, 'uniform(-1.5, 1.5)'))\nprint(make_id(55, 100, 12, 'all', 'gg', [1, 2, 3], 0.3, 'uniform(-1.5, 1.5)')) # 空白を無視します", "トライアル結果のデータフレームへの追加\n\nトライアル結果をデータフレームに追加します. \n引数dfがNoneの場合, 新たにデータフレームを作成します.", "def add_result_to_df(df, result):\n if df is None:\n return pd.DataFrame({k: [v] for k, v in result.items()})\n else:\n return df.append(result, ignore_index=True)\n\n# テスト\nresult1 = {'col1': 10, 'col2': 20}\nresult2 = {'col1': 30, 'col2': -10}\ndf1 = add_result_to_df(None, result1)\nprint('--- df1 ---')\nprint(df1)\ndf2 = add_result_to_df(df1, result2)\nprint('--- df2 ---')\nprint(df2)", "データフレーム内のトライアル識別子の確認\n\n計算済みの結果に対して再計算しないために使用します.", "def df_exist_result_id(df, result_id):\n if df is not None:\n return result_id in np.array(df['result_id'])\n else:\n False", "データフレームの取得\n\nデータフレームをセーブ・ロードする関数を定義します. 
\nファイルが存在しなければNoneを返します.", "def load_df(df_file):\n if os.path.exists(df_file):\n return load_pklz(df_file)\n else:\n return None\n\ndef save_df(df_file, df):\n save_pklz(df_file, df)", "トライアル実行\nトライアルインデックスと実験条件を引数としてトライアルを実行し, 推定結果を返します.", "def _estimate_hparams(xs, infer_params):\n assert(type(infer_params) == InferParams)\n\n sampling_mode = infer_params.sampling_mode\n hparamss = define_hparam_searchspace(infer_params)\n results = find_best_model(xs, hparamss, sampling_mode)\n hparams_best = results[0]\n bf = results[2] - results[5] # Bayes factor\n \n return hparams_best, bf\n\ndef run_trial(ix_trial, cond):\n # 人工データ生成\n data = gen_artificial_data_given_cond(ix_trial, cond)\n b_true = data['b']\n causality_true = data['causality_true']\n \n # 因果推論\n t = time.time()\n infer_params.L_cov_21s = cond['L_cov_21s']\n infer_params.dist_noise = cond['model_noise_type']\n hparams, bf = _estimate_hparams(data['xs'], infer_params)\n causality_est = hparams['causality']\n time_causal_inference = time.time() - t\n\n # 回帰係数推定\n t = time.time()\n trace = do_mcmc_bmlingam(data['xs'], hparams, mcmc_params)\n b_post = np.mean(trace['b'])\n time_posterior_inference = time.time() - t\n \n return {\n 'causality_true': causality_true, \n 'regcoef_true': b_true, \n 'n_samples': cond['n_samples'], \n 'n_confs': cond['n_confs'], \n 'data_noise_type': cond['data_noise_type'], \n 'model_noise_type': cond['model_noise_type'], \n 'causality_est': causality_est,\n 'correct_rate': (1.0 if causality_est == causality_true else 0.0), \n 'error_reg_coef': np.abs(b_post - b_true), \n 'regcoef_est': b_post, \n 'log_bf': 2 * bf, # 2log(p(M) / p(M_rev))なので常に正の値となります. \n 'time_causal_inference': time_causal_inference, \n 'time_posterior_inference': time_posterior_inference, \n 'L_cov_21s': str(cond['L_cov_21s']), \n 'n_mc_samples': infer_params.n_mc_samples, \n 'confs_absmean': np.mean(np.abs(data['confs'].ravel())), \n 'totalnoise': cond['totalnoise']\n }", "メインプログラム", "def run_expr(conds):\n # データフレームファイル名\n data_dir = '.'\n df_file = data_dir + '/20160902-eval-bml-results.pklz'\n \n # ファイルが存在すれば以前の続きから実行します. \n df = load_df(df_file)\n\n # 実験条件に渡るループ\n n_skip = 0\n \n for cond in conds:\n print(cond)\n \n # トライアルに渡るループ\n for ix_trial in range(n_trials_per_cond):\n # 識別子\n result_id = make_id(ix_trial, **cond)\n \n # データフレームに結果が保存済みかどうかチェックします. \n if df_exist_result_id(df, result_id):\n n_skip += 1\n else:\n # resultはトライアルの結果が含まれるdictです. \n # トライアルインデックスix_trialは乱数シードとして使用されます. \n result = run_trial(ix_trial, cond)\n result.update({'result_id': result_id})\n \n df = add_result_to_df(df, result)\n save_df(df_file, df)\n \n print('Number of skipped trials = {}'.format(n_skip))\n return df", "メインプログラムの実行", "df = run_expr(conds)", "結果の確認", "import pandas as pd\n\n# データフレームファイル名\ndata_dir = '.'\ndf_file = data_dir + '/20160902-eval-bml-results.pklz'\ndf = load_pklz(df_file)\n\nsg = df.groupby(['model_noise_type', 'data_noise_type', 'n_confs', 'totalnoise'])\nsg1 = sg['correct_rate'].mean()\nsg2 = sg['correct_rate'].count()\nsg3 = sg['time_causal_inference'].mean()\n\npd.concat(\n {\n 'correct_rate': sg1, \n 'count': sg2, \n 'time': sg3, \n }, axis=1\n)", "回帰係数の大きさとBayesFactor\n$2\\log(BF)$を横軸, $|b_{21}|$(または$|b_{12}|$)を縦軸に取りプロットしました. $2\\log(BF)$が10以上だと真の回帰係数(の絶対値)が大きく因果効果があると言えるのですが, $2\\log(BF)$がそれ以下だと, 回帰係数が大きい場合も小さい場合もあり, BFで因果効果の有無を判断するのは難しいと言えそうです. 
因果効果があるモデルと無いモデルとの比較も必要なのでしょう.", "data = np.array(df[['regcoef_true', 'log_bf', 'totalnoise', 'correct_rate']])\nixs1 = np.where(data[:, 3] == 1.0)[0]\nixs2 = np.where(data[:, 3] == 0.0)[0]\nplt.scatter(data[ixs1, 1], np.abs(data[ixs1, 0]), marker='o', s=20, c='r', label='Success')\nplt.scatter(data[ixs2, 1], np.abs(data[ixs2, 0]), marker='*', s=70, c='b', label='Failure')\nplt.ylabel('|b|')\nplt.xlabel('2 * log(bayes_factor)')\nplt.legend(fontsize=15, loc=4, shadow=True, frameon=True, framealpha=1.0)", "回帰系数の予測精度\n人工データの回帰係数をU(-1.5, 1.5)で生成した実験で, 回帰系数の真値を横軸, 事後分布平均を縦軸に取りプロットしました. 真値が小さい場合は因果方向予測の正解(赤)と不正解(青)に関わらず事後分布平均が小さくなっています. 一方, 正解の場合には回帰係数が小さく, 不正解の場合には回帰係数が小さく推定される傾向があるようです.", "data = np.array(df[['regcoef_true', 'regcoef_est', 'correct_rate']])\nixs1 = np.where(data[:, 2] == 1)[0]\nixs2 = np.where(data[:, 2] == 0)[0]\nassert(len(ixs1) + len(ixs2) == len(data))\n\nplt.figure(figsize=(5, 5))\nplt.scatter(data[ixs1, 0], data[ixs1, 1], c='r', label='Correct')\nplt.scatter(data[ixs2, 0], data[ixs2, 1], c='b', label='Incorrect')\nplt.plot([-3, 3], [-3, 3])\nplt.xlim(-3, 3)\nplt.ylim(-3, 3)\nplt.gca().set_aspect('equal')\nplt.xlabel('Reg coef (true)')\nplt.ylabel('Reg coef (posterior mean)')", "EPSで出力", "data = np.array(df[['regcoef_true', 'regcoef_est', 'correct_rate']])\nixs1 = np.where(data[:, 2] == 1)[0]\nixs2 = np.where(data[:, 2] == 0)[0]\nassert(len(ixs1) + len(ixs2) == len(data))\n\nplt.figure(figsize=(5, 5))\nplt.scatter(data[ixs1, 0], data[ixs1, 1], c='r', label='Correct')\nplt.plot([-3, 3], [-3, 3])\nplt.xlim(-3, 3)\nplt.ylim(-3, 3)\nplt.gca().set_aspect('equal')\nplt.xlabel('Reg coef (true)')\nplt.ylabel('Reg coef (posterior mean)')\nplt.title('Correct inference')\nplt.savefig('20160905-eval-bml-correct.eps')\n\nplt.figure(figsize=(5, 5))\nplt.scatter(data[ixs2, 0], data[ixs2, 1], c='b', label='Incorrect')\nplt.plot([-3, 3], [-3, 3])\nplt.xlim(-3, 3)\nplt.ylim(-3, 3)\nplt.gca().set_aspect('equal')\nplt.xlabel('Reg coef (true)')\nplt.ylabel('Reg coef (posterior mean)')\nplt.title('Incorrect inference')\nplt.savefig('20160905-eval-bml-incorrect.eps')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AlexandruValeanu/365-days-of-algorithms
Day 11 - Closest pair of points.ipynb
gpl-3.0
[ "Definition(s)\nThe closest pair of points problem or closest pair problem is a problem of computational geometry: given n points in metric space, find a pair of points with the smallest distance between them. \nThe closest pair problem for points in the Euclidean plane was among the first geometric problems that were treated at the origins of the systematic study of the computational complexity of geometric algorithms.\nAlgorithm(s)", "import numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom operator import itemgetter\n\n%matplotlib inline\n\ndef euclid_distance(p, q):\n return np.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)\n\ndef search(points, st, dr):\n if st >= dr:\n return np.inf, None, None\n elif st + 1 == dr:\n # sort on y\n if points[st][1] > points[dr][1]:\n points[st], points[dr] = points[dr], points[st]\n return euclid_distance(points[st], points[dr]), points[st], points[dr]\n\n m = (st + dr) // 2\n median_x = points[m][0]\n\n dleft, pleft, qleft = search(points, st, m)\n dright, pright, qright = search(points, m + 1, dr)\n\n if dleft < dright:\n d, p, q = dleft, pleft, qleft\n else:\n d, p, q = dright, pright, qright\n\n # merge the two halves on y\n aux = []\n i, j = st, m + 1\n\n while i <= m and j <= dr:\n if points[i][1] <= points[j][1]:\n aux.append(points[i])\n i += 1\n else:\n aux.append(points[j])\n j += 1\n\n while i <= m:\n aux.append(points[i])\n i += 1\n\n while j <= dr:\n aux.append(points[j])\n j += 1\n\n # copy back the points\n points[st:dr+1] = aux\n\n # select a set of points to be tested\n good_points = []\n\n for i in range(st, dr + 1):\n if abs(points[i][0] - median_x) <= d:\n good_points.append(points[i])\n \n for i in range(len(good_points)):\n j, cnt = i - 1, 8\n\n # go for at most 8 steps\n while j >= 0 and cnt > 0:\n tmp_d = euclid_distance(aux[i], aux[j])\n if tmp_d < d:\n d, p, q = tmp_d, aux[i], aux[j]\n\n j -= 1\n cnt -= 1\n\n return d, p, q\n\ndef closest_pair(points):\n points.sort(key = itemgetter(0))\n \n return search(points, 0, len(points) - 1)", "Naive implementation of closest_pair", "def naive_closest_pair(points):\n best, p, q = np.inf, None, None\n n = len(points)\n\n for i in range(n):\n for j in range(i + 1, n):\n d = euclid_distance(points[i], points[j])\n\n if d < best:\n best, p, q = d, points[i], points[j]\n\n return best, p, q", "Draw points (with closest-pair marked as red)", "def draw_points(points, p, q):\n xs, ys = zip(*points)\n plt.figure(figsize=(10,10))\n plt.scatter(xs, ys)\n plt.scatter([p[0], q[0]], [p[1], q[1]], s=100, c='red')\n plt.plot([p[0], q[0]], [p[1], q[1]], 'k', c='red')\n plt.show()", "Run(s)", "points = [(26, 77), (12, 37), (14, 18), (19, 96), (71, 95), (91, 9), (98, 43), (66, 77), (2, 75), (94, 91)]\n\nxs, ys = zip(*points)\n\nd, p, q = closest_pair(points)\nassert d == naive_closest_pair(points)[0]\n\nprint(\"The closest pair of points is ({0}, {1}) at distance {2}\".format(p, q, d))\n\ndraw_points(points, p, q)\n\nN = 10\nx = np.random.rand(N) * 100\ny = np.random.rand(N) * 100\n\npoints = list(zip(x, y))\n\nd, p, q = closest_pair(points)\nassert d == naive_closest_pair(points)[0]\n\nprint(\"The closest pair of points is ({0}, {1}) at distance {2}\".format(p, q, d))\n\ndraw_points(points, p, q)\n\nN = 20\nx = np.random.randint(100, size=N)\ny = np.random.randint(100, size=N)\n\npoints = list(zip(x, y))\n\nd, p, q = closest_pair(points)\nassert d == naive_closest_pair(points)[0]\n\nprint(\"The closest pair of points is ({0}, {1}) at distance {2}\".format(p, q, 
d))\n\ndraw_points(points, p, q)\n\nN = 20\nx = np.random.rand(N)\ny = np.random.rand(N)\n\npoints = list(zip(x, y))\n\nd, p, q = closest_pair(points)\nassert d == naive_closest_pair(points)[0]\n\nprint(\"The closest pair of points is ({0}, {1}) at distance {2}\".format(p, q, d))\n\ndraw_points(points, p, q)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cfangmeier/UNL-Gantry-Encapsulation-Monitoring
legacy/Potting Data Analysis.ipynb
mit
[ "Potting Data Analysis\n\nThe data in the Potting log files has data on the following parts\n - HDI wirebond pad groups\n - ROC wirebond pad groups\n - Address Pads on HDI\n - TBM wirebond pads on HDI side\n - TBM wirebond pads on TBM side\n - High-Voltage Pad on HDI\n - High-Voltage Pad on sensor\nThe data consists of 3-D measurements. Each measurement is spread over 3 rows. with the format\nx1\ny1\nz1\nx2\ny2\nz2\n...\n...\n\nIn the first line of data for each module there is a time stamp which indicates when the data taking was started and which mode was used (\"HDIV3 + module\" means real parts mode). \nThe HDI has 16 pads groups (each group has 35 pads). Since the sylgard line must be delivered between two points\n(start-stop) the first 6 numbers correspond to the first pads group start position and stop position\n\n241.926502 25/1/2016-18:52:49HDIV3 + module # | | pad1-U1-x \n213.485689 # | start | pad1-U1-y \n61.871853 # | | pad1-U1-z \n # | -\n241.942338 # | | pad35-U1-x \n219.910052 # | stop | pad35-U1-y \n61.874242 # | | pad35-U1-x\n\n\nThe breakdown of numbers to parts is therefore\n\n96 lines(32 positions) for the 16 HDI pads groups. \n12 lines(4 positions) for the address pads\n24 lines(8 positions) for the HDI-TBM pads (pads where the TBM is wired to the HDI)\n12 lines(4 positions) for the TBM pads\n3 lines(1 position) for the HV in the HDI side\n96 lines(32 positions) for the 16 ROC pads groups. \n3 lines(1 position) for the HV in the sensor side", "#setup matplotlib to have live plots in notebook and load datafile into memory\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.dates as mdates\nfrom datetime import datetime\nfrom itertools import chain\n\n%matplotlib notebook\nplt.style.use('fivethirtyeight')\nwith open('pottingData.lvm', 'r') as f:\n lines = f.readlines() ", "To parse the data file, just read 3 lines at a time to build individual vectors. 
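One compact way to express the read-three-lines-at-a-time idea is the grouping idiom sketched below (shown purely as an illustration; it ignores the trailing comments and labels that the real parser handles, and the full parser used in this notebook follows in the next cell):\n\n```python\n# Sketch: group a flat list of lines into consecutive triples\ntriples = list(zip(*[iter(lines)] * 3))\ntriples[:2]\n```\n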
Each module's data has a known number of measurements(82) so the list of vectors can be split into groups and assembled into Module objects.", "from collections import namedtuple\nfrom statistics import mean,stdev\nVector = namedtuple('Vector', ['x', 'y', 'z', 'label'])\ndef parse_vectors(lines):\n vecs = []\n lines_iter = iter(lines)\n label = \"\"\n def tokenize(l):\n nonlocal label\n l = l.split('#')[-1]\n toks = [t for t in l.split('\\t') if t]\n if len(toks) > 1:\n label = toks[1].strip()\n return toks[0]\n while lines_iter:\n try:\n x = float(tokenize(next(lines_iter)))\n y = float(tokenize(next(lines_iter)))\n z = float(tokenize(next(lines_iter)))\n vecs.append(Vector(x,y,z,label))\n except IndexError:\n pass\n except StopIteration:\n break\n return vecs\n \nvecs = parse_vectors(lines)\n\nclass Module:\n n = 82\n def __init__(self, vecs):\n self.hdi_bond_pads = vecs[0:32] # 32 measurements\n self.address_pads = vecs[32:36] # 4 measurements\n self.tbm_on_hdi = vecs[36:44] # 8 measurements\n self.tbm_on_tbm = vecs[44:48] # 4 measurements\n self.hdi_hv_pad = vecs[48]\n self.roc_bond_pads = vecs[49:81] # 32 measurements\n self.roc_hv_pad = vecs[81]\n\ndef parse_modules(vectors):\n n = Module.n\n num_modules = len(vectors)//n\n return [Module(vectors[i*n:(i+1)*n]) for i in range(num_modules)]\n \n\nmodules = parse_modules(vecs)", "Now that the potting data has been successfully loaded into an appropriate data structure, some plots can be done.\nFirst, let's look at the location of the potting positions on the TBM, both TBM and HDI side", "tbm_horiz = []\ntbm_verti = []\nhdi_horiz = []\nhdi_verti = []\nplt.figure()\nfor module in modules:\n tbm_horiz.append(module.tbm_on_tbm[1][0]-module.tbm_on_tbm[0][0])\n tbm_horiz.append(module.tbm_on_tbm[2][0]-module.tbm_on_tbm[3][0])\n tbm_verti.append(module.tbm_on_tbm[3][1]-module.tbm_on_tbm[0][1])\n tbm_verti.append(module.tbm_on_tbm[2][1]-module.tbm_on_tbm[1][1])\n \n hdi_horiz.append(module.tbm_on_hdi[1][0]-module.tbm_on_hdi[0][0])\n hdi_horiz.append(module.tbm_on_hdi[4][0]-module.tbm_on_hdi[5][0])\n hdi_verti.append(module.tbm_on_hdi[3][1]-module.tbm_on_hdi[2][1])\n hdi_verti.append(module.tbm_on_hdi[6][1]-module.tbm_on_hdi[7][1])\n \n xs = []\n ys = []\n offset_x, offset_y, *_ = module.hdi_bond_pads[0]\n for i,point in enumerate(module.tbm_on_hdi):\n xs.append(point[0]-offset_x)\n ys.append(point[1]-offset_y)\n for i,point in enumerate(module.tbm_on_tbm):\n xs.append(point[0]-offset_x)\n ys.append(point[1]-offset_y)\n plt.plot(xs,ys,'.')\nplt.xlabel(\"X(mm)\")\nplt.ylabel(\"Y(mm)\")\nprint(\"Mean TBM_TBM X-Trace Length\",mean(tbm_horiz),\"+-\",stdev(tbm_horiz),\"mm\")\nprint(\"Mean TBM_TBM Y-Trace Length\",mean(tbm_verti),\"+-\",stdev(tbm_verti),\"mm\")\nprint(\"Mean TBM_HDI X-Trace Length\",mean(hdi_horiz),\"+-\",stdev(hdi_horiz),\"mm\")\nprint(\"Mean TBM_HDI Y-Trace Length\",mean(hdi_verti),\"+-\",stdev(hdi_verti),\"mm\")", "So now we know what the average and standard deviation of the trace lengths on the TBM are. 
Good.\n\nNow let's examine how flat the modules are overall by looking at the points for the HDI and BBM bond pads in the YZ plane.", "fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(12, 5))\nfor i, module in enumerate(modules):\n    # HDI pads, left side -> top-left panel, plotted relative to the first pad of the group\n    ys = []\n    zs = []\n    _, offset_y, offset_z, *_ = module.hdi_bond_pads[0]\n    for bond_pad in module.hdi_bond_pads[:16]:\n        ys.append(bond_pad[1]-offset_y)\n        zs.append(bond_pad[2]-offset_z)\n    axes[0][0].plot(ys,zs,'.', label=str(i))\n\n    # HDI pads, right side -> top-right panel\n    ys.clear()\n    zs.clear()\n    _, offset_y, offset_z, *_ = module.hdi_bond_pads[16]\n    for bond_pad in module.hdi_bond_pads[16:]:\n        ys.append(bond_pad[1]-offset_y)\n        zs.append(bond_pad[2]-offset_z)\n    axes[0][1].plot(ys,zs,'.', label=str(i))\n\n    # ROC (BBM) pads, left side -> bottom-left panel\n    ys.clear()\n    zs.clear()\n    _, offset_y, offset_z, *_ = module.roc_bond_pads[0]\n    for bond_pad in module.roc_bond_pads[:16]:\n        ys.append(bond_pad[1]-offset_y)\n        zs.append(bond_pad[2]-offset_z)\n    axes[1][0].plot(ys,zs,'.', label=str(i))\n\n    # ROC (BBM) pads, right side -> bottom-right panel\n    ys.clear()\n    zs.clear()\n    _, offset_y, offset_z, *_ = module.roc_bond_pads[16]\n    for bond_pad in module.roc_bond_pads[16:]:\n        ys.append(bond_pad[1]-offset_y)\n        zs.append(bond_pad[2]-offset_z)\n    axes[1][1].plot(ys,zs,'.', label=str(i))\n\n\naxes[0][0].set_ylabel('Z(mm)')\naxes[1][0].set_ylabel('Z(mm)')\naxes[1][0].set_xlabel('Y(mm)')\naxes[1][1].set_xlabel('Y(mm)')\n\naxes[0][0].set_ylim((-.20,.20))\naxes[0][0].set_title(\"HDI Pads left side\")\naxes[0][1].set_ylim((-.20,.20))\naxes[0][1].set_title(\"HDI Pads right side\")\naxes[1][0].set_ylim((-.20,.20))\naxes[1][0].set_title(\"BBM Pads left side\")\naxes[1][1].set_ylim((-.20,.20))\naxes[1][1].set_title(\"BBM Pads right side\")", "HDI/BBM Offset Data\nThere is also data available for the center/rotation of the HDI/BBM. We can try to measure the offsets.\nThe raw data files are missing some rows. This would throw off the parser by introducing a misalignment of the HDI/BBM pairing.
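One way such gaps could be flagged automatically is a simple length check before the pairing. This is only a sketch, and it assumes the layout used in the code below (each module contributes a block of 16 vectors, 8 HDI points followed by 8 BBM points):\n\n```python\ndef looks_aligned(vectors, block=16):\n    # missing rows leave a partial block, so the count stops being a multiple of the block size\n    return len(vectors) % block == 0\n```\n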
I added the missng rows by hand.", "from IPython.display import Markdown, display_markdown\nwith open(\"./orientationData.txt\") as f:\n vecs = parse_vectors(f.readlines())\npairs = []\nNULL = set([0])\nfor i in range(len(vecs)//16):\n for j in range(8):\n hdi = vecs[i*16+j]\n bbm = vecs[i*16+j+8]\n pair = (hdi,bbm)\n if set(hdi[:3]) != NULL and set(bbm[:3]) != NULL:\n pairs.append(pair)\ndeltas = []\nangles = []\nss = [\"| | time stamp | delta ($\\mu$m) | rotation (degrees)|\",\n \"|--:|------------|---------------:|------------------:|\"]\nfor i,pair in enumerate(pairs):\n dx = pair[0].x - pair[1].x\n dy = pair[0].y - pair[1].y\n dt = abs(pair[0].z - pair[1].z)\n delta = np.sqrt(dx**2 + dy**2)\n fmt = \"|{}|{}|{:03f}|{:03f}|\"\n ss.append(fmt.format(i, pair[0].label[:-14], delta*1000, dt))\n deltas.append(delta)\n angles.append(abs(dt))\n\ndisplay_markdown(Markdown('\\n'.join(ss)))\n \nfig, axes = plt.subplots(ncols=2)\naxes[0].hist(deltas, bins=50)\naxes[0].set_xlabel(\"offset(mm)\")\naxes[1].hist(angles, bins=50)\naxes[1].set_xlabel(\"offset(deg)\")\nplt.tight_layout()\nplt.show()\n\n\nfig, axs = plt.subplots(2,2, sharex=True)\ntimes = []\ndxs = []\ndys = []\ndrs = []\ndthetas = []\nfor i,pair in enumerate(pairs):\n dt = datetime.strptime(pair[0].label[:-14], \"%d/%m/%Y-%H:%M:%S\")\n times.append(dt)\n dx = (pair[0].x - pair[1].x)*1000\n dy = (pair[0].y - pair[1].y)*1000\n dxs.append(dx)\n dys.append(dy)\n drs.append(np.sqrt(dx**2 + dy**2))\n dthetas.append(abs(pair[0].z - pair[1].z))\n\nlabels = [\"$\\Delta$x ($\\mu$m)\", \"$\\Delta$y ($\\mu$m)\", \"$\\Delta$r ($\\mu$m)\", \"$\\Delta \\\\theta$ (deg)\"]\naxs = chain.from_iterable(axs)\ndatas = [dxs, dys, drs, dthetas]\nfor label, ax, data in zip(labels, axs, datas):\n months = mdates.MonthLocator() # every month\n monthFmt = mdates.DateFormatter('%b')\n ax.xaxis.set_major_locator(months)\n ax.xaxis.set_major_formatter(monthFmt) \n ax.plot_date(times, data)\n ax.set_ylabel(label)\n ax.set_yscale('log')\n\nfig.tight_layout()\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nudomarinero/mltier1
PanSTARRS_WISE_reddening.ipynb
gpl-3.0
[ "PanSTARRS - WISE reddening\nApply the reddenign to the relevant magnitudes of our samples. This is done before computing the $Q_0$ or applying the ML cross-matching.", "import numpy as np\nfrom astropy.table import Table\nfrom astropy import units as u\nfrom astropy.coordinates import SkyCoord\n\nfrom mltier1 import Field\nfrom extinction import FILTER_EXT, get_eb_v\n\n%load_ext autoreload\n\n%autoreload\n\n%pylab inline\n\nfield = Field(170.0, 190.0, 45.5, 56.5)\nfield_full = Field(160.0, 232.0, 42.0, 62.0)", "Load the data\nLoad the catalogues", "panstarrs = Table.read(\"panstarrs_u1.fits\")\n\nwise = Table.read(\"wise_u1.fits\")", "Coordinates\nAs we will use the coordinates to retrieve the extinction in their positions", "coords_panstarrs = SkyCoord(panstarrs['raMean'], panstarrs['decMean'], unit=(u.deg, u.deg), frame='icrs')\n\ncoords_wise = SkyCoord(wise['raWise'], wise['decWise'], unit=(u.deg, u.deg), frame='icrs')", "Reddening\nGet the extinction for the positions of the sources in the catalogues.", "ext_panstarrs = get_eb_v(coords_panstarrs.ra.deg, coords_panstarrs.dec.deg)\n\next_wise = get_eb_v(coords_wise.ra.deg, coords_wise.dec.deg)", "Apply the correction to each position", "i_correction = ext_panstarrs * FILTER_EXT[\"i\"]\n\nw1_correction = ext_wise * FILTER_EXT[\"W1\"]\n\nhist(i_correction, bins=100);\n\nhist(w1_correction, bins=100);\n\npanstarrs.rename_column(\"i\", 'iUncor')\n\nwise.rename_column(\"W1mag\", 'W1magUncor')\n\npanstarrs[\"i\"] = panstarrs[\"iUncor\"] - i_correction\n\nwise[\"W1mag\"] = wise[\"W1magUncor\"] - w1_correction", "Save the corrected catalogues\nPanSTARRS", "columns_save = ['objID', 'raMean', 'decMean', 'raMeanErr', 'decMeanErr', 'i', 'iErr']\n\npanstarrs[columns_save].write('panstarrs_u2.fits', format=\"fits\")\n\npanstarrs[\"ext\"] = ext_panstarrs\n\npanstarrs[['objID', \"ext\"]].write('panstarrs_extinction.fits', format=\"fits\")\n\n# Free memory\ndel panstarrs", "WISE", "columns_save = ['AllWISE', 'raWise', 'decWise', 'raWiseErr', 'decWiseErr', 'W1mag', 'W1magErr']\n\nwise[columns_save].write('wise_u2.fits', format=\"fits\")\n\nwise[\"ext\"] = ext_wise\n\nwise[['AllWISE', \"ext\"]].write('wise_extinction.fits', format=\"fits\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]