repo_name | path | license | cells | types
---|---|---|---|---|
peterwittek/ipython-notebooks | Parameteric and Bilevel Polynomial Optimization Problems.ipynb | gpl-3.0 | [
"Relaxations of parametric and bilevel polynomial optimization problems\nSuppose we are interested in finding the global optimum of the following constrained polynomial optimization problem:\n$$ \\min_{x\\in\\mathbb{R}^n}f(x)$$\nsuch that\n$$ g_i(x) \\geq 0, i=1,\\ldots,r$$\nHere $f$ and $g_i$ are polynomials in $x$. We can think of the constraints as a semialgebraic set $\\mathbf{K}={x\\in\\mathbb{R}^n: g_i(x) \\geq 0, i=1,\\ldots,r}$. Lasserre's method gives a series of semidefinite programming (SDP) relaxation of increasing size that approximate this optimum through the moments of $x$.\nA manuscript appeared on arXiv a few months ago that considers bilevel polynomial optimization problems, that is, when a set of variables is subject to a lower level optimization. The method heavily relies on the joint+marginal approach to SDP relaxations of parametric polynomial optimization problems. Here we discuss both methods and how they can be implemented with Ncpol2sdpa>=1.10 in Python.\nConstraints on moments\nLet us consider the plainest form of the SDP relaxation first with the the following trivial polynomial optimization problem:\n$$ \\min_{x\\in \\mathbb{R}}2x^2$$\nLet us denote the linear mapping from monomials to $\\mathbb{R}$ by $L_x$. Then, for instance, $L_x(2x^2)= 2y_2$. If there is a representing Borel measure $\\mu$ on the semialgebraic set $\\mathbf{K}$ of the constraints (in this case, there are no constraints), then we can write $y_\\alpha = \\int_\\mathbf{K} x^\\alpha \\mathrm{d}\\mu$. The level-1 relaxation will be\n$$ \\min_{y}2y_{2}$$\nsuch that\n$$\\left[ \\begin{array}{cc}1 & y_{1} \\y_{1} & y_{2}\\end{array} \\right] \\succeq{}0.$$\nWe import some functions first that we will use later:",
"import matplotlib.pyplot as plt\nfrom math import sqrt\nfrom sympy import integrate, N\nfrom ncpol2sdpa import SdpRelaxation, generate_variables, flatten\n%matplotlib inline ",
"Not surprisingly, solving the example gives a very accurate result:",
"x = generate_variables('x')[0]\nsdp = SdpRelaxation([x])\nsdp.get_relaxation(1, objective=x**2)\nsdp.solve()\nprint(sdp.primal, sdp.dual)",
"Notice that even in this formulation, there is an implicit constraint on a moment: the top left element of the moment matrix is 1. Given a representing measure, this means that $\\int_\\mathbf{K} \\mathrm{d}\\mu=1$. It is actually because of this that a $\\lambda$ dual variable appears in the dual formulation:\n$$max_{\\lambda, \\sigma_0} \\lambda$$\nsuch that\n$$2x^2 - \\lambda = \\sigma_0\\\n\\sigma_0\\in \\Sigma{[x]}, \\mathrm{deg}\\sigma_0\\leq 2.$$\nIn fact, we can move $\\lambda$ to the right-hand side, where the sum-of-squares (SOS) decomposition is, $\\lambda$ being a trivial SOS multiplied by the constraint $\\int_\\mathbf{K} \\mathrm{d}\\mu$, that is, by 1.\nWe normally think of adding some $g_i(x)$ polynomial constraints that define a semialgebraic set, and then constructing matching localizing matrices. We can, however, impose more constraints on the moments. For instance, we can add a constraint that $\\int_\\mathbf{K} x\\mathrm{d}\\mu = 1$. All of these constraints will have a constant instead of an SOS polynomial in the dual. To ensure the moments are not substituted out while generating the SDP, we enter them as pairs of moment inequalities. Solving this problem gives the correct result again:",
"moments = [x-1, 1-x]\nsdp = SdpRelaxation([x])\nsdp.get_relaxation(1, objective=x**2, momentinequalities=moments)\nsdp.solve()\nprint(sdp.primal, sdp.dual)",
"The dual changed, slightly. Let $\\gamma_\\beta=\\int_\\mathbf{K} x^\\beta\\mathrm{d}\\mu$ for $\\beta=0, 1$. Then the dual reads as\n$$max_{\\lambda_\\beta, \\sigma_0} \\sum_{\\beta=0}^1\\lambda_\\beta \\gamma_\\beta$$\nsuch that\n$$2x^2 - \\sum_{\\beta=0}^1\\lambda_\\beta x^\\beta = \\sigma_0\\\n\\sigma_0\\in \\Sigma{[x]}, \\mathrm{deg}\\sigma_0\\leq 2.$$\nIndeed, if we extract the coefficients, we will see that $x$ gets a $\\lambda_1=2$ (note that equalities are replaced by pairs of inequalities):",
"coeffs = [-sdp.extract_dual_value(0, range(1))]\ncoeffs += [sdp.y_mat[2*i+1][0][0] - sdp.y_mat[2*i+2][0][0]\n for i in range(len(moments)//2)]\nsigma_i = sdp.get_sos_decomposition()\nprint(coeffs, [sdp.dual, sigma_i[1]-sigma_i[2]])",
"Moment constraints play a crucial role in the joint+marginal approach of the SDP relaxation of polynomial optimization problems, and hence also indirectly in the bilevel polynomial optimization problems.\nJoint+marginal approach\nIn a parametric polynomial optimization problem, we can separate two sets of variables, and one set acts as a parameter to the problem. More formally, we would like to find the following function:\n$$ J(x) = \\inf_{y\\in\\mathbb{R}^m}{f(x,y): h_j(y)\\geq 0, j=1,\\ldots,r},$$\nwhere $x\\in\\mathbf{X}={x\\in \\mathbb{R}^n: h_k(x)\\geq 0, k=r+1,\\ldots,t}$. This can be relaxed as an SDP, and we can extract an approximation $J_k(x)$ at level-$k$ from the dual solution. The primal form reads as\n$$ \\mathrm{inf}z L_z(f)$$\nsuch that\n$$ M_k(z)\\succeq 0,\\\nM{k-v_j}(h_j z)\\succeq 0, j=1,\\ldots,t\\\nL_z(x^\\beta) = \\gamma_\\beta, \\forall\\beta\\in\\mathbb{N}_k^n.$$\nNotice that the localizing matrices also address the polynomial constraints that define the semialgebraic set $\\mathbf{X}$. If the positivity constraints are fulfilled, then we have a finite Borel representing measure $\\mu$ on $\\mathbf{K}={h_j(y)\\geq 0, j=1,\\ldots,r}$ such that $z_{\\alpha\\beta}=\\int_\\mathbf{K} x^\\alpha y^\\beta\\mathrm{d}\\mu$. \nThe part that is different from the regular Lasserre hierachy is the last line, where $\\gamma_\\beta=\\int_\\mathbf{X} x^\\beta\\mathrm{d}\\varphi(x)$. This establishes a connection between the moments of $x$ on $\\mathbf{K}$ in measure $\\mu$ and the moments of $x$ on $\\mathbf{X}$ in measure $\\varphi$. This $\\varphi$ measure is a Borel probability measure on $\\mathbf{X}$ with a positive density with respect to the Lebesgue measure on $\\mathbb{R}^n$. In other words, the marginal of one measure must match the other on $\\mathbf{X}$.\nThe dual of the primal form of the SDP with these moment constraints is\n$$ \\mathrm{sup}{p, \\sigma_i} \\int\\mathbf{X} p \\mathrm{d}\\varphi$$\nsuch that\n$$ f - p = \\sigma_0 + \\sum_{j=1}^t \\sigma_j h_j\\\np\\in\\mathbb{R}[x], \\sigma_j\\in\\Sigma[x, y], j=0,\\ldots,t\\\n\\mathrm{deg} p\\leq 2k, \\mathrm{deg} \\sigma_0\\leq 2k, \\mathrm{deg}\\sigma_j h_j \\leq 2k, j=1,\\ldots, t.\n$$\nThe polynomial $p=\\sum_{\\beta=0}^{2k} \\lambda_\\beta x^\\beta$ is the approximation $J_k(x)$. Below we reproduce the three examples of the paper.\nExample 1\nLet $\\mathbf{X}=[0,1]$, $\\mathbf{K}={(x,y): 1-x^2-y^2\\geq 0; x,y\\in\\mathbf{X}}$, and $f(x,y)= - xy^2$. We know that $J(x) = -1(1-x^2)x$. First we declare $J(x)$, a helper function to define a polynomial, and we set up the symbolic variables $x$ and $y$.",
"def J(x):\n return -(1-x**2)*x\n\n\ndef Jk(x, coeffs):\n return sum(ci*x**i for i, ci in enumerate(coeffs))\n\nx = generate_variables('x')[0]\ny = generate_variables('y')[0]",
"Next, we define the level of the relaxation and the moment constraints:",
"level = 4\ngamma = [integrate(x**i, (x, 0, 1)) for i in range(1, 2*level+1)]\nmarginals = flatten([[x**i-N(gamma[i-1]), N(gamma[i-1])-x**i] for i in range(1, 2*level+1)])",
"Finally we define the objective function and the constraints that define the semialgebraic sets, and we generate and solve the relaxation.",
"f = -x*y**2\ninequalities = [1.0-x**2-y**2, 1-x, x, 1-y, y]\n\nsdp = SdpRelaxation([x, y], verbose=0)\nsdp.get_relaxation(level, objective=f, momentinequalities=marginals,\n inequalities=inequalities)\nsdp.solve()\nprint(sdp.primal, sdp.dual, sdp.status)\ncoeffs = [sdp.extract_dual_value(0, range(len(inequalities)+1))]\ncoeffs += [sdp.y_mat[len(inequalities)+1+2*i][0][0] - sdp.y_mat[len(inequalities)+1+2*i+1][0][0]\n for i in range(len(marginals)//2)]",
"To check the correctness of the approximation, we plot the optimal and the approximated functions over the domain.",
"x_domain = [i/100. for i in range(100)]\nplt.plot(x_domain, [J(xi) for xi in x_domain], linewidth=2.5)\nplt.plot(x_domain, [Jk(xi, coeffs) for xi in x_domain], linewidth=2.5)\nplt.show()",
"Example 2\nThe set $\\mathbf{X}=[0,1]$ remains the same. Let $\\mathbf{K}={(x,y): 1-y_1^2-y_2^2\\geq 0}$, and $f(x,y) = xy_1 + (1-x)y_2$. Now the optimal $J(x)$ will be $-\\sqrt{x^2+(1-x)^2}$.",
"def J(x):\n return -sqrt(x**2+(1-x)**2)\n\nx = generate_variables('x')[0]\ny = generate_variables('y', 2)\n\nf = x*y[0] + (1-x)*y[1]\n\ngamma = [integrate(x**i, (x, 0, 1)) for i in range(1, 2*level+1)]\nmarginals = flatten([[x**i-N(gamma[i-1]), N(gamma[i-1])-x**i] for i in range(1, 2*level+1)])\ninequalities = [1-y[0]**2-y[1]**2, x, 1-x]\nsdp = SdpRelaxation(flatten([x, y]))\nsdp.get_relaxation(level, objective=f, momentinequalities=marginals,\n inequalities=inequalities)\nsdp.solve()\nprint(sdp.primal, sdp.dual, sdp.status)\ncoeffs = [sdp.extract_dual_value(0, range(len(inequalities)+1))]\ncoeffs += [sdp.y_mat[len(inequalities)+1+2*i][0][0] - sdp.y_mat[len(inequalities)+1+2*i+1][0][0]\n for i in range(len(marginals)//2)]\nplt.plot(x_domain, [J(xi) for xi in x_domain], linewidth=2.5)\nplt.plot(x_domain, [Jk(xi, coeffs) for xi in x_domain], linewidth=2.5)\nplt.show()",
"Example 3\nNote that this is Example 4 in the paper. The set $\\mathbf{X}=[0,1]$ remains the same, whereas $\\mathbf{K}={(x,y): xy_1^2+y_2^2-x= 0, y_1^2+xy_2^2-x= 0}$, and $f(x,y) = (1-2x)(y_1+y_2)$. The optimal $J(x)$ is $-2|1-2x|\\sqrt{x/(1+x)}$. We enter the equalities as pairs of inequalities.",
"def J(x):\n return -2*abs(1-2*x)*sqrt(x/(1+x))\n\nx = generate_variables('x')[0]\ny = generate_variables('y', 2)\nf = (1-2*x)*(y[0] + y[1])\n\ngamma = [integrate(x**i, (x, 0, 1)) for i in range(1, 2*level+1)]\nmarginals = flatten([[x**i-N(gamma[i-1]), N(gamma[i-1])-x**i] for i in range(1, 2*level+1)])\n\ninequalities = [x*y[0]**2 + y[1]**2 - x, - x*y[0]**2 - y[1]**2 + x,\n y[0]**2 + x*y[1]**2 - x, - y[0]**2 - x*y[1]**2 + x,\n 1-x, x]\nsdp = SdpRelaxation(flatten([x, y]))\nsdp.get_relaxation(level, objective=f, momentinequalities=marginals,\n inequalities=inequalities)\nsdp.solve(solver=\"mosek\")\nprint(sdp.primal, sdp.dual, sdp.status)\ncoeffs = [sdp.extract_dual_value(0, range(len(inequalities)+1))]\ncoeffs += [sdp.y_mat[len(inequalities)+1+2*i][0][0] - sdp.y_mat[len(inequalities)+1+2*i+1][0][0]\n for i in range(len(marginals)//2)]\nplt.plot(x_domain, [J(xi) for xi in x_domain], linewidth=2.5)\nplt.plot(x_domain, [Jk(xi, coeffs) for xi in x_domain], linewidth=2.5)\nplt.show()",
"Bilevel problem of nonconvex lower level\nWe define the bilevel problem as follows:\n$$ \\min_{x\\in\\mathbb{R}^n, y\\in\\mathbb{R}^m}f(x,y)$$\nsuch that\n$$g_i(x, y) \\geq 0, i=1,\\ldots,s,\\\ny\\in Y(x)=\\mathrm{argmin}_{w\\in\\mathbb{R}^m}{G(x,w): h_j(w)\\geq 0, j=1,...,r}.$$\nThe more interesting case is when the when the lower level problem's objective function $G(x,y)$ is nonconvex. We consider the $\\epsilon$-approximation of this case:\n$$ \\min_{x\\in\\mathbb{R}^n, y\\in\\mathbb{R}^m}f(x,y)$$\nsuch that\n$$g_i(x, y) \\geq 0, i=1,\\ldots,s,\\\nh_j(y) \\geq 0, j=1,\\dots, r,\\\nG(x,y) - \\mathrm{min}_{w\\in\\mathbb{R}^m}{G(x,w): h_j(w)\\geq 0, j=1,...,r}\\leq \\epsilon.$$\nThis approximation will give an increasing lower bound) on the original problem. The min function on the right of $G(x,y)$ is essentially a parametric polynomial optimization problem, that is, our task is to find $J(x)$. We have to ensure that the parameter set is compact, so we add a set of constraints on the coordinates of $x$: ${M^2-x_l^2\\geq 0, l=1,\\ldots,n}$ for some $M>0$.\nThe idea is that we relax this as an SDP at some level $k$ and fixed $\\epsilon$ to obtain the following single-level polynomial optimization problem:\n$$ \\min_{x\\in\\mathbb{R}^n, y\\in\\mathbb{R}^m}f(x,y)$$\nsuch that\n$$g_i(x, y) \\geq 0, i=1,\\ldots,s,\\\nh_j(y) \\geq 0, j=1,\\dots, r,\\\nG(x,y) - J_k(x)\\leq \\epsilon.$$\nThen we relax this is an another SDP at level $k$.\nConsider a test problem defined as follows:\n$$ \\min_{(x,y)\\in\\mathbb{R}^2} x+y$$\nsuch that\n$$x\\in[-1,1], \\\ny\\in \\mathrm{argmin}_{w\\in\\mathbb{R}^m}{\\frac{xy^2}{2}-\\frac{y^3}{3}, y\\in[-1,1]}$$.\nThis is clearly a bilevel problem. We set up the necessary variables and constraints, requesting a level-3 relaxation, and also fixing $\\epsilon$ and a choice of $M$.",
"x = generate_variables('x')[0]\ny = generate_variables('y')[0]\n\nf = x + y\ng = [x <= 1.0, x >= -1.0]\nG = x*y**2/2.0 - y**3/3.0\nh = [y <= 1.0, y >= -1.0]\nepsilon = 0.001\nM = 1.0\nlevel = 3",
"We define the relaxation of the parametric polynomial optimization problem that returns an approximation of $J(x)$ from the dual:",
"def lower_level(k, G, h, M):\n gamma = [integrate(x**i, (x, -M, M))/(2*M) for i in range(1, 2*k+1)]\n marginals = flatten([[x**i-N(gamma[i-1]), N(gamma[i-1])-x**i] for i in range(1, 2*k+1)])\n inequalities = h + [x**2 <= M**2]\n lowRelaxation = SdpRelaxation([x, y])\n lowRelaxation.get_relaxation(k, objective=G,\n momentinequalities=marginals,\n inequalities=inequalities)\n lowRelaxation.solve()\n print(\"Low-level:\", lowRelaxation.primal, lowRelaxation.dual, lowRelaxation.status)\n coeffs = []\n for i in range(len(marginals)//2):\n coeffs.append(lowRelaxation.y_mat[len(inequalities)+1+2*i][0][0] -\n lowRelaxation.y_mat[len(inequalities)+1+2*i+1][0][0])\n blocks = [i for i in range(len(inequalities)+1)]\n constant = lowRelaxation.extract_dual_value(0, blocks)\n return constant + sum(ci*x**(i+1) for i, ci in enumerate(coeffs))",
"Finally, we put it all together:",
"Jk = lower_level(level, G, h, M)\ninequalities = g + h + [G - Jk <= epsilon]\nhighRelaxation = SdpRelaxation([x, y], verbose=0)\nhighRelaxation.get_relaxation(level, objective=f,\n inequalities=inequalities)\nhighRelaxation.solve()\nprint(\"High-level:\", highRelaxation.primal, highRelaxation.status)\nprint(\"Optimal x and y:\", highRelaxation[x], highRelaxation[y])\n",
"These values are close to the analytical solution."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
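The level-1 moment-matrix condition discussed in the notebook above can be sanity-checked without an SDP solver for the toy problem $\min 2x^2$. The sketch below is not part of the notebook; it uses plain NumPy (no Ncpol2sdpa) and only illustrates that the moment matrix must be positive semidefinite and that $y_1=y_2=0$ attains the relaxed optimum.

```python
import numpy as np

# Level-1 moment matrix for a single variable x: entries are the moments
# y_k = L_x(x^k), with the top-left entry fixed to 1 (the mass of the measure).
def moment_matrix(y1, y2):
    return np.array([[1.0, y1], [y1, y2]])

def is_psd(matrix, tol=1e-9):
    # A symmetric matrix is positive semidefinite iff all eigenvalues are >= 0.
    return bool(np.all(np.linalg.eigvalsh(matrix) >= -tol))

# The relaxation minimizes L_x(2x^2) = 2*y2 subject to the PSD constraint.
# y1 = y2 = 0 is feasible (the Dirac measure at 0), so the relaxed optimum is 0,
# which matches the true minimum of 2x^2 over the reals.
print(is_psd(moment_matrix(0.0, 0.0)))  # True
print(is_psd(moment_matrix(1.0, 0.5)))  # False: y2 < y1**2 cannot come from a measure
```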
Vizzuality/gfw | docs/Update_GFW_Layers_Vault.ipynb | mit | [
"Create Layer Config Backup\nThis notebook outlines how to run a process to create a remote backup of gfw layers.\nRough process:\n\nRun this notebook from the gfw/data folder\nWait...\nCheck _metadata.json files in the production and staging folders for changes\nIf everything looks good, make a PR\n\nFirst, install the latest version of LMIPy",
"!pip install LMIPy\n\nfrom IPython.display import clear_output\nclear_output()\n\nprint('LMI ready!')",
"Next, import relevent modules",
"import LMIPy as lmi\n\nimport os\nimport json\nimport shutil\n\nfrom pprint import pprint\nfrom datetime import datetime\nfrom tqdm import tqdm",
"First, pull the gfw repo and check that the following path correctly finds the data/layers folder, inside which, you should find a production and staging folder.",
"envs = ['staging', 'production']\n\npath = './backup/configs'\n\n# Create directory and archive previous datasets\nwith open(path + '/metadata.json') as f:\n date = json.load(f)[0]['updatedAt']\n \nshutil.make_archive(f'./backup/archived/archive_{date}', 'zip', path)\n\n# Check correct folders are found\n\nif not all([folder in os.listdir(path) for folder in envs]):\n print(f'Boo! Incorrect path: {path}')\nelse:\n print('Good to go!')",
"Run the following to save, build .json files and log changes.\nUpdate record",
"%%time\nfor env in envs:\n \n # Get all old ids\n old_ids = [file.split('.json')[0] for file in os.listdir(path + f'/{env}') if '_metadata' not in file]\n \n old_datasets = []\n files = os.listdir(path + f'/{env}')\n \n # Extract all old datasets\n for file in files:\n if '_metadata' not in file:\n with open(path + f'/{env}/{file}') as f:\n old_datasets.append(json.load(f))\n \n # Now pull all current gfw datasets and save\n col = lmi.Collection(app=['gfw'], env=env)\n col.save(path + f'/{env}')\n \n # Get all new ids\n new_ids = [file.split('.json')[0] for file in os.listdir(path + f'/{env}') if '_metadata' not in file]\n \n # See which are new, and which have been removed\n added = list(set(new_ids) - set(old_ids))\n removed = list(set(old_ids) - set(new_ids))\n changed = []\n \n # COmpare old and new, logging those that have changed\n for old_dataset in old_datasets:\n ds_id = old_dataset['id']\n old_ids.append(ds_id)\n with open(path + f'/{env}/{ds_id}.json') as f:\n new_dataset = json.load(f)\n \n if old_dataset != new_dataset:\n changed.append(ds_id)\n \n # Create metadata json\n with open(path + f'/{env}/_metadata.json', 'w') as f:\n \n meta = {\n 'updatedAt': datetime.today().strftime('%Y-%m-%d@%Hh-%Mm-%Ss'),\n 'env': env,\n 'differences': {\n 'changed': changed,\n 'added': added,\n 'removed': removed\n }\n }\n \n # And save it too!\n json.dump(meta,f)\n \nprint('Done!')\n\n# Generate rich metadata\n\nmetadata = []\nfor env in tqdm(envs):\n with open(path + f'/{env}/_metadata.json') as f:\n metadata.append(json.load(f))\n \nfor env in tqdm(metadata):\n for change_type, ds_list in env['differences'].items():\n tmp = []\n for dataset in ds_list:\n # generate Dataset entity to get name etc...\n print(dataset)\n tmp.append(str(lmi.Dataset(dataset)))\n env['differences'][change_type] = tmp\n \nwith open(path + f'/metadata.json', 'w') as f:\n \n # And save it too!\n json.dump(metadata,f)\n\npprint(metadata)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
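The heart of the update step in the gfw backup notebook above is a set comparison between the previously saved layer configs and the freshly pulled ones. Below is a standard-library-only sketch of that diff logic; the directory arguments are placeholders, not a claim about the repo's exact folder layout.

```python
import json
from pathlib import Path

def diff_layer_configs(old_dir, new_dir):
    """Compare two folders of <dataset-id>.json files and report which ids were
    added, removed, or changed, mirroring the bookkeeping done in the notebook."""
    def load(folder):
        return {p.stem: json.loads(p.read_text())
                for p in Path(folder).glob("*.json") if "_metadata" not in p.name}

    old, new = load(old_dir), load(new_dir)
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "changed": sorted(i for i in set(old) & set(new) if old[i] != new[i]),
    }

# Hypothetical usage (paths are illustrative only):
# print(diff_layer_configs("backup/archived/production", "backup/configs/production"))
```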
pastas/pasta | examples/notebooks/02_fix_parameters.ipynb | mit | [
"Time Series Analysis with Pastas\nDeveloped by Mark Bakker, TU Delft\nRequired files to run this notebook (all available from the data subdirectory):\n\nHead files: head_nb1.csv, B58C0698001_1.csv, B50H0026001_1.csv, B22C0090001_1.csv, headwell.csv\nPricipitation files: rain_nb1.csv, neerslaggeg_HEIBLOEM-L_967.txt, neerslaggeg_ESBEEK_831.txt, neerslaggeg_VILSTEREN_342.txt, rainwell.csv\nEvaporation files: evap_nb1.csv, etmgeg_380.txt, etmgeg_260.txt, evapwell.csv\nWell files: well1.csv, well2.csv\nFigure: b58c0698_dino.png\n\nPastas\nPastas is a computer program for hydrological time series analysis and is available from the Pastas Github . Pastas makes heavy use of pandas timeseries. An introduction to pandas timeseries can be found, for example, here. The Pastas documentation is available here.",
"import pandas as pd\nimport pastas as ps\nimport matplotlib.pyplot as plt\n\nps.set_log_level(\"ERROR\")\nps.show_versions()",
"Load the head observations\nThe first step in time series analysis is to load a time series of head observations. The time series needs to be stored as a pandas.Series object where the index is the date (and time, if desired). pandas provides many options to load time series data, depending on the format of the file that contains the time series. In this example, measured heads are stored in the csv file head_nb1.csv. \nThe heads are read from a csv file with the read_csv function of pandas and are then squeezed to create a pandas Series object. To check if you have the correct data type, use the type command as shown below.",
"ho = pd.read_csv('../data/head_nb1.csv', parse_dates=['date'], index_col='date', squeeze=True)\nprint('The data type of the oseries is:', type(ho))",
"The variable ho is now a pandas Series object. To see the first five lines, type ho.head().",
"ho.head()",
"The series can be plotted as follows",
"ho.plot(style='.', figsize=(12, 4))\nplt.ylabel('Head [m]');\nplt.xlabel('Time [years]');",
"Load the stresses\nThe head variation shown above is believed to be caused by two stresses: rainfall and evaporation. Measured rainfall is stored in the file rain_nb1.csv and measured potential evaporation is stored in the file evap_nb1.csv. \nThe rainfall and potential evaporation are loaded and plotted.",
"rain = pd.read_csv('../data/rain_nb1.csv', parse_dates=['date'], index_col='date', squeeze=True)\nprint('The data type of the rain series is:', type(rain))\n\nevap = pd.read_csv('../data/evap_nb1.csv', parse_dates=['date'], index_col='date', squeeze=True)\nprint('The data type of the evap series is', type(evap))\n\nplt.figure(figsize=(12, 4))\nrain.plot(label='rain')\nevap.plot(label='evap')\nplt.xlabel('Time [years]')\nplt.ylabel('Rainfall/Evaporation (m/d)')\nplt.legend(loc='best');",
"Recharge\nAs a first simple model, the recharge is approximated as the measured rainfall minus the measured potential evaporation.",
"recharge = rain - evap\nplt.figure(figsize=(12, 4))\nrecharge.plot()\nplt.xlabel('Time [years]')\nplt.ylabel('Recharge (m/d)');",
"First time series model\nOnce the time series are read from the data files, a time series model can be constructed by going through the following three steps:\n\nCreat a Model object by passing it the observed head series. Store your model in a variable so that you can use it later on. \nAdd the stresses that are expected to cause the observed head variation to the model. In this example, this is only the recharge series. For each stess, a StressModel object needs to be created. Each StressModel object needs three input arguments: the time series of the stress, the response function that is used to simulate the effect of the stress, and a name. In addition, it is recommended to specified the kind of series, which is used to perform a number of checks on the series and fix problems when needed. This checking and fixing of problems (for example, what to substitute for a missing value) depends on the kind of series. In this case, the time series of the stress is stored in the variable recharge, the Gamma function is used to simulate the response, the series will be called 'recharge', and the kind is prec which stands for precipitation. One of the other keyword arguments of the StressModel class is up, which means that a positive stress results in an increase (up) of the head. The default value is True, which we use in this case as a positive recharge will result in the heads going up. Each StressModel object needs to be stored in a variable, after which it can be added to the model. \nWhen everything is added, the model can be solved. The default option is to minimize the sum of the squares of the errors between the observed and modeled heads.",
"ml = ps.Model(ho)\nsm1 = ps.StressModel(recharge, ps.Gamma, name='recharge', settings='prec')\nml.add_stressmodel(sm1)\nml.solve(tmin='1985', tmax='2010')",
"The solve function has a number of default options that can be specified with keyword arguments. One of these options is that by default a fit report is printed to the screen. The fit report includes a summary of the fitting procedure, the optimal values obtained by the fitting routine, and some basic statistics. The model contains five parameters: the parameters $A$, $n$, and $a$ of the Gamma function used as the response function for the recharge, the parameter $d$, which is a constant base level, and the parameter $\\alpha$ of the noise model, which will be explained a little later on in this notebook.\nThe results of the model are plotted below.",
"ml.plot(figsize=(12, 4));\n\nml = ps.Model(ho)\nsm1 = ps.StressModel(recharge, ps.Gamma, name='recharge', settings='prec')\nml.add_stressmodel(sm1)\nml.solve(tmin='1985', tmax='2010', solver=ps.LeastSquares)\n\nml = ps.Model(ho)\nsm1 = ps.StressModel(recharge, ps.Gamma, name='recharge', settings='prec')\nml.add_stressmodel(sm1)\nml.set_parameter('recharge_n', vary=False)\nml.solve(tmin='1985', tmax='2010', solver=ps.LeastSquares)\n\nml.plot(figsize=(10, 4));"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
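One small caveat for re-running the Pastas notebook above with a recent pandas: the `squeeze=True` keyword of `read_csv` has since been deprecated and removed in newer releases. A version-robust sketch (assuming the same `../data/head_nb1.csv` layout as the notebook) reads a DataFrame and squeezes the single column afterwards:

```python
import pandas as pd

# Read the head observations as a DataFrame and squeeze the single data column
# into a Series; this avoids the removed `squeeze=True` keyword of read_csv.
ho = (
    pd.read_csv("../data/head_nb1.csv", parse_dates=["date"], index_col="date")
    .squeeze("columns")
)
print("The data type of the oseries is:", type(ho))
```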
UCSBarchlab/PyRTL | ipynb-examples/introduction-to-hardware.ipynb | bsd-3-clause | [
"Introduction to Hardware Design\nThis code works through the hardware design process with the the\naudience of software developers more in mind. We start with the simple\nproblem of designing a fibonacci sequence calculator (http://oeis.org/A000045).",
"import pyrtl",
"A normal old python function to return the Nth fibonacci number.\nInterative implementation of fibonacci, just iteratively adds a and b to\ncalculate the nth number in the sequence.\n>> [software_fibonacci(x) for x in range(10)]\n[0, 1, 1, 2, 3, 5, 8, 13, 21, 34]",
"def software_fibonacci(n):\n a, b = 0, 1\n for i in range(n):\n a, b = b, a + b\n return a",
"Attempt 1\nLet's convert this into some hardware that computes the same thing. Our first go will be to just replace the 0 and 1 with WireVectors to see\nwhat happens.",
"def attempt1_hardware_fibonacci(n, bitwidth):\n a = pyrtl.Const(0)\n b = pyrtl.Const(1)\n for i in range(n):\n a, b = b, a + b\n return a",
"The above looks really nice does not really represent a hardware implementation\nof fibonacci.\nLet's reason through the code, line by line, to figure out what it would actually build.\n\n\na = pyrtl.Const(0)\n\nThis makes a wirevector of bitwidth=1 that is driven by a zero. Thus a is a wirevector. Seems good.\n\n\n\nb = pyrtl.Const(1) \n\nJust like above, b is a wirevector driven by 1\n\n\n\nfor i in range(n):\n\nOkay, here is where things start to go off the rails a bit. This says to perform the following code 'n' times, but the value 'n' is passed as an input and is not something that is evaluated in the hardware, it is evaluated when you run the PyRTL program which generates (or more specifically elaborates) the hardware. Thus the hardware we are building will have The value of 'n' built into the hardware and won't actually be a run-time parameter. Loops are really useful for building large repetitive hardware structures, but they CAN'T be used to represent hardware that should do a computation iteratively. Instead we are going to need to use some registers to build a state machine.\n\n\na, b = b, a + b\nLet's break this apart. In the first cycle b is Const(1) and (a + b) builds an adder with a (Const(0)) and b (Const(1) as inputs. Thus (b, a + b) in the first iteration is: ( Const(1), result_of_adding( Const(0), Const(1) ) At the end of the first iteration a and b refer to those two constant values. In each following iteration more adders are built and the names a and b are bound to larger and larger trees of adders but all the inputs are constants!\n\n\nreturn a \nThe final thing that is returned then is the last output from this tree of adders which all have Consts as inputs. Thus this hardware is hard-wired to find only and exactly the value of fibonacci of the value N specified at design time! Probably not what you are intending.\n\n\n\nAttempt 2\nLet's try a different approach. Let's specify two registers (\"a\" and \"b\") and then we can update those values as we iteratively compute fibonacci of N cycle by cycle.",
"def attempt2_hardware_fibonacci(n, bitwidth):\n a = pyrtl.Register(bitwidth, 'a')\n b = pyrtl.Register(bitwidth, 'b')\n\n a.next <<= b\n b.next <<= a + b\n\n return a",
"This is looking much better. \nTwo registers, a and b store the values from which we\ncan compute the series. \nThe line a.next <<= b means that the value of a in the next\ncycle should be simply be b from the current cycle.\nThe line b.next <<= a + b says\nto build an adder, with inputs of a and b from the current cycle and assign the value\nto b in the next cycle.\nA visual representation of the hardware built is as such:\n+-----+ +---------+\n | | | |\n +===V==+ | +===V==+ |\n | | | | | |\n | a | | | b | |\n | | | | | |\n +===V==+ | +==V===+ |\n | | | |\n | +-----+ |\n | | |\n +===V===========V==+ |\n \\ adder / |\n +==============+ |\n | |\n +---------------+\nNote that in the picture the register a and b each have a wirevector which is\nthe current value (shown flowing out of the bottom of the register) and an input\nwhich is giving the value that should be the value of the register in the following\ncycle (shown flowing into the top of the register) which are a and a.next respectively.\nWhen we say return a what we are returning is a reference to the register a in\nthe picture above.\nAttempt 3\nOf course one problem is that we don't know when we are done! How do we know we\nreached the \"nth\" number in the sequence? Well, we need to add a register to\ncount up and see if we are done.\nThis is very similliar to the example before, except that now we have a register \"i\"\nwhich keeps track of the iteration that we are on (i.next <<= i + 1).\nThe function now returns two values, a reference to the register \"a\" and a reference to a single\nbit that tells us if we are done. That bit is calculated by comparing \"i\" to the\nto a wirevector \"n\" that is passed in to see if they are the same.",
"def attempt3_hardware_fibonacci(n, bitwidth):\n a = pyrtl.Register(bitwidth, 'a')\n b = pyrtl.Register(bitwidth, 'b')\n i = pyrtl.Register(bitwidth, 'i')\n\n i.next <<= i + 1\n a.next <<= b\n b.next <<= a + b\n\n return a, i == n",
"Attempt 4\nThis is now far enough along that we can simulate the design and see what happens...",
"def attempt4_hardware_fibonacci(n, req, bitwidth):\n a = pyrtl.Register(bitwidth, 'a')\n b = pyrtl.Register(bitwidth, 'b')\n i = pyrtl.Register(bitwidth, 'i')\n local_n = pyrtl.Register(bitwidth, 'local_n')\n done = pyrtl.WireVector(bitwidth=1, name='done')\n\n with pyrtl.conditional_assignment:\n with req:\n local_n.next |= n\n i.next |= 0\n a.next |= 0\n b.next |= 1\n with pyrtl.otherwise:\n i.next |= i + 1\n a.next |= b\n b.next |= a + b\n done <<= i == local_n\n return a, done"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
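The attempt-4 design in the PyRTL notebook above can be exercised with PyRTL's simulator. The following sketch is not part of the notebook: it re-declares the same state machine so the block is self-contained, pulses `req` once, and steps the clock until `done` goes high; the bitwidth and the input value 7 are arbitrary choices.

```python
import pyrtl

pyrtl.reset_working_block()

# Same state machine as attempt4_hardware_fibonacci in the notebook above,
# re-declared here so this block is self-contained.
def fib_design(n, req, bitwidth):
    a = pyrtl.Register(bitwidth, 'a')
    b = pyrtl.Register(bitwidth, 'b')
    i = pyrtl.Register(bitwidth, 'i')
    local_n = pyrtl.Register(bitwidth, 'local_n')
    done = pyrtl.WireVector(bitwidth=1, name='done')
    with pyrtl.conditional_assignment:
        with req:
            local_n.next |= n
            i.next |= 0
            a.next |= 0
            b.next |= 1
        with pyrtl.otherwise:
            i.next |= i + 1
            a.next |= b
            b.next |= a + b
    done <<= i == local_n
    return a, done

n_in = pyrtl.Input(8, 'n')
req_in = pyrtl.Input(1, 'req')
result, done = fib_design(n_in, req_in, bitwidth=8)

# Expose the result register through an Output so its per-cycle value is easy to read.
fib_out = pyrtl.Output(8, 'fib')
fib_out <<= result

sim_trace = pyrtl.SimulationTrace()
sim = pyrtl.Simulation(tracer=sim_trace)
sim.step({'n': 7, 'req': 1})        # pulse req to load the request for fib(7)
for _ in range(12):
    sim.step({'n': 0, 'req': 0})    # let the state machine iterate
    if sim.inspect('done'):
        print('fib(7) =', sim.inspect('fib'))   # expected: 13
        break
sim_trace.render_trace()            # waveform of a, b, i, done, fib
```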
LucaCanali/Miscellaneous | Pyspark_SQL_Magic_Jupyter/IPython_Pyspark_SQL_Magic.ipynb | apache-2.0 | [
"IPython magic functions for Pyspark\nExamples of shortcuts for executing SQL in Spark",
"#\n# IPython magic functions to use with Pyspark and Spark SQL\n# The following code is intended as examples of shorcuts to simplify the use of SQL in pyspark\n# The defined functions are:\n#\n# %sql <statement> - return a Spark DataFrame for lazy evaluation of the SQL\n# %sql_show <statement> - run the SQL statement and show max_show_lines (50) lines\n# %sql_display <statement> - run the SQL statement and display the results using a HTML table\n# - this is implemented passing via Pandas and displays up to max_show_lines (50)\n# %sql_explain <statement> - display the execution plan of the SQL statement\n#\n# Use: %<magic> for line magic or %%<magic> for cell magic.\n#\n# Author: Luca.Canali@cern.ch\n# September 2016\n#\n\nfrom IPython.core.magic import register_line_cell_magic\n\n# Configuration parameters\nmax_show_lines = 50 # Limit on the number of lines to show with %sql_show and %sql_display\ndetailed_explain = True # Set to False if you want to see only the physical plan when running explain\n\n\n@register_line_cell_magic\ndef sql(line, cell=None):\n \"Return a Spark DataFrame for lazy evaluation of the sql. Use: %sql or %%sql\"\n val = cell if cell is not None else line \n return spark.sql(val)\n\n@register_line_cell_magic\ndef sql_show(line, cell=None):\n \"Execute sql and show the first max_show_lines lines. Use: %sql_show or %%sql_show\"\n val = cell if cell is not None else line \n return spark.sql(val).show(max_show_lines) \n\n@register_line_cell_magic\ndef sql_display(line, cell=None):\n \"\"\"Execute sql and convert results to Pandas DataFrame for pretty display or further processing.\n Use: %sql_display or %%sql_display\"\"\"\n val = cell if cell is not None else line \n return spark.sql(val).limit(max_show_lines).toPandas() \n\n@register_line_cell_magic\ndef sql_explain(line, cell=None):\n \"Display the execution plan of the sql. Use: %sql_explain or %%sql_explain\"\n val = cell if cell is not None else line \n return spark.sql(val).explain(detailed_explain)\n",
"Define test tables",
"# Define test data and register it as tables \n# This is a classic example of employee and department relational tables\n# Test data will be used in the examples later in this notebook\n\nfrom pyspark.sql import Row\n\nEmployee = Row(\"id\", \"name\", \"email\", \"manager_id\", \"dep_id\")\ndf_emp = sqlContext.createDataFrame([\n Employee(1234, 'John', 'john@mail.com', 1236, 10),\n Employee(1235, 'Mike', 'mike@mail.com', 1237, 10),\n Employee(1236, 'Pat', 'pat@mail.com', 1237, 20),\n Employee(1237, 'Claire', 'claire@mail.com', None, 20),\n Employee(1238, 'Jim', 'jim@mail.com', 1236, 30)\n ])\n\ndf_emp.registerTempTable(\"employee\")\n\nDepartment = Row(\"dep_id\", \"dep_name\")\ndf_dep = sqlContext.createDataFrame([\n Department(10, 'Engineering'),\n Department(20, 'Head Quarter'),\n Department(30, 'Human resources')\n ])\n\ndf_dep.registerTempTable(\"department\")",
"Examples of how to use %SQL magic functions with Spark\nUse %sql to run SQL and return a DataFrame, lazy evaluation",
"# Example of line magic, a shortcut to run SQL in pyspark\n# Pyspark has lazy evaluation, so the query is not executed in this exmaple\n\ndf = %sql select * from employee\ndf",
"Use %sql_show to run SQL and show the top lines of the result set",
"# Example of line magic, the SQL is executed and the result is displayed\n# the maximum number of displayed lines is configurable (max_show_lines)\n\n%sql_show select * from employee",
"Example of cell magic to run SQL spanning multiple lines",
"%%sql_show \nselect emp.id, emp.name, emp.email, emp.manager_id, dep.dep_name \nfrom employee emp, department dep \nwhere emp.dep_id=dep.dep_id",
"Use %sql_display to run SQL and display the results as a HTML table\nExample of cell magic that runs SQL and then transforms it to Pandas. This will display the output as a HTML table in Jupyter notebooks",
"%%sql_display \nselect emp.id, emp.name, emp.email, emp2.name as manager_name, dep.dep_name \nfrom employee emp \n left outer join employee emp2 on emp2.id=emp.manager_id\n join department dep on emp.dep_id=dep.dep_id",
"Use %sql_explain to display the execution plan",
"%%sql_explain\nselect emp.id, emp.name, emp.email, emp2.name as manager_name, dep.dep_name \nfrom employee emp \n left outer join employee emp2 on emp2.id=emp.manager_id\n join department dep on emp.dep_id=dep.dep_id"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
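The %sql magics in the notebook above assume that a SparkSession is already available as `spark`, and the test-data cell additionally refers to `sqlContext`. When working in a plain Jupyter kernel rather than a pyspark shell, they can be created roughly as follows; the app name and local master setting are illustrative only.

```python
from pyspark.sql import SparkSession

# Build (or reuse) a local SparkSession; `spark.sql(...)` is what the %sql
# magics call under the hood.
spark = SparkSession.builder \
    .appName("sql-magics-demo") \
    .master("local[*]") \
    .getOrCreate()

# The test-data cell uses `sqlContext.createDataFrame`; the session object
# provides the same method, so it can stand in for it here.
sqlContext = spark
```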
pramitchoudhary/Experiments | notebook_gallery/other_experiments/build-models/model-selection-and-tuning/current-solutions/TPOT/TPOT-demo.ipynb | unlicense | [
"from IPython import display\nURL = \"https://github.com/rhiever/tpot\"\ndisplay.IFrame(URL, 1000, 1000)\n",
"TPOT uses a genetic algorithm (implemented with DEAP library) to pick an optimal pipeline for a regression task.\nWhat is a pipeline?\nPipeline is composed of preprocessors:\n* take polynomial transformations of features\n* \nTPOTBase is key class\nparameters:\npopulation_size: int (default: 100)\n The number of pipelines in the genetic algorithm population. Must\n be > 0.The more pipelines in the population, the slower TPOT will\n run, but it's also more likely to find better pipelines.\n* generations: int (default: 100)\n The number of generations to run pipeline optimization for. Must\n be > 0. The more generations you give TPOT to run, the longer it\n takes, but it's also more likely to find better pipelines.\n* mutation_rate: float (default: 0.9)\n The mutation rate for the genetic programming algorithm in the range\n [0.0, 1.0]. This tells the genetic programming algorithm how many\n pipelines to apply random changes to every generation. We don't\n recommend that you tweak this parameter unless you know what you're\n doing.\n* crossover_rate: float (default: 0.05)\n The crossover rate for the genetic programming algorithm in the\n range [0.0, 1.0]. This tells the genetic programming algorithm how\n many pipelines to \"breed\" every generation. We don't recommend that\n you tweak this parameter unless you know what you're doing.\n* scoring: function or str\n Function used to evaluate the quality of a given pipeline for the\n problem. By default, balanced class accuracy is used for\n classification problems, mean squared error for regression problems.\n TPOT assumes that this scoring function should be maximized, i.e.,\n higher is better.\n Offers the same options as sklearn.cross_validation.cross_val_score:\n ['accuracy', 'adjusted_rand_score', 'average_precision', 'f1',\n 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted',\n 'precision', 'precision_macro', 'precision_micro', 'precision_samples',\n 'precision_weighted', 'r2', 'recall', 'recall_macro', 'recall_micro',\n 'recall_samples', 'recall_weighted', 'roc_auc']\n* num_cv_folds: int (default: 3)\n The number of folds to evaluate each pipeline over in k-fold\n cross-validation during the TPOT pipeline optimization process\n* max_time_mins: int (default: None)\n How many minutes TPOT has to optimize the pipeline. If not None,\n this setting will override the generations parameter.\nTPOTClassifier and TPOTRegressor inherit parent class TPOTBase, with modifications of the scoring function.",
"!sudo pip install deap update_checker tqdm xgboost tpot\n\nimport pandas as pd \nimport numpy as np\nimport psycopg2 \nimport os\nimport json\nfrom tpot import TPOTClassifier\nfrom sklearn.metrics import classification_report\n\nconn = psycopg2.connect(\n user = os.environ['REDSHIFT_USER']\n ,password = os.environ['REDSHIFT_PASS'] \n ,port = os.environ['REDSHIFT_PORT']\n ,host = os.environ['REDSHIFT_HOST']\n ,database = 'tradesy'\n)\nquery = \"\"\"\n select \n purchase_dummy\n ,shipping_price_ratio\n ,asking_price\n ,price_level\n ,brand_score\n ,brand_size\n ,a_over_b\n ,favorite_count\n ,has_blurb\n ,has_image\n ,seasonal_component\n ,description_length\n ,product_category_accessories\n ,product_category_shoes\n ,product_category_bags\n ,product_category_tops\n ,product_category_dresses\n ,product_category_weddings\n ,product_category_bottoms\n ,product_category_outerwear\n ,product_category_jeans\n ,product_category_activewear\n ,product_category_suiting\n ,product_category_swim\n \n from saleability_model_v2\n \n limit 50000\n \n\"\"\"\n\ndf = pd.read_sql(query, conn)\n\ntarget = 'purchase_dummy'\ndomain = filter(lambda x: x != target, df.columns.values)\ndf = df.astype(float)\n\ny_all = df[target].values\nX_all = df[domain].values\n\nidx_all = np.random.RandomState(1).permutation(len(y_all))\nidx_train = idx_all[:int(.8 * len(y_all))]\nidx_test = idx_all[int(.8 * len(y_all)):]\n\n# TRAIN AND TEST DATA\nX_train = X_all[idx_train]\ny_train = y_all[idx_train]\nX_test = X_all[idx_test]\ny_test = y_all[idx_test]",
"Sklearn model:",
"from sklearn.ensemble import RandomForestClassifier\nsklearn_model = RandomForestClassifier()\nsklearn_model.fit(X_train, y_train)\n\nsklearn_predictions = sklearn_model.predict(X_test)\nprint classification_report(y_test, sklearn_predictions)",
"TPOT Classifier",
"tpot_model = TPOTClassifier(generations=3, population_size=10, verbosity=2, max_time_mins=10)\ntpot_model.fit(X_train, y_train)\n\ntpot_predictions = tpot_model.predict(X_test)\nprint classification_report(y_test, tpot_predictions)",
"Export Pseudo Pipeline Code",
"tpot_model.export('optimal-saleability-model.py')\n\n!cat optimal-saleability-model.py"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
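Because the TPOT notebook above pulls its training data from a private Redshift table, it cannot be re-run as-is. The sketch below keeps the same TPOT workflow but swaps in a public scikit-learn dataset and deliberately small search settings; it illustrates the API rather than reproducing the saleability model, and the exported filename is hypothetical.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from tpot import TPOTClassifier

# Public dataset stand-in for the Redshift table used in the notebook.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1)

# Small generations/population so the search finishes quickly; tune upwards
# for a serious run.
tpot_model = TPOTClassifier(generations=2, population_size=10,
                            verbosity=2, random_state=1)
tpot_model.fit(X_train, y_train)

print(classification_report(y_test, tpot_model.predict(X_test)))
tpot_model.export('optimal-breast-cancer-model.py')  # hypothetical output filename
```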
efoley/deep-learning | transfer-learning/Transfer_Learning.ipynb | mit | [
"Transfer Learning\nMost of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.\n<img src=\"assets/cnnarchitecture.jpg\" width=700px>\nVGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.\nYou can read more about transfer learning from the CS231n course notes.\nPretrained VGGNet\nWe'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.\ngit clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg\nThis is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell.",
"from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\nvgg_dir = 'tensorflow_vgg/'\n# Make sure vgg exists\nif not isdir(vgg_dir):\n raise Exception(\"VGG directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(vgg_dir + \"vgg16.npy\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:\n urlretrieve(\n 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',\n vgg_dir + 'vgg16.npy',\n pbar.hook)\nelse:\n print(\"Parameter file already exists!\")",
"Flower power\nHere we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.",
"import tarfile\n\ndataset_folder_path = 'flower_photos'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('flower_photos.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:\n urlretrieve(\n 'http://download.tensorflow.org/example_images/flower_photos.tgz',\n 'flower_photos.tar.gz',\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with tarfile.open('flower_photos.tar.gz') as tar:\n tar.extractall()\n tar.close()",
"ConvNet Codes\nBelow, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.\nHere we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \\times 224 \\times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):\n```\nself.conv1_1 = self.conv_layer(bgr, \"conv1_1\")\nself.conv1_2 = self.conv_layer(self.conv1_1, \"conv1_2\")\nself.pool1 = self.max_pool(self.conv1_2, 'pool1')\nself.conv2_1 = self.conv_layer(self.pool1, \"conv2_1\")\nself.conv2_2 = self.conv_layer(self.conv2_1, \"conv2_2\")\nself.pool2 = self.max_pool(self.conv2_2, 'pool2')\nself.conv3_1 = self.conv_layer(self.pool2, \"conv3_1\")\nself.conv3_2 = self.conv_layer(self.conv3_1, \"conv3_2\")\nself.conv3_3 = self.conv_layer(self.conv3_2, \"conv3_3\")\nself.pool3 = self.max_pool(self.conv3_3, 'pool3')\nself.conv4_1 = self.conv_layer(self.pool3, \"conv4_1\")\nself.conv4_2 = self.conv_layer(self.conv4_1, \"conv4_2\")\nself.conv4_3 = self.conv_layer(self.conv4_2, \"conv4_3\")\nself.pool4 = self.max_pool(self.conv4_3, 'pool4')\nself.conv5_1 = self.conv_layer(self.pool4, \"conv5_1\")\nself.conv5_2 = self.conv_layer(self.conv5_1, \"conv5_2\")\nself.conv5_3 = self.conv_layer(self.conv5_2, \"conv5_3\")\nself.pool5 = self.max_pool(self.conv5_3, 'pool5')\nself.fc6 = self.fc_layer(self.pool5, \"fc6\")\nself.relu6 = tf.nn.relu(self.fc6)\n```\nSo what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\nThis creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,\nfeed_dict = {input_: images}\ncodes = sess.run(vgg.relu6, feed_dict=feed_dict)",
"import os\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_vgg import vgg16\nfrom tensorflow_vgg import utils\n\ndata_dir = 'flower_photos/'\ncontents = os.listdir(data_dir)\nclasses = [each for each in contents if os.path.isdir(data_dir + each)]",
"Below I'm running images through the VGG network in batches.\n\nExercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).",
"# Set the batch size higher if you can fit in in your GPU memory\nbatch_size = 10\ncodes_list = []\nlabels = []\nbatch = []\n\ncodes = None\n\nwith tf.Session() as sess:\n \n # TODO: Build the vgg network here\n\n for each in classes:\n print(\"Starting {} images\".format(each))\n class_path = data_dir + each\n files = os.listdir(class_path)\n for ii, file in enumerate(files, 1):\n # Add images to the current batch\n # utils.load_image crops the input images for us, from the center\n img = utils.load_image(os.path.join(class_path, file))\n batch.append(img.reshape((1, 224, 224, 3)))\n labels.append(each)\n \n # Running the batch through the network to get the codes\n if ii % batch_size == 0 or ii == len(files):\n \n # Image batch to pass to VGG network\n images = np.concatenate(batch)\n \n # TODO: Get the values from the relu6 layer of the VGG network\n codes_batch = \n \n # Here I'm building an array of the codes\n if codes is None:\n codes = codes_batch\n else:\n codes = np.concatenate((codes, codes_batch))\n \n # Reset to start building the next batch\n batch = []\n print('{} images processed'.format(ii))\n\n# write codes to file\nwith open('codes', 'w') as f:\n codes.tofile(f)\n \n# write labels to file\nimport csv\nwith open('labels', 'w') as f:\n writer = csv.writer(f, delimiter='\\n')\n writer.writerow(labels)",
"Building the Classifier\nNow that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.",
"# read codes and labels from file\nimport csv\n\nwith open('labels') as f:\n reader = csv.reader(f, delimiter='\\n')\n labels = np.array([each for each in reader]).squeeze()\nwith open('codes') as f:\n codes = np.fromfile(f, dtype=np.float32)\n codes = codes.reshape((len(labels), -1))",
"Data prep\nAs usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!\n\nExercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.",
"labels_vecs = # Your one-hot encoded labels array here",
"Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.\nYou can create the splitter like so:\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\nThen split the data with \nsplitter = ss.split(x, y)\nss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.\n\nExercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.",
"train_x, train_y = \nval_x, val_y = \ntest_x, test_y = \n\nprint(\"Train shapes (x, y):\", train_x.shape, train_y.shape)\nprint(\"Validation shapes (x, y):\", val_x.shape, val_y.shape)\nprint(\"Test shapes (x, y):\", test_x.shape, test_y.shape)",
"If you did it right, you should see these sizes for the training sets:\nTrain shapes (x, y): (2936, 4096) (2936, 5)\nValidation shapes (x, y): (367, 4096) (367, 5)\nTest shapes (x, y): (367, 4096) (367, 5)\nClassifier layers\nOnce you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.\n\nExercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.",
"inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])\nlabels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])\n\n# TODO: Classifier layers and operations\n\nlogits = # output layer logits\ncost = # cross entropy loss\n\noptimizer = # training optimizer\n\n# Operations for validation/test accuracy\npredicted = tf.nn.softmax(logits)\ncorrect_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))",
"Batches!\nHere is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.",
"def get_batches(x, y, n_batches=10):\n \"\"\" Return a generator that yields batches from arrays x and y. \"\"\"\n batch_size = len(x)//n_batches\n \n for ii in range(0, n_batches*batch_size, batch_size):\n # If we're not on the last batch, grab data with size batch_size\n if ii != (n_batches-1)*batch_size:\n X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] \n # On the last batch, grab the rest of the data\n else:\n X, Y = x[ii:], y[ii:]\n # I love generators\n yield X, Y",
"Training\nHere, we'll train the network.\n\nExercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!",
"saver = tf.train.Saver()\nwith tf.Session() as sess:\n \n # TODO: Your training code here\n saver.save(sess, \"checkpoints/flowers.ckpt\")",
"Testing\nBelow you see the test accuracy. You can also see the predictions returned for images.",
"with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: test_x,\n labels_: test_y}\n test_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Test accuracy: {:.4f}\".format(test_acc))\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom scipy.ndimage import imread",
"Below, feel free to choose images and see how the trained classifier predicts the flowers in them.",
"test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'\ntest_img = imread(test_img_path)\nplt.imshow(test_img)\n\n# Run this cell if you don't have a vgg graph built\nif 'vgg' in globals():\n print('\"vgg\" object already exists. Will not create again.')\nelse:\n #create vgg\n with tf.Session() as sess:\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n vgg = vgg16.Vgg16()\n vgg.build(input_)\n\nwith tf.Session() as sess:\n img = utils.load_image(test_img_path)\n img = img.reshape((1, 224, 224, 3))\n\n feed_dict = {input_: img}\n code = sess.run(vgg.relu6, feed_dict=feed_dict)\n \nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: code}\n prediction = sess.run(predicted, feed_dict=feed).squeeze()\n\nplt.imshow(test_img)\n\nplt.barh(np.arange(5), prediction)\n_ = plt.yticks(np.arange(5), lb.classes_)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
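One possible way to fill in the data-preparation exercises of the transfer-learning notebook above (LabelBinarizer for one-hot labels, StratifiedShuffleSplit for the train/validation/test split) is sketched below. It is not the course's reference solution, and the `codes` and `labels` arrays are random placeholders standing in for the VGG codes written to disk earlier.

```python
import numpy as np
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import StratifiedShuffleSplit

# Placeholder data: in the notebook, `codes` and `labels` are read back from
# the 'codes' and 'labels' files produced by the VGG feature-extraction step.
labels = np.repeat(['daisy', 'roses', 'tulips'], 10)
codes = np.random.rand(len(labels), 4096).astype(np.float32)

# One-hot encode the string labels.
lb = LabelBinarizer()
labels_vecs = lb.fit_transform(labels)

# Stratified split: first carve off 20%, then halve it into validation and
# test sets (the second split is a plain halving for brevity).
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, rest_idx = next(ss.split(codes, labels))
half = len(rest_idx) // 2
val_idx, test_idx = rest_idx[:half], rest_idx[half:]

train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
```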
royalosyin/Python-Practical-Application-on-Climate-Variability-Studies | ex15-Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset.ipynb | mit | [
"%load_ext load_style\n%load_style talk.css",
"Trend and Anomaly Analyses of Long-term Tempro-Spatial Dataset\nTrend and anomaly analyses are widely used in atmospheric and oceanographic research for detecting long term change.\nAn example is presented in this notebook of a numerical analysis of Sea Surface Temperature (SST) where the global change rate per decade has been calculated from 1982 to 2016. Moreover, its area-weighted global monthly SST anomaly time series is presented, too. In addition, all of calculating processes is list step by step.\n\nData Source\nNOAA Optimum Interpolation (OI) Sea Surface Temperature (SST) V2 is downloaded from https://www.esrl.noaa.gov/psd/data/gridded/data.noaa.oisst.v2.html.\nSpatial Coverage:\n* 1.0 degree latitude x 1.0 degree longitude global grid (180x360).\n* 89.5N - 89.5S, 0.5E - 359.5E.\nBecause oisst is an interpolated data, so it covers ocean and land. As a result, have to use land-ocean mask data at the same time, which is also available from the webstie.\nWe select SST from 1982 to 2016.\n\n\n\n1. Load basic libs",
"% matplotlib inline\n\nfrom pylab import *\nimport numpy as np\nimport datetime \n\nfrom netCDF4 import netcdftime\nfrom netCDF4 import Dataset as netcdf # netcdf4-python module\nfrom netcdftime import utime\n\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.basemap import Basemap\nimport matplotlib.dates as mdates\nfrom matplotlib.dates import MonthLocator, WeekdayLocator, DateFormatter\nimport matplotlib.ticker as ticker\n\nfrom matplotlib.pylab import rcParams\nrcParams['figure.figsize'] = 15, 6\n\nimport warnings\nwarnings.simplefilter('ignore')",
"2. Read SST data and pick variables\n2.1 Read SST",
"ncset= netcdf(r'data/sst.mnmean.nc')\nlons = ncset['lon'][:] \nlats = ncset['lat'][:] \nsst = ncset['sst'][1:421,:,:] # 1982-2016 to make it divisible by 12\nnctime = ncset['time'][1:421]\nt_unit = ncset['time'].units\n\ntry :\n t_cal =ncset['time'].calendar\nexcept AttributeError : # Attribute doesn't exist\n t_cal = u\"gregorian\" # or standard\n\nnt, nlat, nlon = sst.shape\nngrd = nlon*nlat",
"2.2 Parse time",
"utime = netcdftime.utime(t_unit, calendar = t_cal)\ndatevar = utime.num2date(nctime)\nprint(datevar.shape)\n\ndatevar[0:5]",
"2.3 Read mask (1=ocen, 0=land)",
"lmfile = 'data\\lsmask.nc'\nlmset = netcdf(lmfile)\nlsmask = lmset['mask'][0,:,:]\nlsmask = lsmask-1\n\nnum_repeats = nt\nlsm = np.stack([lsmask]*num_repeats,axis=-1).transpose((2,0,1))\nlsm.shape",
"2.3 Mask out Land",
"sst = np.ma.masked_array(sst, mask=lsm)",
"3. Trend Analysis\n3.1 Linear trend calculation",
"#import scipy.stats as stats\nsst_grd = sst.reshape((nt, ngrd), order='F') \nx = np.linspace(1,nt,nt)#.reshape((nt,1))\nsst_rate = np.empty((ngrd,1))\nsst_rate[:,:] = np.nan\n\nfor i in range(ngrd): \n y = sst_grd[:,i] \n if(not np.ma.is_masked(y)): \n z = np.polyfit(x, y, 1)\n sst_rate[i,0] = z[0]*120.0\n #slope, intercept, r_value, p_value, std_err = stats.linregress(x, sst_grd[:,i])\n #sst_rate[i,0] = slope*120.0 \n \nsst_rate = sst_rate.reshape((nlat,nlon), order='F')",
"3.2 Visualize SST trend",
"m = Basemap(projection='cyl', llcrnrlon=min(lons), llcrnrlat=min(lats),\n urcrnrlon=max(lons), urcrnrlat=max(lats))\n\nx, y = m(*np.meshgrid(lons, lats))\nclevs = np.linspace(-0.5, 0.5, 21)\ncs = m.contourf(x, y, sst_rate.squeeze(), clevs, cmap=plt.cm.RdBu_r)\nm.drawcoastlines()\n#m.fillcontinents(color='#000000',lake_color='#99ffff')\n\ncb = m.colorbar(cs)\ncb.set_label('SST Changing Rate ($^oC$/decade)', fontsize=12)\nplt.title('SST Changing Rate ($^oC$/decade)', fontsize=16)",
"4. Anomaly analysis\n4.1 Convert sst data into nyear x12 x lat x lon",
"sst_grd_ym = sst.reshape((12,nt/12, ngrd), order='F').transpose((1,0,2))\nsst_grd_ym.shape",
"4.2 Calculate seasonal cycle",
"sst_grd_clm = np.mean(sst_grd_ym, axis=0)\nsst_grd_clm.shape",
"4.3 Remove seasonal cycle",
"sst_grd_anom = (sst_grd_ym - sst_grd_clm).transpose((1,0,2)).reshape((nt, nlat, nlon), order='F')\nsst_grd_anom.shape",
"4.4 Calculate area-weights\n4.4.1 Make sure lat-lon grid direction",
"print(lats[0:12])\nprint(lons[0:12])",
"4.4.2 Calculate area-weights with cos(lats)",
"lonx, latx = np.meshgrid(lons, lats)\nweights = np.cos(latx * np.pi / 180.)\n\nprint(weights.shape)",
"4.4.3 Calculate valid grids total eareas for Global, NH and SH",
"sst_glb_avg = np.zeros(nt)\nsst_nh_avg = np.zeros(nt)\nsst_sh_avg = np.zeros(nt)\n\nfor it in np.arange(nt):\n sst_glb_avg[it] = np.ma.average(sst_grd_anom[it, :], weights=weights)\n sst_nh_avg[it] = np.ma.average(sst_grd_anom[it,0:nlat/2,:], weights=weights[0:nlat/2,:])\n sst_sh_avg[it] = np.ma.average(sst_grd_anom[it,nlat/2:nlat,:], weights=weights[nlat/2:nlat,:]) ",
"5. Visualize monthly SST anomaly time series",
"fig, ax = plt.subplots(1, 1 , figsize=(15,5))\n\nax.plot(datevar, sst_glb_avg, color='b', linewidth=2, label='GLB')\nax.plot(datevar, sst_nh_avg, color='r', linewidth=2, label='NH')\nax.plot(datevar, sst_sh_avg, color='g', linewidth=2, label='SH')\n\nax.axhline(0, linewidth=1, color='k')\nax.legend()\nax.set_title('Monthly SST Anomaly Time Series (1982 - 2016)', fontsize=16)\nax.set_xlabel('Month/Year #', fontsize=12)\nax.set_ylabel('$^oC$', fontsize=12)\nax.set_ylim(-0.6, 0.6)\nfig.set_figheight(9)\n\n# rotate and align the tick labels so they look better\nfig.autofmt_xdate()\n# use a more precise date string for the x axis locations in the toolbar\nax.fmt_xdata = mdates.DateFormatter('%Y')",
"References\nhttp://unidata.github.io/netcdf4-python/\nhttp://www.scipy.org/\nKalnay et al.,The NCEP/NCAR 40-year reanalysis project, Bull. Amer. Meteor. Soc., 77, 437-470, 1996.\nMatplotlib: A 2D Graphics Environment by J. D. Hunter In Computing in Science & Engineering, Vol. 9, No. 3. (2007), pp. 90-95\nReynolds, R.W., N.A. Rayner, T.M. Smith, D.C. Stokes, and W. Wang, 2002: An improved in situ and satellite SST analysis for climate. J. Climate, 15, 1609-1625."
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
spacedrabbit/PythonBootcamp | Advanced Modules/Collections Module.ipynb | mit | [
"from collections import Counter\n\nCounter('with a string')\n\nCounter('with a string'.split())\n\nc = Counter([1, 1, 1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 5, 6, 100, 'test'])\n\nc\n\nc.viewitems()\n\nfor k, v in c.iteritems(): \n print \"key:\", k, \"value:\", v\n\nc.most_common() # descending order of most common\n\nc.most_common(1)\n\nc.most_common(3)\n\nc\n\nlist(c)\n\nset(c)\n\ndict(c)\n\nc.most_common()[:-4-1:-1]",
"defaultdict\nThe whole point is that it will always return a value even if you query for a key that doesnt exist. That value you set ahead of time is called a factory object. That key also gets turned into a new key/value pair with the factory object",
"from collections import defaultdict\n\nd = {'k1':1}\n\nd[\"k1\"]\n\nd[\"k2\"] # this will get an error because the k2 key doesnt exist\n\nd = defaultdict(object)\n\nd['one'] # this doesn't exist, but in calling for it it will create a new element {'one' : object}\n\nd['two'] # same, this will add {'two' : object}\n\nfor k, v in d.items():\n print \"key:\", k, \"item:\", v\n\ne = defaultdict(lambda: 0) # lambda just returns 0 here\n\ne['four']\ne['twelve']\n\ndef error():\n return 'error'\n\nf = defaultdict(error) #returned item must be callable or None\n\nf['new']\n\nf.items()",
"orderedDict\ndictionary subclass that remembers the order items were added",
"d_norm = {}\nd_norm['a'] = 1\nd_norm['b'] = 2\nd_norm['c'] = 3\nd_norm['d'] = 4\nd_norm['e'] = 5\n\n# order isn't preserved since a dict is just a mapping\nfor k,v in d_norm.items():\n print k,v\n\n\nfrom collections import OrderedDict\n\nd_ordered = OrderedDict()\nd_ordered['a'] = 1\nd_ordered['b'] = 2\nd_ordered['c'] = 3\nd_ordered['d'] = 4\nd_ordered['e'] = 5\n\nfor k,v in d_ordered.items():\n print k, v\n\nfrom collections import namedtuple\n\n# this is kind of like creating a new class on the fly\n# the first parameter of a namedtuple is the name of the class/tuple type\n# the second parameter is a space-delimeted list of properties of the tuple\nDog = namedtuple('Dog','age breed name')\nsam = Dog(age=2, breed='Lab', name='Sammy')\n\nprint sam.age\nprint sam.breed\nprint sam.name\n\nCatzz = namedtuple('Cat', 'fur claws name')\nmittens = Catzz(fur='fuzzy', claws='sharp', name='Mittens')\nprint type(mittens)\nprint type(Catzz)\nprint mittens[0]\nprint mittens.claws\nprint mittens.name\nprint mittens.count('fuzzy')\nprint mittens.index('sharp')"
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n | site/en-snapshot/io/tutorials/orc.ipynb | apache-2.0 | [
"Copyright 2021 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Apache ORC Reader\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/io/tutorials/orc\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/io/blob/master/docs/tutorials/orc.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/io/blob/master/docs/tutorials/orc.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/io/docs/tutorials/orc.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nOverview\nApache ORC is a popular columnar storage format. tensorflow-io package provides a default implementation of reading Apache ORC files.\nSetup\nInstall required packages, and restart runtime",
"!pip install tensorflow-io\n\nimport tensorflow as tf\nimport tensorflow_io as tfio",
"Download a sample dataset file in ORC\nThe dataset you will use here is the Iris Data Set from UCI. The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. It has 4 attributes: (1) sepal length, (2) sepal width, (3) petal length, (4) petal width, and the last column contains the class label.",
"!curl -OL https://github.com/tensorflow/io/raw/master/tests/test_orc/iris.orc\n!ls -l iris.orc",
"Create a dataset from the file",
"dataset = tfio.IODataset.from_orc(\"iris.orc\", capacity=15).batch(1)",
"Examine the dataset:",
"for item in dataset.take(1):\n print(item)\n",
"Let's walk through an end-to-end example of tf.keras model training with ORC dataset based on iris dataset.\nData preprocessing\nConfigure which columns are features, and which column is label:",
"feature_cols = [\"sepal_length\", \"sepal_width\", \"petal_length\", \"petal_width\"]\nlabel_cols = [\"species\"]\n\n# select feature columns\nfeature_dataset = tfio.IODataset.from_orc(\"iris.orc\", columns=feature_cols)\n# select label columns\nlabel_dataset = tfio.IODataset.from_orc(\"iris.orc\", columns=label_cols)",
"A util function to map species to float numbers for model training:",
"vocab_init = tf.lookup.KeyValueTensorInitializer(\n keys=tf.constant([\"virginica\", \"versicolor\", \"setosa\"]),\n values=tf.constant([0, 1, 2], dtype=tf.int64))\nvocab_table = tf.lookup.StaticVocabularyTable(\n vocab_init,\n num_oov_buckets=4)\n\nlabel_dataset = label_dataset.map(vocab_table.lookup)\ndataset = tf.data.Dataset.zip((feature_dataset, label_dataset))\ndataset = dataset.batch(1)\n\ndef pack_features_vector(features, labels):\n \"\"\"Pack the features into a single array.\"\"\"\n features = tf.stack(list(features), axis=1)\n return features, labels\n\ndataset = dataset.map(pack_features_vector)",
"Build, compile and train the model\nFinally, you are ready to build the model and train it! You will build a 3 layer keras model to predict the class of the iris plant from the dataset you just processed.",
"model = tf.keras.Sequential(\n [\n tf.keras.layers.Dense(\n 10, activation=tf.nn.relu, input_shape=(4,)\n ),\n tf.keras.layers.Dense(10, activation=tf.nn.relu),\n tf.keras.layers.Dense(3),\n ]\n)\n\nmodel.compile(optimizer=\"adam\", loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=[\"accuracy\"])\nmodel.fit(dataset, epochs=5)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
phoebe-project/phoebe2-docs | 2.2/tutorials/requiv_crit_detached.ipynb | gpl-3.0 | [
"Critical Radii: Detached Systems\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).",
"!pip install -I \"phoebe>=2.2,<2.3\"",
"As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.",
"%matplotlib inline\n\nimport phoebe\nfrom phoebe import u # units\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()",
"Detached Systems\nDetached systems are the default case for default_binary. The requiv_max parameter is constrained to show the maximum value for requiv before the system will begin overflowing at periastron.",
"b['requiv_max@component@primary']\n\nb['requiv_max@constraint@primary']",
"We can see that the default system is well within this critical value by printing all radii and critical radii.",
"print(b.filter(qualifier='requiv*', context='component'))",
"If we increase 'requiv' past the critical point, we'll receive a warning from the logger and would get an error if attempting to call b.run_compute().",
"b['requiv@primary'] = 2.2\n\nprint(b.run_checks())",
"If the value of requiv was exactly the critical value, we'd have a semi-detached system. Alternatively, we could use a constraint to enforce that a system remains semi-detached.\nIf the value of requiv is over the critical value, the system is overflowing and will raise an error. If we intentionally want a contact system, we can explicitly create a contact system."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dipanjank/ml | text_classification_and_clustering/step_3_classification_of_full_dataset.ipynb | gpl-3.0 | [
"<h1 align=\"center\">Level and Group Classification on Train and Test Datasets</h1>\n\nWe have two classification tasks:\n\nPredict the level, which ranges from 1-16.\nPredict the group of a given text, given this mapping from levels to group:\nLevels 1-3 = Group A1\nLevels 4-6 = Group A2\nLevels 7-9 = Group B1\nLevels 10-12 = Group B2\nLevels 13-15 = Group C1\nLevels 16 = Group C2",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')\n\nimport pandas as pd \nimport numpy as np\nimport seaborn as sns",
"Here, we load the DataFrame for the full training set and repeat the classification approach identified in step 2.",
"%%time\nraw_input = pd.read_pickle('train_full.pkl')\n\nraw_input.head()\n\nraw_input.info()",
"Check for Class Imbalance",
"level_counts = raw_input.level.value_counts().sort_index()\ngroup_counts = raw_input.group.value_counts().sort_index()\n\n_, ax = plt.subplots(1, 2, figsize=(10, 5))\n\n_ = level_counts.plot(kind='bar', title='Feature Instances per Level', ax=ax[0], rot=0)\n_ = group_counts.plot(kind='bar', title='Feature Instances per Group', ax=ax[1], rot=0)\n\nplt.tight_layout()",
"Level Classification Based on Text\nHere we apply the same approach of converting text to bag-of-words features and then using a maximum entropy classifier. The difference is we are now running on the full dataset which is much larger. The optimizer now requires more steps to converge, so we change the max_iters attribute of LogisticRegression to 1000. We address the label imbalance by setting class_weight='balanced'.",
"import nltk\nnltk.download('stopwords')\nnltk.download('punkt')\nfrom nltk.corpus import stopwords\n\nen_stopwords = set(stopwords.words('english'))\nprint(en_stopwords)\n\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\n\n\ndef display_results(y, y_pred):\n \"\"\"Given some predications y_pred for a target label y, \n display the precision/recall/f1 score and the confusion matrix.\"\"\"\n \n report = classification_report(y_pred, y)\n print(report)\n\n level_values = y.unique()\n level_values.sort()\n cm = confusion_matrix(y_true=y, y_pred=y_pred.values, labels=level_values)\n cm = pd.DataFrame(index=level_values, columns=level_values, data=cm)\n\n fig, ax = plt.subplots(1, 1, figsize=(12, 10))\n ax = sns.heatmap(cm, annot=True, ax=ax, fmt='d')\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import cross_val_predict, StratifiedKFold\n\ndef build_pipeline():\n \"\"\"Return the combination of a Feature Extractor and a LogisticRegression model in a ``Pipeline``. \"\"\"\n \n counter = CountVectorizer(\n lowercase=True, \n stop_words=en_stopwords, \n ngram_range=(1, 1),\n min_df=5,\n max_df=0.4,\n binary=True)\n\n model = LogisticRegression(\n # maximize log-likelihood + square norm of parameters\n penalty='l2',\n # steps required for the L-BFGS optimizer to converge, found by trial and error\n max_iter=1000, \n # use softmax instead of one-vs-rest style classification\n multi_class='multinomial', \n # use L-BFGS optimizer \n solver='lbfgs',\n # This prints out a warning if the optimizer hasn't converged\n verbose=True, \n # to handle the class imbalance\n # automatically adjust weights inversely proportional to \n # class frequencies in the input data\n class_weight='balanced', \n random_state=4321)\n \n pipeline = make_pipeline(counter, model)\n return pipeline\n \n\ndef classify(input_df, target_label='level'):\n \"\"\"\n Build a classifier for the `target_label` column in the DataFrame `input_df` using the `text` column. \n Return the (labels, predicted_labels) tuple. \n Use a 10-fold Stratified K-fold cross-validator to generate the out-of-sample predictions.\"\"\"\n \n assert target_label in input_df.columns\n \n pipeline = build_pipeline() \n cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1234)\n\n X = input_df.text\n y = input_df.loc[:, target_label]\n y_pred = cross_val_predict(pipeline, X=X.values, y=y.values, cv=cv, n_jobs=10, verbose=2)\n y_pred = pd.Series(index=input_df.index.copy(), data=y_pred)\n\n return y.copy(), y_pred\n\n%%time\nlevels, levels_predicted = classify(raw_input, target_label='level')\n\ndisplay_results(levels, levels_predicted)",
"Group Classification Based on Text",
"%%time\ngroups, groups_predicted = classify(raw_input, target_label='group')\n\ndisplay_results(groups, groups_predicted)",
"Classification on Test Set\nFinally we report the performance on our classfier on both the leve and group classification tasks using the test dataset. For this we re-build the model using the hyperparameters used above, and train it using the entire train dataset.",
"from functools import lru_cache\n\n@lru_cache(maxsize=1)\ndef get_test_dataset():\n return pd.read_pickle('test.pkl')\n\n\ndef report_test_perf(train_df, target_label='level'):\n \"\"\"Produce classification report and confusion matrix on the test Dataset for a given ``target_label``.\"\"\"\n test_df = get_test_dataset()\n \n assert target_label in train_df.columns\n assert target_label in test_df.columns\n \n # Train the model using the entire training dataset\n pipeline = build_pipeline() \n X_train, y_train = train_df.text, train_df.loc[:, target_label]\n pipeline = pipeline.fit(X_train.values, y_train.values)\n \n # Generate predictions using test data\n X_test, y_test = test_df.text, test_df.loc[:, target_label]\n predicted = pipeline.predict(X_test.values)\n \n predicted = pd.Series(index=test_df.index, \n data=predicted, \n name='{}_pred'.format(target_label))\n \n display_results(y_test, predicted)\n ",
"Level Classification on Test Set",
"%% time\n\ntrain_df = raw_input\nreport_test_perf(train_df, 'level')",
"Group Classification on Test Set",
"%%time\n\nreport_test_perf(train_df, 'group')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mtasende/Machine-Learning-Nanodegree-Capstone | notebooks/prod/n08_simple_q_learner_fast_learner_full_training.ipynb | mit | [
"In this notebook a simple Q learner will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful.",
"# Basic imports\nimport os\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport datetime as dt\nimport scipy.optimize as spo\nimport sys\nfrom time import time\nfrom sklearn.metrics import r2_score, median_absolute_error\nfrom multiprocessing import Pool\n\n%matplotlib inline\n\n%pylab inline\npylab.rcParams['figure.figsize'] = (20.0, 10.0)\n\n%load_ext autoreload\n%autoreload 2\n\nsys.path.append('../../')\n\nimport recommender.simulator as sim\nfrom utils.analysis import value_eval\nfrom recommender.agent import Agent\nfrom functools import partial\n\nNUM_THREADS = 1\nLOOKBACK = -1 # 252*4 + 28\nSTARTING_DAYS_AHEAD = 252\nPOSSIBLE_FRACTIONS = [0.0, 1.0]\n\n# Get the data\nSYMBOL = 'SPY'\ntotal_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')\ndata_train_df = total_data_train_df[SYMBOL].unstack()\ntotal_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')\ndata_test_df = total_data_test_df[SYMBOL].unstack()\nif LOOKBACK == -1:\n total_data_in_df = total_data_train_df\n data_in_df = data_train_df\nelse:\n data_in_df = data_train_df.iloc[-LOOKBACK:]\n total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]\n\n# Create many agents\nindex = np.arange(NUM_THREADS).tolist()\nenv, num_states, num_actions = sim.initialize_env(total_data_in_df, \n SYMBOL, \n starting_days_ahead=STARTING_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS)\nagents = [Agent(num_states=num_states, \n num_actions=num_actions, \n random_actions_rate=0.98, \n random_actions_decrease=0.999,\n dyna_iterations=0,\n name='Agent_{}'.format(i)) for i in index]\n\ndef show_results(results_list, data_in_df, graph=False):\n for values in results_list:\n total_value = values.sum(axis=1)\n print('Sharpe ratio: {}\\nCum. Ret.: {}\\nAVG_DRET: {}\\nSTD_DRET: {}\\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))\n print('-'*100)\n initial_date = total_value.index[0]\n compare_results = data_in_df.loc[initial_date:, 'Close'].copy()\n compare_results.name = SYMBOL\n compare_results_df = pd.DataFrame(compare_results)\n compare_results_df['portfolio'] = total_value\n std_comp_df = compare_results_df / compare_results_df.iloc[0]\n if graph:\n plt.figure()\n std_comp_df.plot()",
"Let's show the symbols data, to see how good the recommender has to be.",
"print('Sharpe ratio: {}\\nCum. Ret.: {}\\nAVG_DRET: {}\\nSTD_DRET: {}\\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))\n\n# Simulate (with new envs, each time)\nn_epochs = 4\n\nfor i in range(n_epochs):\n tic = time()\n env.reset(STARTING_DAYS_AHEAD)\n results_list = sim.simulate_period(total_data_in_df, \n SYMBOL,\n agents[0],\n starting_days_ahead=STARTING_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n verbose=False,\n other_env=env)\n toc = time()\n print('Epoch: {}'.format(i))\n print('Elapsed time: {} seconds.'.format((toc-tic)))\n print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))\n show_results([results_list], data_in_df)\n\nenv.indicators['rsi'].scaler\n\ndata_in_df['Close'].isnull().sum()\n\nenv.reset(STARTING_DAYS_AHEAD)\nresults_list = sim.simulate_period(total_data_in_df, \n SYMBOL, agents[0], \n learn=False, \n starting_days_ahead=STARTING_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n verbose=False,\n other_env=env)\nshow_results([results_list], data_in_df, graph=True)",
"Let's run the trained agent, with the test set\nFirst a non-learning test: this scenario would be worse than what is possible (in fact, the q-learner can learn from past samples in the test set without compromising the causality).",
"TEST_DAYS_AHEAD = 20\n\nenv.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)\ntic = time()\nresults_list = sim.simulate_period(total_data_test_df, \n SYMBOL,\n agents[0],\n learn=False,\n starting_days_ahead=TEST_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n verbose=False,\n other_env=env)\ntoc = time()\nprint('Epoch: {}'.format(i))\nprint('Elapsed time: {} seconds.'.format((toc-tic)))\nprint('Random Actions Rate: {}'.format(agents[0].random_actions_rate))\nshow_results([results_list], data_test_df, graph=True)",
"And now a \"realistic\" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).",
"env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)\ntic = time()\nresults_list = sim.simulate_period(total_data_test_df, \n SYMBOL,\n agents[0],\n learn=True,\n starting_days_ahead=TEST_DAYS_AHEAD,\n possible_fractions=POSSIBLE_FRACTIONS,\n verbose=False,\n other_env=env)\ntoc = time()\nprint('Epoch: {}'.format(i))\nprint('Elapsed time: {} seconds.'.format((toc-tic)))\nprint('Random Actions Rate: {}'.format(agents[0].random_actions_rate))\nshow_results([results_list], data_test_df, graph=True)",
"What are the metrics for \"holding the position\"?",
"print('Sharpe ratio: {}\\nCum. Ret.: {}\\nAVG_DRET: {}\\nSTD_DRET: {}\\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))))\n\nimport pickle\nwith open('../../data/simple_q_learner_fast_learner_full_training.pkl', 'wb') as best_agent:\n pickle.dump(agents[0], best_agent)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cpatrickalves/simprev | notebooks/CalculoEstoqueMetodoProb.ipynb | gpl-3.0 | [
"Sugestão de metodologia para cálculo de Intervalos de Confiança\nConforme mencionado na LDO de 2018, o modelo oficial do governo se define como determinístico: \n“[...] ou seja, a partir da fixação de um conjunto de variáveis, o modelo determina de maneira única seus resultados [...]\nComo se trabalha com probabilidades, não necessariamente todos os eventos previstos podem acontecer. O modelo da LDO é determinístico por trabalhar apenas com médias (ex: média de pessoas que se aposentarão) e não considera diferentes cenários onde isso pode não ocorrer, ou seja, situações diferentes do comportamento médio.\nEste documento busca apresentar uma forma diferente de se projetar estoques considerando diferentes cenários onde nem sempre os segurados irão se aposentar.\nMétodo determinístico\nConsidere o seguinte cenário:\n* Segurados = 1000\n* Probabilidade de se aposentar = 0.35\nA forma mais simples e utilizada na LDO de se calcular o número de aposentados é:\nnum_ap = segurados x Probabilidade\nnum_ap = 1000 x 0.35 = * 350*\nMétodo probabilístico\nComo se trata de uma probabilidade, o evento de se aposentar pode ou não ocorrer para cada segurado.\nDiante disso, uma outra forma de se calcular o estoque de aposentados, seria calcular individualmente a probabilidade \nde cada segurado se aposentar.\nEsse cálculo individual seria feito através de números aleatórios, onde para cada segurado gera-se um número aleatório o qual é \ncomparado com a probabilidade de se aposentar, conforme apresentado abaixo:",
"import numpy as np\n\nn_segurados = 1000\nprob = 0.35\n\n# Lista que salva a quantidade de aposentados para cada cenário\nlista_nap = []\n\n# Lista de seeds -> 50 cenários\nseeds = range(0,50)\n\n# Executa 50 cenários (seeds) diferentes\nfor seed in seeds:\n\n # Define o seed para geração de números aleatórios\n np.random.seed(seed)\n # Gera 1000 números aletórios entre 0 e \n lista_na = np.random.rand(n_segurados)\n\n # Número de aposentados\n num_ap = 0\n\n # Determina quantos irão se aposentar para o cenário\n for na in lista_na:\n # calcula a probabilidade de cada um dos segurados se aposentar\n # Se o número aleatório for menor ou igual a probabilidade o segurado irá se aposentar\n if na <= prob: \n num_ap += 1\n\n lista_nap.append(num_ap)\n",
"Observem que diferente do método simples, para cada cenário (seed) ocorre uma situação diferente, ou seja, o número de segurados que se aposenta é diferente:",
"print(lista_nap)",
"Se calcularmos a média, temos um valor bem próximo ao do Método determinístico.",
"media = np.mean(lista_nap)\nprint('Média: {}'.format(media))",
"Porém, com diferentes cenários, podemos calcular medidas de dispersão, como o desvio padrão.",
"std = np.std(lista_nap)\nprint('Desvio padrão: {}'.format(std))",
"Visualizando em um gráfico:",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\nmedias = [350] * len(seeds)\n\nfig, ax = plt.subplots()\nax.plot(seeds, lista_nap, '--', linewidth=2, label='Método Probabilístico')\nax.plot(seeds, medias,label='Método Determinístico')\nax.set_ylabel('Número de Aposentados')\nax.set_xlabel('Seed')\nax.set_title('Cálculo do estoque usando diferentes métodos')\nax.legend()\nplt.show()",
"Aplicando o método probabilístico no cálculo dos estoques (onde as probabilidades são aplicadas), teremos para cada seed, uma projeção/resultado diferente.\nNa média o resultado vai ser o mesmo obtido pelo método original, porém teremos diversas curvas ou pontos para cada ano, o que nos permite calcular medidas de dispersão como desvio padrão e Intervalos de Confiança para os resultados de receita e despesa.",
"np.var(lista_nap)\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Z0m6ie/Zombie_Code | Data_Science_Course/Michigan Data Analysis Course/0 Introduction to Data Science in Python/Week4/Week+4.ipynb | mit | [
"You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.\n\nDistributions in Pandas",
"import pandas as pd\nimport numpy as np\n\nfor i in range(5):\n coinflip = np.random.binomial(1, 0.5)\n print(coinflip)\n\nnp.random.binomial(1000, 0.5)/1000\n\nchance_of_tornado = 0.01/100\nnp.random.binomial(100000, chance_of_tornado)\n\nchance_of_tornado = 0.01\n\ntornado_events = np.random.binomial(1, chance_of_tornado, 1000000)\n \ntwo_days_in_a_row = 0\nfor j in range(1,len(tornado_events)-1):\n if tornado_events[j]==1 and tornado_events[j-1]==1:\n two_days_in_a_row+=1\n\nprint('{} tornadoes back to back in {} years'.format(two_days_in_a_row, 1000000/365))\n\nnp.random.uniform(0, 1)\n\nnp.random.normal(0.75)",
"Formula for standard deviation\n$$\\sqrt{\\frac{1}{N} \\sum_{i=1}^N (x_i - \\overline{x})^2}$$",
"distribution = np.random.normal(0.75,size=1000)\n\nnp.sqrt(np.sum((np.mean(distribution)-distribution)**2)/len(distribution))\n\nnp.std(distribution)\n\nimport scipy.stats as stats\nstats.kurtosis(distribution)\n\nstats.skew(distribution)\n\nchi_squared_df2 = np.random.chisquare(10, size=10000)\nstats.skew(chi_squared_df2)\n\nchi_squared_df5 = np.random.chisquare(5, size=10000)\nstats.skew(chi_squared_df5)\n\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\n\noutput = plt.hist([chi_squared_df2,chi_squared_df5], bins=200, histtype='step', \n label=['2 degrees of freedom','5 degrees of freedom'])\nplt.legend(loc='upper right')\n",
"Hypothesis Testing",
"df = pd.read_csv('grades.csv')\n\ndf.head()\n\nlen(df)\n\nearly = df[df['assignment1_submission'] <= '2015-12-31']\nlate = df[df['assignment1_submission'] > '2015-12-31']\n\nearly.mean()\n\nlate.mean()\n\nfrom scipy import stats\nstats.ttest_ind?\n\nstats.ttest_ind(early['assignment1_grade'], late['assignment1_grade'])\n\nstats.ttest_ind(early['assignment2_grade'], late['assignment2_grade'])\n\nstats.ttest_ind(early['assignment3_grade'], late['assignment3_grade'])"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
koverholt/notebooks | dask/create-cluster.ipynb | bsd-3-clause | [
"Create a Dask cluster using Coiled\nFirst, we'll create a Dask cluster with Coiled:",
"import coiled\ncluster = coiled.Cluster(n_workers=10)",
"Let's point the distributed client to the Dask cluster on Coiled and output the link to the dashboard:",
"from dask.distributed import Client\nclient = Client(cluster)\nprint('Dashboard:', client.dashboard_link)",
"Now, we can connect to this Dask cluster from other notebooks and clients to run distributed computations in the cloud.\nThat was easy!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dsacademybr/PythonFundamentos | Cap04/Notebooks/DSA-Python-Cap04-Exercicios-Solucao.ipynb | gpl-3.0 | [
"<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 4</font>\nDownload: http://github.com/dsacademybr",
"# Versão da Linguagem Python\nfrom platform import python_version\nprint('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())",
"Versão da Linguagem Python\nfrom platform import python_version\nprint('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())\nExercícios",
"# Exercício 1 - Crie uma lista de 3 elementos e calcule a terceira potência de cada elemento.\nlist1 = [3,4,5]\nquadrado = [item**3 for item in list1] \nprint(quadrado)\n\n# Exercício 2 - Reescreva o código abaixo, usando a função map(). O resultado final deve ser o mesmo!\npalavras = 'A Data Science Academy oferce os melhores cursos de análise de dados do Brasil'.split()\nresultado = [[w.upper(), w.lower(), len(w)] for w in palavras]\nfor i in resultado:\n print (i)\n\nresultado = map(lambda w: [w.upper(), w.lower(), len(w)], palavras)\nfor i in resultado:\n print (i)\n\n# Exercício 3 - Calcule a matriz transposta da matriz abaixo.\n# Caso não saiba o que é matriz transposta, visite este link: https://pt.wikipedia.org/wiki/Matriz_transposta\n# Matriz transposta é um conceito fundamental na construção de redes neurais artificiais, base de sistemas de IA.\nmatrix = [[1, 2],[3,4],[5,6],[7,8]]\ntranspose = [[row[i] for row in matrix] for i in range(2)]\nprint(transpose)\n\n# Exercício 4 - Crie duas funções, uma para elevar um número ao quadrado e outra para elevar ao cubo. \n# Aplique as duas funções aos elementos da lista abaixo. \n# Obs: as duas funções devem ser aplicadas simultaneamente.\nlista = [0, 1, 2, 3, 4]\n\ndef square(x):\n return (x**2)\n \ndef cube(x):\n return (x**3)\n\nfuncs = [square, cube]\n\nfor i in lista:\n valor = map(lambda x: x(i), funcs)\n print(list((valor)))\n\n# Exercício 5 - Abaixo você encontra duas listas. Faça com que cada elemento da listaA seja elevado \n# ao elemento correspondente na listaB.\nlistaA = [2, 3, 4]\nlistaB = [10, 11, 12]\nlist(map(pow, listaA, listaB))\n\n# Exercício 6 - Considerando o range de valores abaixo, use a função filter() para retornar apenas os valores negativos.\nrange(-5, 5)\nlist(filter((lambda x: x < 0), range(-5,5)))\n\n# Exercício 7 - Usando a função filter(), encontre os valores que são comuns às duas listas abaixo.\na = [1,2,3,5,7,9]\nb = [2,3,5,6,7,8]\nprint (list(filter(lambda x: x in a, b)))\n\n# Exercício 8 - Considere o código abaixo. Obtenha o mesmo resultado usando o pacote time. \n# Não conhece o pacote time? Pesquise!\nimport datetime\nprint (datetime.datetime.now().strftime(\"%d/%m/%Y %H:%M\"))\n\nimport time\nprint (time.strftime(\"%d/%m/%Y %H:%M\"))\n\n# Exercício 9 - Considere os dois dicionários abaixo. \n# Crie um terceiro dicionário com as chaves do dicionário 1 e os valores do dicionário 2.\ndict1 = {'a':1,'b':2}\ndict2 = {'c':4,'d':5}\n\ndef trocaValores(d1, d2):\n dicTemp = {}\n \n for d1key, d2val in zip(d1,d2.values()):\n dicTemp[d1key] = d2val\n \n return dicTemp\n\ndict3 = trocaValores(dict1, dict2)\nprint(dict3)\n\n# Exercício 10 - Considere a lista abaixo e retorne apenas os elementos cujo índice for maior que 5.\nlista = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']\nfor indice, valor in enumerate(lista):\n if indice <= 5:\n continue\n else:\n print (valor)",
"Fim\nObrigado\nVisite o Blog da Data Science Academy - <a href=\"http://blog.dsacademy.com.br\">Blog DSA</a>"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io | 0.20/_downloads/05c57a644672d33707fd1264df7f5617/plot_time_frequency_global_field_power.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Explore event-related dynamics for specific frequency bands\nThe objective is to show you how to explore spectrally localized\neffects. For this purpose we adapt the method described in [1]_ and use it on\nthe somato dataset. The idea is to track the band-limited temporal evolution\nof spatial patterns by using the :term:Global Field Power(GFP) <GFP>.\nWe first bandpass filter the signals and then apply a Hilbert transform. To\nreveal oscillatory activity the evoked response is then subtracted from every\nsingle trial. Finally, we rectify the signals prior to averaging across trials\nby taking the magniude of the Hilbert.\nThen the :term:GFP is computed as described in [2], using the sum of the\nsquares but without normalization by the rank.\nBaselining is subsequently applied to make the :term:GFPs <GFP> comparable\nbetween frequencies.\nThe procedure is then repeated for each frequency band of interest and\nall :term:GFPs <GFP> are visualized. To estimate uncertainty, non-parametric\nconfidence intervals are computed as described in [3] across channels.\nThe advantage of this method over summarizing the Space x Time x Frequency\noutput of a Morlet Wavelet in frequency bands is relative speed and, more\nimportantly, the clear-cut comparability of the spectral decomposition (the\nsame type of filter is used across all bands).\nWe will use this dataset: somato-dataset\nReferences\n.. [1] Hari R. and Salmelin R. Human cortical oscillations: a neuromagnetic\n view through the skull (1997). Trends in Neuroscience 20 (1),\n pp. 44-49.\n.. [2] Engemann D. and Gramfort A. (2015) Automated model selection in\n covariance estimation and spatial whitening of MEG and EEG signals,\n vol. 108, 328-342, NeuroImage.\n.. [3] Efron B. and Hastie T. Computer Age Statistical Inference (2016).\n Cambrdige University Press, Chapter 11.2.",
"# Authors: Denis A. Engemann <denis.engemann@gmail.com>\n# Stefan Appelhoff <stefan.appelhoff@mailbox.org>\n#\n# License: BSD (3-clause)\nimport os.path as op\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import somato\nfrom mne.baseline import rescale\nfrom mne.stats import bootstrap_confidence_interval",
"Set parameters",
"data_path = somato.data_path()\nsubject = '01'\ntask = 'somato'\nraw_fname = op.join(data_path, 'sub-{}'.format(subject), 'meg',\n 'sub-{}_task-{}_meg.fif'.format(subject, task))\n\n# let's explore some frequency bands\niter_freqs = [\n ('Theta', 4, 7),\n ('Alpha', 8, 12),\n ('Beta', 13, 25),\n ('Gamma', 30, 45)\n]",
"We create average power time courses for each frequency band",
"# set epoching parameters\nevent_id, tmin, tmax = 1, -1., 3.\nbaseline = None\n\n# get the header to extract events\nraw = mne.io.read_raw_fif(raw_fname)\nevents = mne.find_events(raw, stim_channel='STI 014')\n\nfrequency_map = list()\n\nfor band, fmin, fmax in iter_freqs:\n # (re)load the data to save memory\n raw = mne.io.read_raw_fif(raw_fname, preload=True)\n raw.pick_types(meg='grad', eog=True) # we just look at gradiometers\n\n # bandpass filter\n raw.filter(fmin, fmax, n_jobs=1, # use more jobs to speed up.\n l_trans_bandwidth=1, # make sure filter params are the same\n h_trans_bandwidth=1) # in each band and skip \"auto\" option.\n\n # epoch\n epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=baseline,\n reject=dict(grad=4000e-13, eog=350e-6),\n preload=True)\n # remove evoked response\n epochs.subtract_evoked()\n\n # get analytic signal (envelope)\n epochs.apply_hilbert(envelope=True)\n frequency_map.append(((band, fmin, fmax), epochs.average()))\n del epochs\ndel raw",
"Now we can compute the Global Field Power\nWe can track the emergence of spatial patterns compared to baseline\nfor each frequency band, with a bootstrapped confidence interval.\nWe see dominant responses in the Alpha and Beta bands.",
"# Helper function for plotting spread\ndef stat_fun(x):\n \"\"\"Return sum of squares.\"\"\"\n return np.sum(x ** 2, axis=0)\n\n\n# Plot\nfig, axes = plt.subplots(4, 1, figsize=(10, 7), sharex=True, sharey=True)\ncolors = plt.get_cmap('winter_r')(np.linspace(0, 1, 4))\nfor ((freq_name, fmin, fmax), average), color, ax in zip(\n frequency_map, colors, axes.ravel()[::-1]):\n times = average.times * 1e3\n gfp = np.sum(average.data ** 2, axis=0)\n gfp = mne.baseline.rescale(gfp, times, baseline=(None, 0))\n ax.plot(times, gfp, label=freq_name, color=color, linewidth=2.5)\n ax.axhline(0, linestyle='--', color='grey', linewidth=2)\n ci_low, ci_up = bootstrap_confidence_interval(average.data, random_state=0,\n stat_fun=stat_fun)\n ci_low = rescale(ci_low, average.times, baseline=(None, 0))\n ci_up = rescale(ci_up, average.times, baseline=(None, 0))\n ax.fill_between(times, gfp + ci_up, gfp - ci_low, color=color, alpha=0.3)\n ax.grid(True)\n ax.set_ylabel('GFP')\n ax.annotate('%s (%d-%dHz)' % (freq_name, fmin, fmax),\n xy=(0.95, 0.8),\n horizontalalignment='right',\n xycoords='axes fraction')\n ax.set_xlim(-1000, 3000)\n\naxes.ravel()[-1].set_xlabel('Time [ms]')"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
daviddesancho/mdtraj | examples/WebGL-Viewer.ipynb | lgpl-2.1 | [
"Interactive WebGL trajectory widget\nNote: this feature requires a 'running' notebook, connected to a live kernel. It will not work with a staticly rendered display. For an introduction to the IPython interactive widget system and its capabilities, see this talk by Brian Granger\nhttp://player.vimeo.com/video/79832657#t=30m\nLet's start by just loading up a PDB file from the RCSB",
"from __future__ import print_function\nimport mdtraj as md\n\ntraj = md.load_pdb('http://www.rcsb.org/pdb/files/2M6K.pdb')\nprint(traj)",
"To enable these features, we first need to run enable_notebook to initialize\nthe required javascript.",
"from mdtraj.html import TrajectoryView, enable_notebook\nenable_notebook()",
"The WebGL viewer engine is called iview, and is introduced in the following paper: Li, Hongjian, et al. \"iview: an interactive WebGL visualizer for protein-ligand complex.\" BMC Bioinformatics 15.1 (2014): 56.",
"# Controls:\n# - default mouse to rotate.\n# - ctrl to translate\n# - shift to zoom (or use wheel)\n# - shift+ctrl to change the fog\n# - double click to toggle full screen\n\nwidget = TrajectoryView(traj, secondaryStructure='ribbon')\nwidget",
"We can even animate through the trajectory simply by updating the widget's frame attribute",
"import time\nfor i in range(traj.n_frames):\n widget.frame = i\n time.sleep(0.1)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ajhenrikson/phys202-2015-work | assignments/assignment03/NumpyEx03.ipynb | mit | [
"Numpy Exercise 3\nImports",
"import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport antipackage\nimport github.ellisonbg.misc.vizarray as va",
"Geometric Brownian motion\nHere is a function that produces standard Brownian motion using NumPy. This is also known as a Wiener Process.",
"def brownian(maxt, n):\n \"\"\"Return one realization of a Brownian (Wiener) process with n steps and a max time of t.\"\"\"\n t = np.linspace(0.0,maxt,n)\n h = t[1]-t[0]\n Z = np.random.normal(0.0,1.0,n-1)\n dW = np.sqrt(h)*Z\n W = np.zeros(n)\n W[1:] = dW.cumsum()\n return t, W",
"Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.",
"t,W=brownian(1,1000)\n\nassert isinstance(t, np.ndarray)\nassert isinstance(W, np.ndarray)\nassert t.dtype==np.dtype(float)\nassert W.dtype==np.dtype(float)\nassert len(t)==len(W)==1000",
"Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.",
"plt.plot(t,W)\nplt.xlabel(\"t\")\nplt.ylabel(\"W(t)\")\n\nassert True # this is for grading",
"Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.",
"dW=np.diff(W)\nprint dW.mean()\nprint dW.std()\n\nassert len(dW)==len(W)-1\nassert dW.dtype==np.dtype(float)",
"Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation:\n$$\nX(t) = X_0 e^{((\\mu - \\sigma^2/2)t + \\sigma W(t))}\n$$\nUse Numpy ufuncs and no loops in your function.",
"def geo_brownian(t, W, X0, mu, sigma):\n \"Return X(t) for geometric brownian motion with drift mu, volatility sigma.\"\"\"\n x=(X0)*np.exp((mu-(sigma**2)/2)*(t)+sigma*(W))\n return x,t\n\nassert True # leave this for grading",
"Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\\mu=0.5$ and $\\sigma=0.3$ with the Wiener process you computed above.\nVisualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes.",
"x,t=geo_brownian(t,W, 1.0, .5, .3) #plotting with variables\nplt.plot(t,x)\nplt.xlabel(\"t\")\nplt.ylabel(\"X(t)\")\n\nassert True # leave this for grading"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kgourgou/stochastic-simulations-class | ipython_notebooks/langevin.ipynb | mit | [
"# Importing some python libraries.\nimport numpy as np\nfrom numpy.random import randn\nimport matplotlib.pyplot as pl\nimport seaborn as sns\n%matplotlib inline\n# Fixing figure sizes\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 10,5\n\nimport sympy as sp\n\npale_red = sns.xkcd_rgb['pale red'] \ndenim_blue = sns.xkcd_rgb['denim blue']",
"Overdamped Langevin Equation\nThe overdamped Langevin equation is defined as\n$$\ndX_t=-\\nabla U(x)dt+\\sigma dB_t,\n$$\nfor some potential $U$. $B_t$ represents Brownian motion and $\\sigma$ controls the \"strength\" of the random variations. \nIn this example, we will work with a specific potential \n$$\nU(x)=(b-a/2)(x^2-1)^2+a/2\\cdot (x+1).\n$$\nThis is a double-well potential, as can be seen in the following plot.",
"a = -1;\nb = 1;\n\ndef U(x,a=-1,b=1):\n return (b-a/2)*(x**2-1)**2+a/2*(x+1)\n\nx = np.linspace(-1.5,1.5)\npl.plot(x,U(x),color=pale_red,linewidth=5)\npl.title('The potential $U(x)$ with $a=-1$ and $b=1$',fontsize=20)",
"Deterministic System\nLet's write the equation down, given the specific potential. First, if $\\sigma=0$,then the equation is an ODE\n$$\n\\begin{align}\n\\frac{dX_t}{dt}&=4c(1-x^2)x-a/2,\\c&=(b-a/2).\n\\end{align}\n$$",
"# Defining the derivative of the potential\ndef Uprime(x,t,a=-1,b=1):\n return 4*(b-a/2.0)*x*(1-x**2)-a/2.0",
"By numerically solving $U'(x)=0$, we can find that there are three equilibrium points for the system. Approximately, those are \n$$\n\\begin{align}\nx_1&=-0.955393,\\\nx_2&=-0.083924,\\\nx_3&=1.03932.\n\\end{align}\n$$",
"from scipy.integrate import odeint # importing a solver\n\nt = np.linspace(0,10,100)\n\nxinit = np.array([2.0,1.0,-0.08,-0.9,-2])\n\nwith sns.cubehelix_palette(3):\n for i in xrange(5):\n sol = odeint(Uprime, xinit[i], t)\n pl.plot(t,sol,alpha=0.8,linewidth=10)\n \npl.title('Five different solutions of the ODE system',fontsize=20)\npl.xlabel('t',fontsize=20)",
"As we can see, out of the three equilibrium solutions of the system, the two are stable and the one in the middle is unstable. We will use this information for comparisons with the stochastic system.\nStochastic System\nLet us now assume that $\\sigma>0$. In that case, we have an SDE, which we can solve with Euler-Maruyama. The scheme shall be : \n$$\nX_{n+1}=X_{n}+f(X_n)\\Delta t+\\sigma \\sqrt{\\Delta t}\\cdot z,\n$$\nwhere $z$ is a standard normal distribution.",
"def EM(xinit,sigma,T,Dt=0.1,a=-1,b=1):\n '''\n Returns the solution of the Langevin equation with \n potential U. \n \n Arguments\n =========\n xinit : real, initial condition.\n sigma : real, standard deviation parameter, used in generating brownian motion.\n Dt : real, stepsize of the Euler-Maruyama.\n T : real, final time to reach.\n \n '''\n \n n = int(T/Dt) # number of steps to reach T\n X = np.zeros(n)\n z = sigma*randn(n)\n \n X[0] = xinit # Initial condition\n \n # EM method \n for i in xrange(1,n):\n X[i] = X[i-1] + Dt* Uprime(X[i-1],a,b) + np.sqrt(Dt)*z[i-1]\n \n return X\n ",
"Now we can reproduce the picture from the deterministic case, but this time with the extra stochastic part. When $\\sigma$ is small, we see a similar picture with previously as the deterministic dynamics overpower the stochasticity.",
"with sns.cubehelix_palette(3):\n for i in xrange(5):\n path = EM(xinit[i],sigma=0.1,T=10)\n pl.plot(t,path,alpha=0.7,linewidth=10)\n\npl.title('Trajectories of the Langevin SDE, $\\sigma=0.1$',fontsize=20)\npl.xlabel('t',fontsize=20)\n\nwith sns.cubehelix_palette(5):\n for i in xrange(5):\n path = EM(xinit[i],sigma=0.4,T=10)\n pl.plot(t,path,alpha=0.9,linewidth=5)\n\npl.title('Trajectories of the Langevin SDE, $\\sigma=0.4$',fontsize=20)\npl.xlabel('t',fontsize=20)",
"Changing the $\\sigma$ from $0.1$ to $0.4$ provides random \"kicks\" that are hard enough for the solutions to jump from one equilibrium to the other.",
"with sns.cubehelix_palette(5):\n for i in xrange(5):\n path = EM(xinit[i],sigma=1,T=10)\n pl.plot(t,path,alpha=0.7,linewidth=5)\n\npl.title('Trajectories of the Langevin SDE, $\\sigma=1$',fontsize=20)\npl.xlabel('t',fontsize=20)\n\n",
"With $\\sigma=1$, the trajectories can now move freely between the equilibrium points. We can still see some kind of attraction though to the area around them.\n\nLet us attempt to set $\\sigma$ to a larger number and see what happens.",
"with sns.cubehelix_palette(5):\n for i in xrange(5):\n path = EM(xinit[i],sigma=2.4,T=10)\n pl.plot(t,path,linewidth=5)\n\npl.title('Trajectories of the Langevin SDE, $\\sigma=1$',fontsize=20)\npl.xlabel('t',fontsize=20)",
"Now the kicks are strong enough that the attractiveness (or repulsiveness) of the stationary points looks completely irrelevant. The dynamics are all about the stochastic part.\nChanging the properties of the potential\nWe now fix $\\sigma=1$ and look at the paths for $a\\in [0,b]$. We will start from $X_0=1.3$.",
"b = 1\narange = np.linspace(0,b,3)\n\n\nwith sns.cubehelix_palette(3):\n for aval in arange: \n path = EM(1.3,sigma=1, T=10, a=aval)\n pl.plot(t,path,linewidth=4)\n \npl.title('With $a=0,0.5,1$',fontsize=20)\npl.xlabel('$t$',fontsize=20)\n\nb = 1\narange = np.linspace(0,b,3)\n\n\nwith sns.cubehelix_palette(3):\n for aval in arange: \n pl.plot(x,U(x,a=aval),linewidth=4)\n \npl.title('With $a=0,0.5,1$',fontsize=20)\npl.xlabel('$x$',fontsize=20)",
"Above we have the potentials corresponding to the paths. To make things more concrete, here is the path superimposed on the potential.",
"def plotOnPoten(a,T=2):\n x = np.linspace(-2,2)\n pl.plot(x,U(x,a=0.5),linewidth=1,color='black')\n path = EM(1.3,sigma=1, T=T, a=aval)\n\n pl.plot(path,U(path,a=0.5),linewidth=4,alpha=0.7,color=pale_red)\n pl.title('For $a='+str(a)+'$.',fontsize=20)\n\nplotOnPoten(1,T=10)"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
modin-project/modin | examples/tutorial/jupyter/execution/omnisci_on_native/local/exercise_1.ipynb | apache-2.0 | [
"<center><h2>Scale your pandas workflows by changing one line of code</h2>\nExercise 1: How to use Modin\nGOAL: Learn how to import Modin to accelerate and scale pandas workflows.\nModin is a drop-in replacement for pandas that distributes the computation \nacross all of the cores in your machine or in a cluster.\nIn practical terms, this means that you can continue using the same pandas scripts\nas before and expect the behavior and results to be the same. The only thing that needs\nto change is the import statement. Normally, you would change:\npython\nimport pandas as pd\nto:\npython\nimport modin.pandas as pd\nChanging this line of code will allow you to use all of the cores in your machine to do computation on your data. One of the major performance bottlenecks of pandas is that it only uses a single core for any given computation. Modin exposes an API that is identical to pandas, allowing you to continue interacting with your data as you would with pandas. There are no additional commands required to use Modin locally. Partitioning, scheduling, data transfer, and other related concerns are all handled by Modin under the hood.\n<p style=\"text-align:left;\">\n <h1>pandas on a multicore laptop\n <span style=\"float:right;\">\n Modin on a multicore laptop\n </span>\n\n<div>\n<img align=\"left\" src=\"../../../img/pandas_multicore.png\"><img src=\"../../../img/modin_multicore.png\">\n</div>\n\n### Concept for exercise: Dataframe constructor\n\nOften when playing around in pandas, it is useful to create a DataFrame with the constructor. That is where we will start.\n\n```python\nimport numpy as np\nimport pandas as pd\n\nframe_data = np.random.randint(0, 100, size=(2**10, 2**5))\ndf = pd.DataFrame(frame_data)\n```\n\nWhen creating a dataframe from a non-distributed object, it will take extra time to partition the data for Modin. When this is happening, you will see this message:\n\n```\nUserWarning: Distributing <class 'numpy.ndarray'> object. This may take some time.\n```\n\nModin uses Ray as an execution engine by default. Since this notebook is related to OmniSci, let's run examples on the OmniSci engine. For reaching this, we need to activate OmniSci either via Modin config or Modin environment variable. See more in [OmniSci usage](https://github.com/modin-project/modin/blob/master/docs/development/using_omnisci.rst) section.",
"import modin.config as cfg\ncfg.StorageFormat.put('omnisci')\n\n# Note: Importing notebooks dependencies. Do not change this code!\nimport numpy as np\nimport pandas\nimport sys\nimport modin\n\npandas.__version__\n\nmodin.__version__\n\n# Implement your answer here. You are also free to play with the size\n# and shape of the DataFrame, but beware of exceeding your memory!\n\nimport pandas as pd\n\nframe_data = np.random.randint(0, 100, size=(2**10, 2**5))\ndf = pd.DataFrame(frame_data)\n\n# ***** Do not change the code below! It verifies that \n# ***** the exercise has been done correctly. *****\n\ntry:\n assert df is not None\n assert frame_data is not None\n assert isinstance(frame_data, np.ndarray)\nexcept:\n raise AssertionError(\"Don't change too much of the original code!\")\nassert \"modin.pandas\" in sys.modules, \"Not quite correct. Remember the single line of code change (See above)\"\n\nimport modin.pandas\nassert pd == modin.pandas, \"Remember the single line of code change (See above)\"\nassert hasattr(df, \"_query_compiler\"), \"Make sure that `df` is a modin.pandas DataFrame.\"\n\nprint(\"Success! You only need to change one line of code!\")",
"Now that we have created a toy example for playing around with the DataFrame, let's print it out in different ways.\nConcept for Exercise: Data Interaction and Printing\nWhen interacting with data, it is very imporant to look at different parts of the data (e.g. df.head()). Here we will show that you can print the modin.pandas DataFrame in the same ways you would pandas.",
"# When working with non-string column labels it could happen that some backend logic would try to insert a column \n# with a string name to the frame, so we do add_prefix()\ndf = df.add_prefix(\"col\")\n\n# Print the first 10 lines.\ndf.head(10)\n\ndf.count()",
"Please move on to Exercise 2 when you are ready"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
abhipr1/DATA_SCIENCE_INTENSIVE | Week_1/DATA_WRANGLING/WORKING_WITH_DATA_IN_FILES/data_wrangling_xml/data_wrangling_xml/.ipynb_checkpoints/sliderule_dsi_xml_exercise-checkpoint.ipynb | apache-2.0 | [
"XML example and exercise\n\n\nstudy examples of accessing nodes in XML tree structure \nwork on exercise to be completed and submitted\n\n\n\nreference: https://docs.python.org/2.7/library/xml.etree.elementtree.html\ndata source: http://www.dbis.informatik.uni-goettingen.de/Mondial",
"from xml.etree import ElementTree as ET",
"XML example\n\nfor details about tree traversal and iterators, see https://docs.python.org/2.7/library/xml.etree.elementtree.html",
"document_tree = ET.parse( './data/mondial_database_less.xml' )\n\n# print names of all countries\nfor child in document_tree.getroot():\n print (child.find('name').text)\n\n# print names of all countries and their cities\nfor element in document_tree.iterfind('country'):\n print ('* ' + element.find('name').text + ':', end=''),\n capitals_string = ''\n for subelement in element.getiterator('city'):\n capitals_string += subelement.find('name').text + ', '\n print (capitals_string[:-2])",
"XML exercise\nUsing data in 'data/mondial_database.xml', the examples above, and refering to https://docs.python.org/2.7/library/xml.etree.elementtree.html, find\n\n10 countries with the lowest infant mortality rates\n10 cities with the largest population\n10 ethnic groups with the largest overall populations (sum of best/latest estimates over all countries)\nname and country of a) longest river, b) largest lake and c) airport at highest elevation",
"document = ET.parse( './data/mondial_database.xml' )\n\n# print child and attributes\n#for child in document.getroot():\n# print (child.tag, child.attrib)\n\nimport pandas as pd\n\n# Create a list of country and their Infant Mortality Rate \ncountry_imr=[]\nfor country in document.getroot().findall('country'):\n name = country.find('name').text\n infant_mortality_rate = country.find('infant_mortality')\n if infant_mortality_rate is not None:\n infant_mortality_rate=infant_mortality_rate.text\n else :\n infant_mortality_rate = -1\n country_imr.append((name, (float)(infant_mortality_rate)))",
"10 countries with the lowest infant mortality rates",
"df = pd.DataFrame(country_imr, columns=['Country', 'Infant_Mortality_Rate'])\ndf_unknown_removed = df[df.Infant_Mortality_Rate != -1] \ndf_unknown_removed.set_index('Infant_Mortality_Rate').sort().head(10)\n\ncity_population=[]\nfor country in document.iterfind('country'):\n for state in country.iterfind('province'):\n for city in state.iterfind('city'):\n try:\n city_population.append((city.find('name').text, float(city.find('population').text)))\n except:\n next\n for city in country.iterfind('city'):\n try:\n city_population.append((city.find('name').text, float(city.find('population').text)))\n except:\n next",
"10 cities with the largest population",
"df = pd.DataFrame(city_population, columns=['City', 'Population'])\n#df.info()\ndf.sort_index(by='Population', ascending=False).head(10)\n\nethnic_population={}\ncountry_population={}\nfor country in document.iterfind('country'):\n try:\n country_population[country.find('name').text]= float(country.find('population').text)\n except:\n next\n for state in country.iterfind('province' or 'state'):\n try:\n country_population[country.find('name').text] += float(state.find('population').text)\n except:\n next\n for city in state.iterfind('city'):\n try:\n country_population[country.find('name').text] += float(city.find('population').text)\n except:\n next\n\nfor country in document.iterfind('country'):\n for ethnicgroup in country.iterfind('ethnicgroup'):\n try:\n if ethnicgroup.text in ethnic_population:\n ethnic_population[ethnicgroup.text] += country_population[country.find('name').text]*float(ethnicgroup.get('percentage'))/100\n else:\n ethnic_population[ethnicgroup.text] = country_population[country.find('name').text]*float(ethnicgroup.get('percentage'))/100\n except:\n next",
"10 ethnic groups with the largest overall populations (sum of best/latest estimates over all countries)",
"pd.DataFrame(sorted(ethnic_population.items(), key=lambda x:x[1], reverse=True)[:10], columns=['Ethnic_Groups', 'Population'])\n\nrivers_list=[]\nrivers_df = pd.DataFrame()\nfor rivers in document.iterfind('river'):\n try:\n rivers_list.append({'name':rivers.find('name').text, 'length':rivers.find('length').text, 'country':rivers.find('located').attrib['country']})\n except:\n next\nrivers_list"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
4dsolutions/Python5 | About_Decorators.ipynb | mit | [
"Decorators\n\nI use UFO as a decorator not because I want or need people to believe in UFOs, but because the science fiction idea of being abducted is you stay the same but for something lasting the UFO did to you.\nIn the case of decorator syntax that's useful because to \"decorate\" (\"abduct\") is to \n\nfeed a function to a callable (the decorator), and \nkeep that function's name for whatever gets returned\n\nSince function type objects are just objects with a __dict__, we're free to apply arbitrary attributes to them. Lets have the UFO decorator decorate any function with a new attribute named 'abducted'.",
"def UFO(f):\n setattr(f, 'abducted', True) # f.abducted = True same thing\n return f\n\n@UFO\ndef addS(s):\n return s + \"S\"\n\n@UFO\ndef addX(s):\n return s + \"X\"\n\nhasattr(addX, 'abducted')\n\nif hasattr(addS, 'abducted'):\n print(\"The value of abducted for addS is:\", addS.abducted)",
"In the example below, the Composer class \"decorates\" the two following functions, meaning the Composer instances become the new proxies for the functions they swallowed. The original functions are still on tap, through __call__. \nFurthermore, when two such Composer types are multiplied, their internal functions get composed together, into a new internalized function.",
"class Composer:\n \n def __init__(self, f):\n self.func = f\n \n def __call__(self, s):\n return self.func(s)\n \n def __mul__(self, other):\n def new(s):\n return self(other(s))\n return Composer(new)\n\n@Composer\ndef F(x):\n return x * x\n\n@Composer\ndef G(x):\n return x + 2",
"Below is simple composition of functions. This is valid Python even if the Composer decorator is left out, i.e. function type objects would normally have no problem composing with one another in this way. \nTo compose F and G means going F(G(x)) for some x.",
"F(G(F(F(F(G(10))))))",
"Thanks to Compose, the \"class decorator\" (a decorator that happens to be a class), our F and G are actually Compose type objects, so have this additional ability to compose into other Compose type objects. We don't need an argument until we call the final H.",
"H = F*G*F*F*F*G # the functions themselves may be multiplied\nH(10)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sebastiandres/mat281 | laboratorios/lab01-PythonNumerico/PythonNumerico.ipynb | cc0-1.0 | [
"<header class=\"w3-container w3-teal\">\n<img src=\"images/utfsm.png\" alt=\"\" height=\"100px\" align=\"left\"/>\n<img src=\"images/mat.png\" alt=\"\" height=\"100px\" align=\"right\"/>\n</header>\n<br/><br/><br/><br/><br/>\nMAT281\nAplicaciones de la Matemática en la Ingeniería\nLaboratorio 1: Python Numérico\nINSTRUCCIONES\n\nAnoten su nombre y rol en la celda siguiente.\nDesarrollen los problemas de manera secuencial.\nGuarden constantemente con Ctr-S para evitar sorpresas.\nReemplacen en las celdas de código donde diga #FIX_ME por el código correspondiente.\nEjecuten cada celda de código utilizando Ctr-Enter\nPueden utilizar tabulación para obtener ayuda de ipython notebook.",
"#Configuracion para recargar módulos y librerías cada vez \n%reload_ext autoreload\n%autoreload 2\n \n%matplotlib inline\n \nfrom mat281_code.lab import *\nfrom IPython.core.display import HTML\nfrom matplotlib import pyplot as plt\n\nHTML(open(\"style/mat281.css\", \"r\").read())\n\nalumno_1 = (r\"Sebastián Flores\", \"2004001-7\") # FIX ME\nalumno_2 = (r\"María José Vargas\", \"2004007-8\") # FIX ME\n\nHTML(greetings(alumno_1, alumno_2))",
"Contenido\n\nOverview de Numpy y Scipy\nLibrería Numpy\nArreglos vs Matrices\nAxis\nFunciones basicas.\nInput y Output\nTips\n\n1. Overview de numpy y scipy\n¿Cual es la diferencia entre numpy y scipy?\nIn an ideal world, NumPy would contain nothing but the array data type and the most basic operations: indexing, sorting, reshaping, basic elementwise functions, et cetera. All numerical code would reside in SciPy. However, one of NumPy’s important goals is compatibility, so NumPy tries to retain all features supported by either of its predecessors. Thus NumPy contains some linear algebra functions, even though these more properly belong in SciPy. In any case, SciPy contains more fully-featured versions of the linear algebra modules, as well as many other numerical algorithms. If you are doing scientific computing with python, you should probably install both NumPy and SciPy. Most new features belong in SciPy rather than NumPy.\nLink stackoverflow\nPor ser python un lenguaje open-source, existen miles de paquetes disponibles creados por individuos o comunidades. Éstos pueden estar disponibles en un repositorio como github o bitbucket, o bien estar disponibles en el repositorio oficial de python: pypi. En un inicio, cuando no existía una librerías de cálculo científico oficial, varios candidatos proponían soluciones:\n* numpy: tenía una excelente representación de vectores, matrices y arreglos, implementados en C y llamados fácilmente desde python\n* scipy: proponía linkear a librerías ya elaboradas de calculo científico de alto rendimiento en C o fortran, permitiendo ejecutar rápidamente desde python.\nAmbos projectos fueron creciendo en complejidad y alcance, y en vez de competir, decidieron dividir tareas y unificar fuerzas para proponer una plataforma de cálculo científico que reemplazara completamente otros programas.\n\nnumpy: Corresponde a lo relacionado con la estructura de los datos (arrays densos y sparse, matrices, constructores especiales, lectura de datos regulares, etc.), pero no las operaciones en sí. Por razones históricas y de compatibilidad, tiene algunos algoritmos, pero en realidad resulta más consistente utilizar los algoritmos de scipy.\nscipy: Corresponde a la implementación numérica de diversos algoritmos de corte científicos: algebra lineal, estadística, ecuaciones diferenciales ordinarias, interpolacion, integracion, optimización, análisis de señales, entre otros.\n\nOBSERVACIÓN IMPORTANTE:\nLas matrices y arrays de numpy deben contener variables con el mismo tipo de datos: sólo enteros, sólo flotantes, sólo complejos, sólo booleanos o sólo strings. La uniformicidad de los datos es lo que permite acelerar los cálculos con implementaciones en C a bajo nivel.\n2. Librería Numpy\nSiempre importaremos la librería numpy de la siguiente forma:\nimport numpy as np\n\nTodas las funciones y módulos de numpy quedan a nuestro alcance a 3 carácteres de distancia:\nnp.array([1,4,9,16])\nnp.linspace(0.,1.,100)\n\nEvite a todo costo utilizar lo siguiente:\nfrom numpy import *",
"import numpy as np\nprint np.version.version # Si alguna vez tienen problemas, verifiquen su version de numpy",
"Importante\nIpython notebook es interactivo y permite la utilización de tabulación para ofrecer sugerencias o enseñar ayuda (no solo para numpy, sino que para cualquier código en python).\nPruebe los siguientes ejemplos:",
"# Presionar tabulacción con el cursor despues de np.arr\nnp.arr\n\n# Presionar Ctr-Enter para obtener la documentacion de la funcion np.array usando \"?\"\nnp.array?\n\n# Presionar Ctr-Enter\n%who\n\nx = 10\n%who",
"2. Librería Numpy\n2.1 Array vs Matrix\nPor defecto, la gran mayoria de las funciones de numpy y de scipy asumen que se les pasará un objeto de tipo array. \nVeremos las diferencias entre los objetos array y matrix, pero recuerden utilizar array mientras sea posible.\nMatrix\nUna matrix de numpy se comporta exactamente como esperaríamos de una matriz:\nPros:\n\nMultiplicación utiliza el signo * como es esperable.\nResulta natural si lo único que haremos es algebra lineal.\n\nContras:\n\nTodas las matrices deben estar completamente alineadas para poder operar correctamente.\nOperaciones elementwise son mas dificiles de definir/acceder.\nEstán exclusivamente definidas en 2D: un vector fila o un vector columna siguen siendo 2D.",
"# Operaciones con np.matrix\nA = np.matrix([[1,2],[3,4]])\nB = np.matrix([[1, 1],[0,1]], dtype=float)\nx = np.matrix([[1],[2]])\nprint \"A =\\n\", A\nprint \"B =\\n\", B\nprint \"x =\\n\", x\n\nprint \"A+B =\\n\", A+B\nprint \"A*B =\\n\", A*B\nprint \"A*x =\\n\", A*x\nprint \"A*A = A^2 =\\n\", A**2\nprint \"x.T*A =\\n\", x.T * A",
"2.1 Array vs Matrix\nArray\nUn array de numpy es simplemente un \"contenedor\" multidimensional.\nPros:\n\nEs multidimensional: 1D, 2D, 3D, ...\nResulta consistente: todas las operaciones son element-wise a menos que se utilice una función específica.\n\nContras:\n\nMultiplicación maticial utiliza la función dot()",
"# Operaciones con np.matrix\nA = np.array([[1,2],[3,4]])\nB = np.array([[1, 1],[0,1]], dtype=float)\nx = np.array([1,2]) # No hay necesidad de definir como fila o columna!\nprint \"A =\\n\", A\nprint \"B =\\n\", B\nprint \"x =\\n\", x\n\nprint \"A+B =\\n\", A+B\nprint \"AoB = (multiplicacion elementwise) \\n\", A*B\nprint \"A*B = (multiplicacion matricial, v1) \\n\", np.dot(A,B)\nprint \"A*B = (multiplicacion matricial, v2) \\n\", A.dot(B)\nprint \"A*A = A^2 = (potencia matricial)\\n\", np.linalg.matrix_power(A,2)\nprint \"AoA = (potencia elementwise)\\n\", A**2\nprint \"A*x =\\n\", np.dot(A,x)\nprint \"x.T*A =\\n\", np.dot(x,A) # No es necesario transponer.",
"Desafío 1: matrix vs array\nSean\n$$ \nA = \\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1\\end{pmatrix}\n$$\ny \n$$ \nB = \\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1\\end{pmatrix}\n$$\n\nCree las matrices utilizando np.matrix y multipliquelas en el sentido matricial. Imprima el resultado.\nCree las matrices utilizando np.array y multipliquelas en el sentido matricial. Imprima el resultado.",
"# 1: Utilizando matrix\nA = np.matrix([]) # FIX ME\nB = np.matrix([]) # FIX ME\nprint \"np.matrix, AxB=\\n\", #FIX ME\n\n# 2: Utilizando arrays\nA = np.array([]) # FIX ME\nB = np.array([]) # FIX ME\nprint \"np.matrix, AxB=\\n\", #FIX ME",
"2.2 Indexación y Slicing\nLos arrays se indexan de la forma \"tradicional\".\n\nPara un array unidimensional: sólo tiene una indexacion. ¡No es ni fila ni columna!\nPara un array bidimensional: primera componente son las filas, segunda componente son las columnas. Notación respeta por tanto la convención tradicional de matrices.\nPara un array tridimensional: primera componente son las filas, segunda componente son las columnas, tercera componente la siguiente dimension.\n\n<img src=\"images/anatomyarray.png\" alt=\"\" height=\"100px\" align=\"left\"/>\nRespecto a los índices de los elementos, éstos comienzan en cero, como en C. Además, es posible utilizar índices negativos, que como convención asignan -1 al último elemento, al -2 el penúltimo elemento, y así sucesivamente.\nPor ejemplo, si a = [2,3,5,7,11,13,17,19], entonces a[0] es el valor 2 y a[1] es el valor 3, mientras que a[-1] es el valor 19 y a[-2] es el valor 17.\nAdemas, en python existe la \"slicing notation\":\n* a[start:end] : items desde índice start hasta end-1\n* a[start:] : items desde índice start hasta el final del array\n* a[:end] : items desde el inicio hasta el índice end-1\n* a[:] : todos los items del array (una copia nueva)\n* a[start:end:step] : items desde start hasta pasar end (sin incluir) con paso step",
"x = np.arange(9) # \"Vector\" con valores del 0 al 8 \nprint \"x = \", x\nprint \"x[:] = \", x[:]\nprint \"x[5:] = \", x[5:]\nprint \"x[:8] = \", x[:8]\nprint \"x[:-1] = \", x[:-1]\nprint \"x[1:-1] = \", x[1:-1]\nprint \"x[1:-1:2] = \", x[1:-1:2]\n\nA = x.reshape(3,3) # Arreglo con valores del 0 al 8, en 3 filas y 3 columnas.\nprint \"\\n\"\nprint \"A = \\n\", A\nprint \"primera fila de A\\n\", A[0,:]\nprint \"ultima columna de A\\n\", A[:,-1]\nprint \"submatriz de A\\n\", A[:2,:2]",
"Observación\n\nCabe destacar que al tomar slices (subsecciones) de un arreglo obtenemos siempre un arreglo con menores dimensiones que el original.\nEsta notación es extremadamente conveniente, puesto que nos permite manipular el array sin necesitar conocer el tamaño del array y escribir de manera compacta las fórmulas numéricas.\n\nPor ejemplo, implementar una derivada numérica es tan simple como sigue.",
"def f(x):\n return 1 + x**2\n\nx = np.array([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]) # O utilizar np.linspace!\ny = f(x) # Tan facil como llamar f sobre x\ndydx = ( y[1:] - y[:-1] ) / ( x[1:] - x[:-1] )\nx_aux = 0.5*(x[1:] + x[:-1])\n# To plot\nfig = plt.figure(figsize=(12,8))\nplt.plot(x, y, '-s', label=\"f\")\nplt.plot(x_aux, dydx, '-s', label=\"df/dx\")\nplt.legend(loc=\"upper left\")\nplt.show()",
"Desafío 2: Derivación numérica\nImplemente el cálculo de la segunda derivada, que puede obtenerse por diferencias finitas centradas mediante\n$$ \\frac{d f(x_i)}{dx} = \\frac{1}{\\Delta x^2} \\Big( f(x_{i+1}) -2 f(x_{i}) + f(x_{i-1}) \\Big)$$",
"def g(x):\n return 1 + x**2 + np.sin(x)\n\nx = np.linspace(0,1,10)\ny = g(x) \nd2ydx2 = 0 * x # FIX ME\nx_aux = 0*d2ydx2 # FIX ME\n# To plot\nfig = plt.figure(figsize=(12,8))\nplt.plot(x, y, label=\"f\")\nplt.plot(x_aux, d2ydx2, label=\"d2f/dx2\")\nplt.legend(loc=\"upper left\")\nplt.show()",
"2. Librería Numpy\n2.2 Funciones Básicas\nAlgunas funciones básicas que es conveniente conocer son las siguientes:\n* shape: Entrega las dimensiones del arreglo. Siempre es una tupla.\n* len: Entrega el número de elementos de la primera dimensión del arreglo. Siempre es un entero.\n* ones: Crea un arreglo con las dimensiones provistas e inicializado con valores 1. Por defecto array 1D.\n* zeros: Crea un arreglo con las dimensiones provistas e inicializado con valores 1. Por defecto array 1D.\n* eye: Crea un arreglo con las dimensiones provistas e inicializado con 1 en la diagonal. Por defecto array 2D.",
"# arrays 1d\nA = np.ones(3)\nprint \"A = \\n\", A\nprint \"A.shape =\", A.shape\nprint \"len(A) =\", len(A)\nB = np.zeros(3)\nprint \"B = \\n\", B\nprint \"B.shape =\", B.shape\nprint \"len(B) =\", len(B)\nC = np.eye(1,3)\nprint \"C = \\n\", C\nprint \"C.shape =\", C.shape\nprint \"len(C) =\", len(C)\n# Si queremos forzar la misma forma que A y B\nC = np.eye(1,3).flatten() # o np.eye(1,3)[0,:] \nprint \"C = \\n\", C\nprint \"C.shape =\", C.shape\nprint \"len(C) =\", len(C)\n\n# square arrays\nA = np.ones((3,3))\nprint \"A = \\n\", A\nprint \"A.shape =\", A.shape\nprint \"len(A) =\", len(A)\nB = np.zeros((3,3))\nprint \"B = \\n\", B\nprint \"B.shape =\", B.shape\nprint \"len(B) =\", len(B)\nC = np.eye(3) # Or np.eye(3,3)\nprint \"C = \\n\", C\nprint \"C.shape =\", C.shape\nprint \"len(C) =\", len(C)\n\n# fat 2d array\nA = np.ones((2,5))\nprint \"A = \\n\", A\nprint \"A.shape =\", A.shape\nprint \"len(A) =\", len(A)\nB = np.zeros((2,5))\nprint \"B = \\n\", B\nprint \"B.shape =\", B.shape\nprint \"len(B) =\", len(B)\nC = np.eye(2,5)\nprint \"C = \\n\", C\nprint \"C.shape =\", C.shape\nprint \"len(C) =\", len(C)",
"2. Librería Numpy\n2.2 Funciones Básicas\nAlgunas funciones básicas que es conveniente conocer son las siguientes:\n* reshape: Convierte arreglo a nueva forma. Numero de elementos debe ser el mismo.\n* linspace: Regresa un arreglo con valores linealmente espaciados.\n* diag(x): Si x es 1D, regresa array 2D con valores en diagonal. Si x es 2D, regresa valores en la diagonal.\n* sum: Suma los valores del arreglo. Puede hacerse en general o a lo largo de un axis.\n* mean: Calcula el promedio de los valores del arreglo. Puede hacerse en general o a lo largo de un axis.\n* std: Calcula la desviación estándar de los valores del arreglo. Puede hacerse en general o a lo largo de un axis.",
"x = np.linspace(0., 1., 6)\nA = x.reshape(3,2)\nprint \"x = \\n\", x\nprint \"A = \\n\", A\n\nprint \"np.diag(x) = \\n\", np.diag(x)\nprint \"np.diag(B) = \\n\", np.diag(A)\n\nprint \"\"\nprint \"A.sum() = \", A.sum()\nprint \"A.sum(axis=0) = \", A.sum(axis=0)\nprint \"A.sum(axis=1) = \", A.sum(axis=1)\n\nprint \"\"\nprint \"A.mean() = \", A.mean()\nprint \"A.mean(axis=0) = \", A.mean(axis=0)\nprint \"A.mean(axis=1) = \", A.mean(axis=1)\n\nprint \"\"\nprint \"A.std() = \", A.std()\nprint \"A.std(axis=0) = \", A.std(axis=0)\nprint \"A.std(axis=1) = \", A.std(axis=1)",
"Desafío 3\nComplete el siguiente código:\n* Se le provee un array A cuadrado\n* Calcule un array B como la multiplicación element-wise de A por sí misma.\n* Calcule un array C como la multiplicación matricial de A y B.\n* Imprima la matriz C resultante.\n* Calcule la suma, promedio y desviación estándar de los valores en la diagonal de C.\n* Imprima los valores anteriormente calculados.",
"A = np.outer(np.arange(3),np.arange(3))\nprint A\n# FIX ME\n# FIX ME\n# FIX ME\n# FIX ME\n# FIX ME",
"Desafío 4\nImplemente la regla de integración trapezoidal",
"def mi_funcion(x):\n f = 1 + x + x**3 + x**5 + np.sin(x)\n return f\n\nN = 5\nx = np.linspace(-1,1,N)\ny = mi_funcion(x)\n# FIX ME\nI = 0 # FIX ME\n# FIX ME\nprint \"Area bajo la curva: %.3f\" %I\n\n# Ilustración gráfica\nx_aux = np.linspace(x.min(),x.max(),N**2)\nfig = plt.figure(figsize=(12,8))\nfig.gca().fill_between(x, 0, y, alpha=0.25)\nplt.plot(x_aux, mi_funcion(x_aux), 'k')\nplt.plot(x, y, 'r.-')\nplt.show()",
"2. Librería Numpy\n2.5 Inputs y Outputs\nNumpy permite leer datos en formato array con la función loadtxt. Existen variados argumentos opcionales, pero los mas importantes son:\n* skiprows: permite saltarse lineas en la lectura.\n* dtype: declarar que tipo de dato tendra el array resultante",
"# Ejemplo de lectura de datos\ndata = np.loadtxt(\"data/cherry.txt\")\nprint data.shape\nprint data\n\n# Ejemplo de lectura de datos, saltandose 11 lineas y truncando a enteros\ndata_int = np.loadtxt(\"data/cherry.txt\", skiprows=11).astype(int)\nprint data_int.shape\nprint data_int",
"2. Librería Numpy\n2.5 Inputs y Outputs\nNumpy permite guardar datos de manera sencilla con la función savetxt: siempre debemos dar el nombre del archivo y el array a guardar. \nExisten variados argumentos opcionales, pero los mas importantes son:\n* header: Línea a escribir como encabezado de los datos\n* fmt: Formato con el cual se guardan los datos (%d para enteros, %.5f para flotantes con 5 decimales, %.3E para notación científica con 3 decimales, etc.).",
"# Guardando el archivo con un header en español\nencabezado = \"Diametro Altura Volumen (Valores truncados a numeros enteros)\"\nnp.savetxt(\"data/cherry_int.txt\", data_int, fmt=\"%d\", header=encabezado)",
"Revisemos si el archivo quedó bien escrito. Cambiaremos de python a bash para utilizar los comandos del terminal:",
"%%bash \ncat data/cherry_int.txt",
"Desafío 5\n\nLea el archivo data/cherry.txt\nEscale la matriz para tener todas las unidades en metros o metros cubicos.\nGuarde la matriz en un nuevo archivo data/cherry_mks.txt, con un encabezado apropiado y 2 decimales de precisión para el flotante (pero no en notación científica).",
"# Leer datos\n#FIX_ME#\n\n# Convertir a mks\n#FIX_ME#\n\n# Guardar en nuevo archivo\n#FIX_ME#",
"2. Librería Numpy\n2.6 Selecciones de datos\nExisten 2 formas de seleccionar datos en un array A:\n* Utilizar máscaras de datos, que corresponden a un array con las mismas dimensiones del array A, pero de tipo booleano. Todos aquellos elementos True del array de la mascara serán seleccionados.\n* Utilizar un array con valores enteros. Los valores del array indican los valores que desean conservarse. \n2.6 Máscaras\nObserve que el array regresado siempre es unidimensional puesto que no es posible garantizar que se mantenga la dimensión original del array.",
"x = np.linspace(0,42,10)\nprint \"x = \", x\nprint \"x.shape = \", x.shape\n\nprint \"\\n\"\nmask_x_1 = x>10\nprint \"mask_x_1 = \", mask_x_1\nprint \"x[mask_x_1] = \", x[mask_x_1]\nprint \"x[mask_x_1].shape = \", x[mask_x_1].shape\n\nprint \"\\n\"\nmask_x_2 = x > x.mean()\nprint \"mask_x_2 = \", mask_x_2\nprint \"x[mask_x_2] = \", x[mask_x_2]\nprint \"x[mask_x_2].shape = \", x[mask_x_2].shape\n\nA = np.linspace(10,20,12).reshape(3,4)\nprint \"\\n\"\nprint \"A = \", A\nprint \"A.shape = \", A.shape\n\nprint \"\\n\"\nmask_A_1 = A>13\nprint \"mask_A_1 = \", mask_A_1\nprint \"A[mask_A_1] = \", A[mask_A_1]\nprint \"A[mask_A_1].shape = \", A[mask_A_1].shape\n\nprint \"\\n\"\nmask_A_2 = A > 0.5*(A.min()+A.max())\nprint \"mask_A_2 = \", mask_A_2\nprint \"A[mask_A_2] = \", A[mask_A_2]\nprint \"A[mask_A_2].shape = \", A[mask_A_2].shape\n\nT = np.linspace(-100,100,24).reshape(2,3,4)\nprint \"\\n\"\nprint \"T = \", T\nprint \"T.shape = \", T.shape\n\nprint \"\\n\"\nmask_T_1 = T>=0\nprint \"mask_T_1 = \", mask_T_1\nprint \"T[mask_T_1] = \", T[mask_T_1]\nprint \"T[mask_T_1].shape = \", T[mask_T_1].shape\n\nprint \"\\n\"\nmask_T_2 = 1 - T + 2*T**2 < 0.1*T**3\nprint \"mask_T_2 = \", mask_T_2\nprint \"T[mask_T_2] = \", T[mask_T_2]\nprint \"T[mask_T_2].shape = \", T[mask_T_2].shape",
"2.6 Índices\nObserve que es posible repetir índices, por lo que el array obtenido puede tener más elementos que el array original.\nEn un arreglo 2d, es necesario pasar 2 arrays, el primero para las filas y el segundo para las columnas.",
"x = np.linspace(10,20,11)\nprint \"x = \", x\nprint \"x.shape = \", x.shape\n\nprint \"\\n\"\nind_x_1 = np.array([1,2,3,5,7])\nprint \"ind_x_1 = \", ind_x_1\nprint \"x[ind_x_1] = \", x[ind_x_1]\nprint \"x[ind_x_1].shape = \", x[ind_x_1].shape\n\nprint \"\\n\"\nind_x_2 = np.array([0,0,1,2,3,4,5,6,7,-3,-2,-1,-1])\nprint \"ind_x_2 = \", ind_x_2\nprint \"x[ind_x_2] = \", x[ind_x_2]\nprint \"x[ind_x_2].shape = \", x[ind_x_2].shape\n\nA = np.linspace(-90,90,10).reshape(2,5)\nprint \"A = \", A\nprint \"A.shape = \", A.shape\n\nprint \"\\n\"\nind_row_A_1 = np.array([0,0,0,1,1])\nind_col_A_1 = np.array([0,2,4,1,3])\nprint \"ind_row_A_1 = \", ind_row_A_1\nprint \"ind_col_A_1 = \", ind_col_A_1\nprint \"A[ind_row_A_1,ind_col_A_1] = \", A[ind_row_A_1,ind_col_A_1]\nprint \"A[ind_row_A_1,ind_col_A_1].shape = \", A[ind_row_A_1,ind_col_A_1].shape\n\nprint \"\\n\"\nind_row_A_2 = 1\nind_col_A_2 = np.array([0,1,3])\nprint \"ind_row_A_2 = \", ind_row_A_2\nprint \"ind_col_A_2 = \", ind_col_A_2\nprint \"A[ind_row_A_2,ind_col_A_2] = \", A[ind_row_A_2,ind_col_A_2]\nprint \"A[ind_row_A_2,ind_col_A_2].shape = \", A[ind_row_A_2,ind_col_A_2].shape",
"Desafío 6\n<img src=\"images/generador_eolico.jpg\" alt=\"\" width=\"280px\" align=\"right\"/>\nLa potencia de un aerogenerador, para $k$ una constante relacionada con la geometría y la eficiencia, $\\rho$ la densidad del are, $r$ el radio del aerogenerador en metros y $v$ la velocidad el viento en metros por segundo, está dada por: \n$$ P = \\begin{cases} k \\ \\rho \\ r^2 \\ v^3, 3 \\leq v \\leq 25\\ 0,\\ eoc\\end{cases}$$\nTípicamente se considera una valor de $k=0.8$ y una densidad del aire de $\\rho = 1.2$ [$kg/m^3$].\nCalcule el número de aerogeneradores activos, la potencia promedio y la potencia total generada por los 11 generadores del parque Eólico Canela 1.\nLos valores de radio del aerogenerador (en metros) y la velocidad del viento (en kilometros por hora) se indican a continuación en arreglos en el código numérico.",
"import numpy as np\nk = 0.8\nrho = 1.2 # \nr_m = np.array([ 25., 25., 25., 25., 25., 25., 20., 20., 20., 20., 20.])\nv_kmh = np.array([10.4, 12.6, 9.7, 7.2, 12.3, 10.8, 12.9, 13.0, 8.6, 12.6, 11.2]) # En kilometros por hora\nP = 0 \nn_activos = 0\nP_mean = 0.0\nP_total = 0.0\nprint \"Existen %d aerogeneradores activos del total de %d\" %(n_activos, r.shape[0])\nprint \"La potencia promedio de los aeorgeneradores es {0:.2f} \".format(P_mean) \nprint \"La potencia promedio de los aeorgeneradores es \" + str(P_total) ",
"Tips\n\n\nLa práctica y la necesidad hace al maestro.\n\n\nPreguntar: en línea y en persona, pero tratar de solucionar los problemas antes.\n\n\nEnlaces útiles:\n\nhttp://www.labri.fr/perso/nrougier/teaching/numpy.100/ : Lista con 100 recetas prácticas.s\nhttp://sebastianraschka.com/Articles/2014_matlab_vs_numpy.html : Desde el lado oscuro a numpy.\nhttp://pages.physics.cornell.edu/~myers/teaching/ComputationalMethods/python/arrays.html : Otros consejos."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
melissawm/lpwithnotebooks | exemplo/IDEB.ipynb | gpl-3.0 | [
"Exemplo: Análise do IDEB\nNeste notebook, vamos analisar dados relativos ao IDEB calculado por município no Brasil. Os dados estão no arquivo",
"arquivo = \"IDEB por Município Rede Federal Séries Finais (5ª a 8ª).xml\"",
"obtido no site <a href=\"http://dados.gov.br\">dados.gov.br</a>\nComo nosso arquivo é um .xml, vamos usar o módulo xml.etree.ElementTree para parsear o conteúdo do arquivo. Vamos abreviar o nome desse módulo por ET.",
"import xml.etree.ElementTree as ET",
"O módulo ElementTree (ET)\nUm arquivo XML é um conjunto hierárquico de dados, e portanto a maneira mais natural de representar esses dados é através de uma árvore. Para isso, o módulo ET tem duas classes: a classe ElementTree representa o documento XML inteiro como uma árvore, e a classe Element representa um nó desta árvore. Todas as interações que ocorrem com o documento completo (por exemplo, leitura e escrita no arquivo) são feitas através da classe ElementTree; por outro lado, as interações com um elemento isolado do XML e seus subelementos são feitas através da classe Element.\nO método ET.parse retorna uma ElementTree.",
"tree = ET.parse(arquivo)",
"Para vermos o elemento raiz da árvore, usamos",
"root = tree.getroot()",
"O objeto root, que é um Element, tem as propriedades tag e attrib, que é um dicionário de seus atributos.",
"root.tag\n\nroot.attrib",
"Para acessarmos cada um dos nós do elemento raiz, iteramos nestes nós (que são, também, Elements):",
"for child in root:\n print(child.tag, child.attrib)",
"Selecionando os dados\nAgora que temos uma ideia melhor dos dados a serem tratados, vamos construir um DataFrame do pandas com o que nos interessa. Primeiramente, observamos que somente o último nó nos interessa, já que todos os outros compõem o cabeçalho do arquivo XML. Assim, vamos explorar o nó valores:",
"valoresIDEB = root.find('valores')",
"Observe que temos mais uma camada de dados:",
"valoresIDEB\n\nvaloresIDEB[0]",
"Assim, podemos por exemplo explorar os nós netos da árvore:",
"for child in valoresIDEB:\n for grandchild in child:\n print(grandchild.tag, grandchild.attrib)",
"Vamos transformar agora os dados em um DataFrame.",
"data = []\nfor child in valoresIDEB:\n data.append([float(child[0].text), child[1].text, child[2].text])\n\ndata",
"Como a biblioteca <a href=\"http://pandas.pydata.org/\">Pandas</a> está na moda ;) vamos utilizá-la para tratar e armazenar os dados. Mas vamos chamar a biblioteca pandas com um nome mais curto, pd.",
"import pandas as pd",
"Inicialmente, criamos um DataFrame, ou seja, uma tabela, com os dados que já temos.",
"tabelaInicial = pd.DataFrame(data, columns = [\"Valor\", \"Municipio\", \"Ano\"])\n\ntabelaInicial",
"Observe que nesta tabela temos dados de 2007 e 2009. Não vamos usar os dados relativos a 2007 por simplicidade.",
"tabelaInicial = tabelaInicial.loc[0:19]",
"Obtendo códigos IBGE para os municípios\nNa tabelaInicial, os municípios não estão identificados por nome, e sim pelo seu código IBGE. Para lermos o arquivo excel em que temos a tabela dos municípios brasileiros (atualizada em 2014) e seus respectivos códigos de 7 dígitos - os códigos incluem um dígito verificador ao final - usamos o módulo xlrd, que não estará instalado junto com o pandas por definição (você deve instalá-lo manualmente) se quiser executar o comando abaixo. Veja <a href=\"https://pypi.python.org/pypi/xlrd\">aqui</a>, por exemplo.",
"dadosMunicipioIBGE = pd.read_excel(\"DTB_2014_Municipio.xls\")",
"Podemos olhar o tipo de tabela que temos usando o método head do pandas.",
"dadosMunicipioIBGE.head()",
"Como não são todos os dados que nos interessam, vamos selecionar apenas as colunas \"Nome_UF\" (pois pode ser interessante referenciarmos o estado da federação mais tarde), \"Cod Municipio Completo\" e \"Nome_Município\".",
"dadosMunicipioIBGE = dadosMunicipioIBGE[[\"Nome_UF\", \"Cod Municipio Completo\", \"Nome_Município\"]]",
"Em seguida, precisamos selecionar na tabela completa dadosMunicipioIBGE os dados dos municípios presentes na tabelaInicial contendo os valores calculados do IDEB. Para isso, vamos extrair dos dois DataFrames as colunas correspondentes aos codigos de município (lembrando que nos dadosMunicipioIBGE os códigos contém um dígito verificador que não será utilizado):",
"listaMunicipiosInicial = tabelaInicial[\"Municipio\"]\nlistaMunicipios = dadosMunicipioIBGE[\"Cod Municipio Completo\"].map(lambda x: str(x)[0:6])",
"Observe que usamos acima o método map para transformar os dados numéricos em string, e depois extrair o último dígito.\nAgora, ambos listaMunicípiosInicial e listaMunicipios são objetos Series do pandas. Para obtermos os índices dos municípios para os quais temos informação do IDEB, vamos primeiro identificar quais códigos não constam da listaMunicipiosInicial:",
"indicesMunicipios = listaMunicipios[~listaMunicipios.isin(listaMunicipiosInicial)]",
"E agora vamos extrair as linhas correspondentes na tabela dadosMunicipioIBGE.",
"new = dadosMunicipioIBGE.drop(indicesMunicipios.index).reset_index(drop=True)",
"Por fim, vamos criar uma nova tabela (DataFrame) juntando nome e valor do IDEB calculado na tabelaInicial.",
"dadosFinais = pd.concat([new, tabelaInicial], axis=1)",
"A tabela final é",
"dadosFinais",
"Para terminar: um gráfico\nPara usarmos gráficos em notebooks, devemos incluir no notebook o comando\n% matplotlib inline\nou \n% matplotlib notebook\nComo isso é geralmente feito usando a primeira célula do notebook, mas no nosso caso não gostaríamos de sacrificar a legibilidade do documento, usamos uma nbextension chamada init_cell para que este comando seja executado na inicialização do notebook (Detalhes)\nPrimeiramente, vamos importar a biblioteca matplotlib.",
"import matplotlib.pyplot as plt",
"Em seguida, vamos substituir os índices da tabela dadosFinais pelos nomes dos municípios listados, já que gostaríamos de fazer um gráfico do valor do IDEB por município.",
"dadosFinais.set_index([\"Nome_Município\"], inplace=True)",
"Finalmente, como nos interessa um gráfico do IDEB por município, só vamos utilizar os dados da coluna \"Valor\" na tabela dadosFinais (observe que o resultado desta operação é uma Series)",
"dadosFinais[\"Valor\"]",
"Estamos prontos para fazer nosso gráfico.",
"dadosFinais[\"Valor\"].plot(kind='barh')\nplt.title(\"IDEB por Município (Dados de 2009)\")",
"Comentários sobre a geração dos documentos e do script\nPara converter este notebook para um script Python, use o comando\nO arquivo removeextracode.tpl tem o seguinte conteúdo:\nCélula de Inicialização <a id='sobre_inicializacao'></a>\nAtravés da extensão \"init_cell\" do nbextensions, é possível alterar a ordem de inicialização das células do notebook. Se olharmos os metadados da célula abaixo, veremos que ela está marcada pra ser executada antes de todas as outras células, obtendo-se assim o resultado desejado (esta célula permite que os gráficos da matploblib sejam renderizados dentro do notebook).",
"%matplotlib inline"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ethen8181/machine-learning | networkx/max_influence/max_influence.ipynb | mit | [
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Submodular-Optimization-&-Influence-Maximization\" data-toc-modified-id=\"Submodular-Optimization-&-Influence-Maximization-1\"><span class=\"toc-item-num\">1 </span>Submodular Optimization & Influence Maximization</a></span><ul class=\"toc-item\"><li><span><a href=\"#Influence-Maximization-(IM)\" data-toc-modified-id=\"Influence-Maximization-(IM)-1.1\"><span class=\"toc-item-num\">1.1 </span>Influence Maximization (IM)</a></span></li><li><span><a href=\"#Getting-Started\" data-toc-modified-id=\"Getting-Started-1.2\"><span class=\"toc-item-num\">1.2 </span>Getting Started</a></span></li><li><span><a href=\"#Spread-Process---Independent-Cascade-(IC)\" data-toc-modified-id=\"Spread-Process---Independent-Cascade-(IC)-1.3\"><span class=\"toc-item-num\">1.3 </span>Spread Process - Independent Cascade (IC)</a></span></li><li><span><a href=\"#Greedy-Algorithm\" data-toc-modified-id=\"Greedy-Algorithm-1.4\"><span class=\"toc-item-num\">1.4 </span>Greedy Algorithm</a></span></li><li><span><a href=\"#Submodular-Optimization\" data-toc-modified-id=\"Submodular-Optimization-1.5\"><span class=\"toc-item-num\">1.5 </span>Submodular Optimization</a></span></li><li><span><a href=\"#Cost-Effective-Lazy-Forward-(CELF)-Algorithm\" data-toc-modified-id=\"Cost-Effective-Lazy-Forward-(CELF)-Algorithm-1.6\"><span class=\"toc-item-num\">1.6 </span>Cost Effective Lazy Forward (CELF) Algorithm</a></span></li><li><span><a href=\"#Larger-Network\" data-toc-modified-id=\"Larger-Network-1.7\"><span class=\"toc-item-num\">1.7 </span>Larger Network</a></span></li><li><span><a href=\"#Conclusion\" data-toc-modified-id=\"Conclusion-1.8\"><span class=\"toc-item-num\">1.8 </span>Conclusion</a></span></li></ul></li><li><span><a href=\"#Reference\" data-toc-modified-id=\"Reference-2\"><span class=\"toc-item-num\">2 </span>Reference</a></span></li></ul></div>",
"# code for loading the format for the notebook\nimport os\n\n# path : store the current path to convert back to it later\npath = os.getcwd()\nos.chdir(os.path.join('..', '..', 'notebook_format'))\n\nfrom formats import load_style\nload_style(plot_style=False)\n\nos.chdir(path)\n\n# 1. magic for inline plot\n# 2. magic to print version\n# 3. magic so that the notebook will reload external python modules\n# 4. magic to enable retina (high resolution) plots\n# https://gist.github.com/minrk/3301035\n%matplotlib inline\n%load_ext watermark\n%load_ext autoreload\n%autoreload 2\n%config InlineBackend.figure_format='retina'\n\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom igraph import Graph # pip install python-igraph\n\n%watermark -a 'Ethen' -d -t -v -p igraph,numpy,matplotlib",
"Submodular Optimization & Influence Maximization\nThe content and example in this documentation is build on top of the wonderful blog post at the following link. Blog: Influence Maximization in Python - Greedy vs CELF. \nInfluence Maximization (IM)\nInfluence Maximization (IM) is a field of network analysis with a lot of applications - from viral marketing to disease modeling and public health interventions. IM is the task of finding a small subset of nodes in a network such that the resulting \"influence\" propagating from that subset reaches the largest number of nodes in the network. \"Influence\" represents anything that can be passed across connected peers within a network, such as information, behavior, disease or product adoption. To make it even more concrete, IM can be used to answer the question:\n\nIf we can try to convince a subset of individuals to adopt a new product or innovation, and the goal is to trigger a large cascade of further adoptions, which set of individuals should we target?\n\nKempe et al. (2003) were the first to formalize IM as the following combinatorial optimization problem: Given a network with $n$ nodes and given a \"spreading\" or propagation process on that network, choose a \"seed set\" $S$ of size $k<n$ to maximize the number of nodes in the network that are ultimately influenced.\nSolving this problem turns out to be extremely computationally burdensome. For example, in a relatively small network of 1,000 nodes, there are ${n\\choose k} \\approx 8$ trillion different possible candidates of size $k=5$ seed sets, which is impossible to solve directly even on state-of-the-art high performance computing resources. Consequently, over the last 15 years, researchers has been actively trying to find approximate solutions to the problem that can be solved quickly. This notebook walks through:\n\nHow to implement two of the earliest and most fundamental approximation algorithms in Python - the Greedy and the CELF algorithms - and compares their performance.\nWe will also spend some time discussing the field of submodular optimization, as it turns out, the combinatorial optimization problem we described above is submodular.\n\nGetting Started\nWe begin by loading a few modules. There are many popular network modeling packages, but we'll use the igraph package. Don't worry if you're not acquainted with the library, we will explain the syntax, and if you like, you can even swap it out with a different graph library that you prefer.\nWe'll first test these algorithms to see if they can produce the correct solution for a simple example for which we know the two nodes which are the most influential. Below we create a 10-node/20-edge directed igraph network object. This artificially created network is designed to ensure that nodes 0 and 1 are the most influential. We do this by creating 8 links outgoing from each of these nodes compared to only 1 outgoing links for the other 8 nodes. We also ensure nodes 0 and 1 are not neighbors so that having one in the seed set does not make the other redundant.",
"source = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 2, 3, 4, 5]\ntarget = [2, 3, 4, 5, 6, 7, 8, 9, 2, 3, 4, 5, 6, 7, 8, 9, 6, 7, 8, 9]\n\n# create a directed graph\ngraph = Graph(directed=True)\n\n# add the nodes/vertices (the two are used interchangeably) and edges\n# 1. the .add_vertices method adds the number of vertices\n# to the graph and igraph uses integer vertex id starting from zero\n# 2. to add edges, we call the .add_edges method, where edges\n# are specified by a tuple of integers. \ngraph.add_vertices(10)\ngraph.add_edges(zip(source, target))\nprint('vertices count:', graph.vcount())\nprint('edges count:', graph.ecount())\n\n# a graph api should allow us to retrieve the neighbors of a node\nprint('neighbors: ', graph.neighbors(2, mode='out'))\n\n# or create an adjacency list of the graph,\n# as we can see node 0 and 1 are the most influential\n# as the two nodes are connected to a lot of other nodes\ngraph.get_adjlist()",
"Spread Process - Independent Cascade (IC)\nIM algorithms solve the optimization problem for a given spread or propagation process. We therefore first need to specify a function that simulates the spread from a given seed set across the network. We'll simulate the influence spread using the popular Independent Cascade (IC) model, although there are many others we could have chosen.\nIndependent Cascade starts by having an initial set of seed nodes, $A_0$, that start the diffusion process, and the process unfolds in discrete steps according to the following randomized rule: \nWhen node $v$ first becomes active in step $t$, it is given a single chance to activate each currently inactive\nneighbor $w$; this process succeeds with a probability $p_{v,w}$, a parameter of the system — independently of the history thus far. If $v$ succeeds, then $w$ will become active in step $t + 1$; but whether or not $v$ succeeds in this current step $t$, it cannot make any further attempts to activate $w$ in subsequent rounds. This process runs until no more activations are possible. Here, we assume that the nodes are progressive, meaning the node will only go from inactive to active, but not the other way around.",
"def compute_independent_cascade(graph, seed_nodes, prob, n_iters=1000):\n total_spead = 0\n\n # simulate the spread process over multiple runs\n for i in range(n_iters):\n np.random.seed(i)\n active = seed_nodes[:]\n new_active = seed_nodes[:]\n \n # for each newly activated nodes, find its neighbors that becomes activated\n while new_active:\n activated_nodes = []\n for node in new_active:\n neighbors = graph.neighbors(node, mode='out')\n success = np.random.uniform(0, 1, len(neighbors)) < prob\n activated_nodes += list(np.extract(success, neighbors))\n\n # ensure the newly activated nodes doesn't already exist\n # in the final list of activated nodes before adding them\n # to the final list\n new_active = list(set(activated_nodes) - set(active))\n active += new_active\n\n total_spead += len(active)\n\n return total_spead / n_iters\n\n\n# assuming we start with 1 seed node\nseed_nodes = [0]\ncompute_independent_cascade(graph, seed_nodes, prob=0.2)",
"We calculate the expected spread of a given seed set by taking the average over a large number of Monte Carlo simulations. The outer loop in the function iterates over each of these simulations and calculates the spread for each iteration, at the end, the mean of each iteration will be our unbiased estimation for the expected spread of the seed nodes we've provided. The actual number of simulation required is up to debate, through experiment I found 1,000 to work well enough, whereas 100 was too low. On the other hand, the paper even set the simulation number up to 10,000.\nWithin each Monte Carlo iteration, we simulate the spread of influence throughout the network over time, where a different \"time period\" occurs within each of the while loop iterations, which checks whether any new nodes were activated in the previous time step. If no new nodes were activated (when new_active is an empty list and therefore evaluates to False) then the independent cascade process terminates, and the function moves onto the next simulation after recording the total spread for this simulation. The term total spread here refers to the number of nodes ultimately activated (some algorithms are framed in terms of the \"additional spread\" in which case we would subtract the size of the seed set so the code would be amended to len(active) - len(seed_nodes). \nGreedy Algorithm\nWith our spread function in hand, we can now turn to the IM algorithms themselves. We begin with the Greedy algorithm. The method is referred to as greedy as it adds the node that currently provides the best spread to our solution set without considering if it is actually the optimal solution in the long run, to elaborate the process is:\n\nWe start with an empty seed set/nodes.\nFor all the nodes that are not in the seed set/nodes, we find the node with the largest spread and adds it to the seed\nWe repeat step 2 until $k$ seed nodes are found.\n\nThis algorithm only needs to calculate the spread of $\\sum_{i=0}^k (n-i)\\approx kn$ nodes, which is just 5,000 in the case of our 1,000 node and $k=5$ network (a lot less that 8 trillion!). Of course, this computational improvement comes at the cost of the resulting seed set only being an approximate solution to the IM problem because it only considers the incremental spread of the $k$ nodes individually rather than combined. Fortunately, this seemingly naive greedy algorithm is theoretically guaranteed to choose a seed set whose spread will be at least 63% of the spread of the optimal seed set. The proof of the guarantee relies heavily on the \"submodular\" property of spread functions, which will be explained in more detail in later section.\nThe following greedy() function implements the algorithm. It produces the optimal set of k seed nodes for the graph graph. Apart from returning the optimal seed set, it also records average spread of that seed set along with a list showing the cumulative time taken to complete each iteration, we will use these information to compare with a different algorithm, CELF, in later section.",
"def greedy(graph, k, prob=0.2, n_iters=1000):\n \"\"\"\n Find k nodes with the largest spread (determined by IC) from a igraph graph\n using the Greedy Algorithm.\n \"\"\"\n\n # we will be storing elapsed time and spreads along the way, in a setting where\n # we only care about the final solution, we don't need to record these\n # additional information\n elapsed = []\n spreads = []\n solution = []\n start_time = time.time()\n\n for _ in range(k):\n best_node = -1\n best_spread = -np.inf\n\n # loop over nodes that are not yet in our final solution\n # to find biggest marginal gain\n nodes = set(range(graph.vcount())) - set(solution)\n for node in nodes:\n spread = compute_independent_cascade(graph, solution + [node], prob, n_iters)\n if spread > best_spread:\n best_spread = spread\n best_node = node\n\n solution.append(best_node)\n spreads.append(best_spread)\n\n elapse = round(time.time() - start_time, 3)\n elapsed.append(elapse)\n\n return solution, spreads, elapsed\n\n# the result tells us greedy algorithm was able to find the two most influential\n# node, node 0 and node 1\nk = 2\nprob = 0.2\nn_iters = 1000\ngreedy_solution, greedy_spreads, greedy_elapsed = greedy(graph, k, prob, n_iters)\nprint('solution: ', greedy_solution)\nprint('spreads: ', greedy_spreads)\nprint('elapsed: ', greedy_elapsed)",
"Submodular Optimization\nNow that we have a brief understanding of the IM problem and taken a first stab at solving this problem, let's take a step back and formally discuss submodular optimization. A function $f$ is said to be submodular if it satisfies the diminishing return property. More formally, if we were given a ground set $V$, a function $f:2^V \\rightarrow \\mathbb{R}$ (the function's space is 2 power $V$, as the function can either contain or not contain each element in the set $V$). The submodular property is defined as:\n\\begin{align}\nf(A \\cup {i}) - f(A) \\geq f(B \\cup {i}) - f(B)\n\\end{align}\nFor any $A \\subseteq B \\subseteq V$ and $i \\in V \\setminus B$. Hence by adding any element $i$ to $A$, which is a subset of $B$ yields as least as much value (or more) if we were to add $i$ to $B$. In other words, the marginal gain of adding $i$ to $A$ should be greater or equal to the marginal gain of adding $i$ to $B$ if $A$ is a subset of $B$.\nThe next property is known as monotone. We say that a submodular function is monotone if for any $A \\subseteq B\n\\subseteq V$, we have $f(A) \\leq f(B)$. This means that adding more elements to a set cannot decrease its value.\nFor example: Let $f(X)=max(X)$. We have the set $X= {1,2,3,4,5}$, and we choose $A={1,2}$ and $B={1,2,5}$. Given those information, we can see $f(A)=2$ and $f(B)=5$ and the marginal gain of items 3,4 is :\n\\begin{align}\nf(3 \\, | \\, A) = 1 \\ \\nonumber\nf(4 \\, | \\, B) = 0 \\ \\nonumber\nf(3 \\, | \\, A) = 2 \\ \\nonumber\nf(4 \\, | \\, B) = 0\n\\end{align}\nHere we use the shorthand $f(i \\, | \\, A)$, to denote $f(A \\cup {i}) - f(A)$.\nNote that $f(i \\, | \\, A) \\ge f(i \\, | \\, B)$ for any choice of $i$, $A$ and $B$. This is because $f$ is submodular and monotone. To recap, submodular functions has the diminishing return property saying adding an element to a larger set results in smaller marginal increase in the value of $f$ (compared to adding the element to a smaller set). And monotone ensures that adding additional element to the solution set does not decrease the function's value.\nSince the functions we're dealing with functions that are monotone, the set with maximum value is always including everything from the ground set $V$. But what we're actually interested in is when we impose a cardinality constraint - that is, finding the set of size at most k that maximizes the utility. Formally:\n\\begin{align}\nA^* = \\underset{A: |A| \\leq k}{\\text{argmax}} \\,\\, f(A)\n\\end{align}\nFor instance, in our IM problem, we are interested in finding the subset $k$ nodes that generates the largest influence. The greedy algorithm we showed above is one approach of solving this combinatorial problem.\n\nGiven a ground set $V$, if we're interested in populating a solution set of size $k$.\nThe algorithm starts with the empty set $A_0$\nThen repeats the following step for $i = 0, ... , (k-1)$:\n\n\\begin{align}\nA_{i+1} = A_{i} \\cup { \\underset{v \\in V \\setminus A_i}{\\text{argmax}} \\,\\, f(A_i \\cup {v}) }\n\\end{align}\nFrom a theoretical standpoint, this procedure guarantees a solution that has a score of 0.63 of the optimal set.",
"# if we check the solutions from the greedy algorithm we've\n# implemented above, we can see that our solution is in fact\n# submodular, as the spread we get is in diminshing order\nnp.diff(np.hstack([np.array([0]), greedy_spreads]))",
"Cost Effective Lazy Forward (CELF) Algorithm\nCELF Algorithm was developed by Leskovec et al. (2007). In other places, this is referred to as the Lazy Greedy Algorithm. Although the Greedy algorithm is much quicker than solving the full problem, it is still very slow when used on realistically sized networks. CELF was one of the first significant subsequent improvements.\nCELF exploits the sub-modularity property of the spread function, which implies that the marginal spread of a given node in one iteration of the Greedy algorithm cannot be any larger than its marginal spread in the previous iteration. This helps us to choose the nodes for which we evaluate the spread function in a more sophisticated manner, rather than simply evaluating the spread for all nodes. More specifically, in the first round, we calculate the spread for all nodes (like Greedy) and store them in a list/heap, which is then sorted. Naturally, the top node is added to the seed set in the first iteration, and then removed from the list/heap. In the next iteration, only the spread for the top node is calculated. If, after resorting, that node remains at the top of the list/heap, then it must have the highest marginal gain of all nodes. Why? Because we know that if we calculated the marginal gain for all other nodes, they'd be lower than the value currently in the list (due to submodularity) and therefore the \"top node\" would remain on top. This process continues, finding the node that remains on top after calculating its marginal spread, and then adding it to the seed set. By avoiding calculating the spread for many nodes, CELF turns out to be much faster than Greedy, which we'll show below.\nThe celf() function below that implements the algorithm, is split into two components. The first component, like the Greedy algorithm, iterates over each node in the graph and selects the node with the highest spread into the seed set. However, it also stores the spreads of each node for use in the second component.\nThe second component iterates to find the remaining $k-1$ seed nodes. Within each iteration, the algorithm evaluates the marginal spread of the top node. If, after resorting, the top node stays in place then that node is selected as the next seed node. If not, then the marginal spread of the new top node is evaluated and so on.\nLike greedy(), the function returns the optimal seed set, the resulting spread and the time taken to compute each iteration. In addition, it also returns the list lookups, which keeps track of how many spread calculations were performed at each iteration. We didn't bother doing this for greedy() because we know the number of spread calculations in iteration $i$ is $N-i-1$.",
"import heapq\n\n\ndef celf(graph, k, prob, n_iters=1000):\n \"\"\"\n Find k nodes with the largest spread (determined by IC) from a igraph graph\n using the Cost Effective Lazy Forward Algorithm, a.k.a Lazy Greedy Algorithm.\n \"\"\"\n start_time = time.time()\n\n # find the first node with greedy algorithm:\n # python's heap is a min-heap, thus\n # we negate the spread to get the node\n # with the maximum spread when popping from the heap\n gains = []\n for node in range(graph.vcount()):\n spread = compute_independent_cascade(graph, [node], prob, n_iters)\n heapq.heappush(gains, (-spread, node))\n\n # we pop the heap to get the node with the best spread,\n # when storing the spread to negate it again to store the actual spread\n spread, node = heapq.heappop(gains)\n solution = [node]\n spread = -spread\n spreads = [spread]\n\n # record the number of times the spread is computed\n lookups = [graph.vcount()]\n elapsed = [round(time.time() - start_time, 3)]\n\n for _ in range(k - 1):\n node_lookup = 0\n matched = False\n\n while not matched:\n node_lookup += 1\n\n # here we need to compute the marginal gain of adding the current node\n # to the solution, instead of just the gain, i.e. we need to subtract\n # the spread without adding the current node\n _, current_node = heapq.heappop(gains)\n spread_gain = compute_independent_cascade(\n graph, solution + [current_node], prob, n_iters) - spread\n\n # check if the previous top node stayed on the top after pushing\n # the marginal gain to the heap\n heapq.heappush(gains, (-spread_gain, current_node))\n matched = gains[0][1] == current_node\n\n # spread stores the cumulative spread\n spread_gain, node = heapq.heappop(gains)\n spread -= spread_gain\n solution.append(node)\n spreads.append(spread)\n lookups.append(node_lookup)\n\n elapse = round(time.time() - start_time, 3)\n elapsed.append(elapse)\n\n return solution, spreads, elapsed, lookups\n\nk = 2\nprob = 0.2\nn_iters = 1000\n\ncelf_solution, celf_spreads, celf_elapsed, celf_lookups = celf(graph, k, prob, n_iters)\nprint('solution: ', celf_solution)\nprint('spreads: ', celf_spreads)\nprint('elapsed: ', celf_elapsed)\nprint('lookups: ', celf_lookups)",
"Larger Network\nNow that we know both algorithms at least work correctly for a simple network for which we know the answer, we move on to a more generic graph to compare the performance and efficiency of each method. Any igraph network object will work, but for the purposes of this post we will use a random Erdos-Renyi graph with 100 nodes and 300 edges. The exact type of graph doesn't matter as the main points hold for any graph. Rather than explicitly defining the nodes and edges like we did above, here we make use of the .Erdos_Renyi() method to automatically create the graph.",
"np.random.seed(1234)\ngraph = Graph.Erdos_Renyi(n=100, m=300, directed=True)",
"Given the graph, we again compare both optimizers with the same parameter. Again for the n_iters parameter, it is not uncommon to see it set to a much higher number in literatures, such as 10,000 to get a more accurate estimate of spread, we chose a lower number here so we don't have to wait as long for the results",
"k = 10\nprob = 0.1\nn_iters = 1500\ncelf_solution, celf_spreads, celf_elapsed, celf_lookups = celf(graph, k, prob, n_iters)\ngreedy_solution, greedy_spreads, greedy_elapsed = greedy(graph, k, prob, n_iters)\n\n# print resulting solution\nprint('celf output: ' + str(celf_solution))\nprint('greedy output: ' + str(greedy_solution))",
"Thankfully, both optimization method yields the same solution set.\nIn the next few code chunk, we will use some of the information we've stored while performing the optimizing to perform a more thorough comparison. First, by plotting the resulting expected spread from both optimization method. We can see both methods yield the same expected spread.",
"# change default style figure and font size\nplt.rcParams['figure.figsize'] = 8, 6\nplt.rcParams['font.size'] = 12\n\nlw = 4\nfig = plt.figure(figsize=(9,6))\nax = fig.add_subplot(111)\nax.plot(range(1, len(greedy_spreads) + 1), greedy_spreads, label=\"Greedy\", color=\"#FBB4AE\", lw=lw)\nax.plot(range(1, len(celf_spreads) + 1), celf_spreads, label=\"CELF\", color=\"#B3CDE3\", lw=lw)\nax.legend(loc=2)\nplt.ylabel('Expected Spread')\nplt.title('Expected Spread')\nplt.xlabel('Size of Seed Set')\nplt.tick_params(bottom=False, left=False)\nplt.show()",
"We now compare the speed of each algorithm. The plot below shows that the computation time of Greedy is larger than CELF for all seed set sizes greater than 1 and the difference in computational times grows exponentially with the size of the seed set. This is because Greedy must compute the spread of $N-i-1$ nodes in iteration $i$ whereas CELF generally performs far fewer spread computations after the first iteration.",
"lw = 4\nfig = plt.figure(figsize=(9,6))\nax = fig.add_subplot(111)\nax.plot(range(1, len(greedy_elapsed) + 1), greedy_elapsed, label=\"Greedy\", color=\"#FBB4AE\", lw=lw)\nax.plot(range(1, len(celf_elapsed) + 1), celf_elapsed, label=\"CELF\", color=\"#B3CDE3\", lw=lw)\nax.legend(loc=2)\nplt.ylabel('Computation Time (Seconds)')\nplt.xlabel('Size of Seed Set')\nplt.title('Computation Time')\nplt.tick_params(bottom=False, left=False)\nplt.show()",
"We can get some further insight into the superior computational efficiency of CELF by observing how many \"node lookups\" it had to perform during each of the 10 rounds. The list that records this information shows that the first round iterated over all 100 nodes of the network. This is identical to Greedy which is why the graph above shows that the running time is equivalent for $k=1$. However, for subsequent iterations, there are far fewer spread computations because the marginal spread of a node in a previous iteration is a good indicator for its marginal spread in a future iteration. Note the relationship between the values below and the corresponding computation time presented in the graph above. There is a visible jump in the blue line for higher values of the \"node lookups\". This again solidifies the fact that while CELF produces identical solution set as Greedy, it usually has enormous speedups over the standard Greedy procedure.",
"celf_lookups",
"Conclusion\nWe implemented both the Greedy and CELF algorithms and showed the following:\n\nBoth correctly identify the influential nodes in simple examples\nBoth result in the same seed set for a larger example.\nThe CELF algorithm runs a lot faster for any seed set $k>1$. The speed arises from the fact that after the first round, CELF performs far fewer spread computations than Greedy.\nDuring the Greedy Algorithm section, we mentioned briefly that a natural greedy strategy obtains a solution that is provably within 63% of optimal. We didn't formally proved this statement here, but there are several good notes online that goes more in-depth into the proof behind this. Notes: N. Buchbinder, M.Feldman - Submodular Functions Maximization Problems (2017)\n\nReference\n\nBlog: Influence Maximization in Python - Greedy vs CELF\nBlog: The greedy algorithm for monotone submodular maximization\nPaper: D. Kempe, J. Kleinberg, E. Tardos - Maximizing the Spread of Influence through a Social\nNetwork (2003)\nNotes: N. Buchbinder, M.Feldman - Submodular Functions Maximization Problems (2017)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jwjohnson314/data-803 | notebooks/Regularization and Model Tuning.ipynb | mit | [
"Regularization\nRegularization is the name for a technique developed at different times and in different ways in statistics and machine learning for improving the predictive quality of a model. The idea is to make a model simpler than it might otherwise be by either making the coefficients small, making the coefficients zero, or perhaps some combination of both at the same time. Regularization is implemented by default in sklearn's linear models.",
"from IPython.core.pylabtools import figsize\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.style.use('bmh')\n\nfrom sklearn.datasets import make_classification\nfrom sklearn.cross_validation import train_test_split, cross_val_score\nfrom sklearn.grid_search import GridSearchCV\nfrom sklearn.linear_model import LogisticRegression, LogisticRegressionCV\nfrom sklearn.metrics import accuracy_score, classification_report, confusion_matrix\nfrom sklearn.preprocessing import StandardScaler",
"Let's generate some sample data. 100000 observations, 50 features, only 5 of which matter, 7 of which are redundant, split among 2 classes for classification.",
"X, y = make_classification(n_samples=100000, \n n_features=50, \n n_informative=5, \n n_redundant=7, \n n_classes=2,\n random_state=2)",
"When building linear models, it's a good idea to standarize all of your predictors (mean at zero, variance 1).",
"figsize(12, 6)\nplt.scatter(range(50), np.mean(X, axis=0));\n\nscaler = StandardScaler()\nX = scaler.fit_transform(X)\n\nfigsize(12, 6)\nplt.scatter(range(50), np.mean(X, axis=0));\n\n# nice normal looking predictors\nfigsize(18, 8)\nax = plt.subplot(441)\nplt.hist(X[:, 0]);\n \nax = plt.subplot(442)\nplt.hist(X[:, 1]);\n\nax = plt.subplot(443)\nplt.hist(X[:, 2]);\n\nax = plt.subplot(444)\nplt.hist(X[:, 3]);\n\n# multicollinearity\ncorrelations = np.corrcoef(X, rowvar=0)\n\ncorrpairs = {}\nfor i in range(50):\n for j in range(i+1, 50, 1):\n if correlations[i, j] > 0.25:\n print(i, j, correlations[i,j])\n corrpairs[(i,j)] = correlations[i,j]\n\n# plot is slow - 1 min or more\nfigsize(12, 18)\n\nplt.subplot(311)\nplt.scatter(X[:, 16], X[:, 37])\n\nplt.subplot(312)\nplt.scatter(X[:, 16], X[:, 18])\n\nplt.subplot(313)\nplt.scatter(X[:, 26], X[:, 43]);",
"Let's perform a train-test split for cross-validation.",
"Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)",
"Next let's build a model using the default parameters and look at several different measures of performance.",
"default_model = LogisticRegression(random_state=0).fit(Xtr,ytr) # instantiate and fit\npred = default_model.predict(Xte) # make predictions\nprint('Accuracy: %s\\n' % default_model.score(Xte, yte)) \nprint(classification_report(yte, pred)) \nprint('Confusion Matrix:\\n\\n %s\\n' % confusion_matrix(yte, pred)); ",
"In the sklearn implementation, this default model <i>is</i> a regularized model, using $\\mathcal{l}2$ regularization with $C = 1$. That is, the cost function to be minimized is $$-\\frac{1}{n}\\sum_{i=1}^n[y_i\\log(p_i) - (1-y_i)\\log(y_i - p_i)]+\\frac{1}{C}\\cdot\\sum_{j=1}^m w_j^2.$$ Here, $y_i$ is the $i^{th}$ response (target), $p_i$ is the predicted probability of that target, and $w_j$ are the coefficients of the linear model. In a traditional statistical implementation, the second sum wouldn't be there as it biases the model. This is the regularization.\nThere is no reason to believe that $C = 1$ is the ideal choice; it may be better to increase or decrease $C$. One way to search for better values by doing a grid search over a set of possible values for $C$, assessing the best choice using cross-validation.",
"cs = [10**(i+1) for i in range(2)] + [10**(-i) for i in range(5)] # create a list of C's\nprint(cs)\n\nlm = LogisticRegression(random_state=0) \n\ngrid = GridSearchCV(estimator=lm, \n param_grid=dict(C=cs), \n scoring='accuracy',\n verbose=1,\n cv=5, \n n_jobs=-1, # parallelize over all cores\n refit=True) # instatiate the grid search (note model input)\n\ngrid.fit(Xtr, ytr) # fit \nprint(\"Best score: %s\" % grid.best_score_)\nprint(\"Best choice of C: %s\" % grid.best_estimator_.C)\n\n# change the metric\ngrid_prec = GridSearchCV(estimator=lm, \n param_grid=dict(C=cs), \n scoring='precision',\n verbose=1,\n cv=5, \n n_jobs=-1, # parallelize over all cores\n refit=True) # instatiate the grid search (note model input)\n\ngrid_prec.fit(Xtr, ytr) # fit \nprint(\"Best score: %s\" % grid_prec.best_score_)\nprint(\"Best choice of C: %s\" % grid_prec.best_estimator_.C)\n\n# change the metric\ngrid_auc = GridSearchCV(estimator=lm, \n param_grid=dict(C=cs), \n scoring='roc_auc',\n verbose=1,\n cv=5, \n n_jobs=-1, # parallelize over all cores\n refit=True) # instatiate the grid search (note model input)\n\ngrid_auc.fit(Xtr, ytr) # fit \nprint(\"Best score: %s\" % grid_auc.best_score_)\nprint(\"Best choice of C: %s\" % grid_auc.best_estimator_.C)\n\ngrid_preds = grid.predict(Xte) \nprint('Accuracy: %s\\n' % accuracy_score(grid.predict(Xte), yte))\nprint(classification_report(yte, grid_preds))\nprint('Confusion Matrix:\\n\\n %s\\n' % confusion_matrix(yte, grid_preds));\n\ngrid.best_estimator_.coef_\n\nfigsize(12, 6)\nplt.scatter(range(grid.best_estimator_.coef_.shape[1]),\n grid.best_estimator_.coef_)\nplt.ylabel('value of coefficient')\nplt.xlabel('predictor variable (index)');",
"Another way to do this is with the 'LogisticRegressionCV' function. This is a logistic regression function built with tuning $C$ via cross-validation in mind. This time, we'll set the penalty to $\\mathcal{l}1$, we'll let python pick 10 possible $C$'s, we'll use all cores on my machine ('n_jobs=-1'), and we'll use the liblinear solver (which is the only one of the three possible choice which can optimize with the l1 penalty). The $\\mathcal{l}1$ penalty is $$-\\frac{1}{n}\\sum_{i=1}^n[y_i\\log(p_i) - (1-y_i)\\log(y_i - p_i)]+\\frac{1}{C}\\cdot\\sum_{j=1}^m |w_j|.$$ This will take a minute or two to run.",
"cvmodel = LogisticRegressionCV(penalty='l1', \n Cs=10, \n n_jobs=-1,\n verbose=1,\n scoring='accuracy',\n solver='liblinear') # liblinear only for l1 penalty\n\n# takes about a minute\ncv_fit = cvmodel.fit(Xtr,ytr)\n\ncvmodel.C_\n\ncvmodel.coef_ # now all very small, most effectively 0\n\nplt.scatter(range(cvmodel.coef_.shape[1]), cvmodel.coef_[0])\nplt.ylabel('value of coefficient')\nplt.xlabel('predictor variable (index)')\nplt.title('coefficients with l1 regularization');\n\ncv_preds = cvmodel.predict(Xte)\nprint(accuracy_score(cv_preds, yte))\n\ntuned_cv_scores = cross_val_score(cv_fit, X, y, scoring='accuracy',n_jobs=-1, verbose=2)\n\nprint(tuned_cv_scores)\nprint(np.mean(tuned_cv_scores))\n\ndefault_cv_scores = cross_val_score(default_model.fit(Xtr, ytr), X, y, scoring='accuracy',n_jobs=-1, verbose=2)\n\nprint(default_cv_scores)\nprint(np.mean(default_cv_scores))\n\nfig, ax = plt.subplots(1,2, sharey=True, figsize=(16, 6))\n\nax[0].scatter(range(grid.best_estimator_.coef_.shape[1]),\n grid.best_estimator_.coef_)\nax[0].set_ylabel('value of coefficient')\nax[0].set_xlabel('predictor variable (index)')\nax[0].set_title('Coefficients with l2 Penalty')\n\nax[1].scatter(range(cvmodel.coef_.shape[1]), cvmodel.coef_[0])\nax[1].set_ylabel('value of coefficient')\nax[1].set_xlabel('predictor variable (index)')\nax[1].set_title('Coefficients with l1 Penalty');\n\ntrivial = np.isclose(cvmodel.coef_, np.zeros(shape=cvmodel.coef_.shape)).flatten()\nnontrivial = []\nfor i in range(len(trivial)):\n if not trivial[i]:\n nontrivial.append(i)\n\nnontrivial\n\nfinal = LogisticRegression(C=cvmodel.C_[0], penalty='l1', solver='liblinear').fit(X[:, nontrivial], y)\n\n# thanks StackOverflow! \n# see http://stackoverflow.com/questions/36373266/change-in-running-behavior-of-sklearn-code-between-laptop-and-desktop/37259431 \nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\nfinal_cv_scores = cross_val_score(final, X[:, nontrivial], y, scoring='accuracy', n_jobs=-1)\n\nprint(final_cv_scores)\nprint(np.mean(final_cv_scores))\n\nalt = cross_val_score(LogisticRegressionCV(penalty='l1', solver='liblinear', verbose=2, n_jobs=-1), X[:, nontrivial], y, scoring='accuracy')\nprint(alt)\nprint(np.mean(alt))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
InsightLab/data-science-cookbook | 2019/02-python-bibliotecas-manipulacao-dados/pandas_basico.ipynb | mit | [
"Pandas\nImportando o Pandas e o NumPy",
"import pandas as pd\nimport numpy as np",
"Series\nUma Series é um objeto semelhante a uma vetor que possui um vetor de dados e um vetor de labels associadas chamado index.\nSua documentação completa se encontra em: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html#pandas.Series\nInstanciando uma Series",
"\"\"\" Apenas a partir dos valores \"\"\"\n\nobj = pd.Series([4, 7, -5, 3])\nobj\n\nobj.values\n\nobj.index\n\n\"\"\" A partir dos valores e dos índices \"\"\"\n\nobj2 = pd.Series([4, 7, -5, 3], index=['d','b','a','c'])\nobj2\n\nobj2.index\n\n\"\"\" A partir de um dictionary \"\"\"\n\nsdata = {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000}\nobj3 = pd.Series(sdata)\nobj3\n\n\"\"\" A partir de um dictionary e dos índices \"\"\"\n\nstates = ['California', 'Ohio', 'Oregon', 'Texas']\nobj4 = pd.Series(sdata, index=states)\nobj4",
"Acessando elementos de uma Series",
"obj2['a']\n\nobj2['d'] = 6\nobj2['d']\n\nobj2[['c','a','d']]\n\nobj2[obj2 > 0]",
"Algumas operações permitidas em uma Series",
"\"\"\" Multiplicação por um escalar \"\"\"\n\nobj2 * 2\n\n\"\"\" Operações de vetor do numpy \"\"\"\n\nimport numpy as np\n\nnp.exp(obj2)\n\n\"\"\" Funções que funcionam com dictionaries \"\"\"\n\n'b' in obj2\n\n'e' in obj2\n\n\"\"\" Funções para identificar dados faltando \"\"\"\n\nobj4.isnull()\n\nobj4.notnull()\n\n\"\"\" Operações aritméticas com alinhamento automático dos índices \"\"\"\n\nobj3 + obj4",
"DataFrame\nUm DataFrame representa uma estrutura de dados tabular, semelhante a uma planilha de excel, contendo um conjunto ordenado de colunas, podendo ser cada uma de tipos de valores diferente. Um DataFrame possui um índice de linhas e um de colunas e pode ser encarado como um dict de Series.\nSua documentação completa se encontra em: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html\nInstanciando um DataFrame",
"\"\"\" A partir de um dictionary de vetores \"\"\"\n\ndata = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'], \\\n 'year': [2000, 2001, 2002, 2001, 2002], \\\n 'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}\n\nframe = pd.DataFrame(data)\nframe\n\n\"\"\" A partir de um dictionary em uma ordem específica das colunas \"\"\"\n\npd.DataFrame(data, columns=['year', 'state', 'pop'])\n\n\"\"\" A partir de um dictionary e dos índices das colunas e/ou dos índices das linhas \"\"\"\n\nframe2 = pd.DataFrame(data, columns=['year', 'state', 'pop', 'debt'], index=['one', 'two', 'three', 'four', 'five'])\nframe2\n\n\"\"\" A partir de um dictionary de dictionaries aninhados \"\"\"\n\npop = {'Nevada': {2001: 2.4, 2002: 2.9}, 'Ohio': {2000: 1.5, 2001: 1.7, 2002: 3.6}}\n\nframe3 = pd.DataFrame(pop)\nframe3",
"Note que estas não são todas as formas possíveis de se fazê-lo. Para uma visão mais completa veja a seguinte tabela com as possíveis entradas para o construtor do DataFrame:\nType |Notes\n-----|-----\n2D ndarray | A matrix of data, passing optional row and column labels\ndict of arrays, lists, or tuples | Each sequence becomes a column in the DataFrame. All sequences must be the same length.\nNumPy structured/record array | Treated as the “dict of arrays” case\ndict of Series | Each value becomes a column. Indexes from each Series are unioned together to form the result’s row index if no explicit index is passed.\ndict of dicts | Each inner dict becomes a column. Keys are unioned to form the row index as in the “dict of Series” case.\nlist of dicts or Series | Each item becomes a row in the DataFrame. Union of dict keys or Series indexes become the DataFrame’s column labels\nList of lists or tuples | Treated as the “2D ndarray” case\nAnother DataFrame | The DataFrame’s indexes are used unless different ones are passed\nNumPy MaskedArray | Like the “2D ndarray” case except masked values become NA/missing in the DataFrame result\nManipulando linhas e colunas de um DataFrame",
"\"\"\" Acessando colunas como em uma Series ou dictionary \"\"\"\n\nframe2['state']\n\n\"\"\" Como colunas como um atributo \"\"\"\n\nframe2.year\n\n\"\"\" Acessando linhas com o nome da linha \"\"\"\n\nframe2.ix['three']\n\n\"\"\" Acessando linhas com o índice da linha \"\"\"\n\nframe2.ix[3]\n\n\"\"\" Modificando uma coluna com um valor \"\"\"\n\nframe2['debt'] = 16.5\nframe2\n\n\"\"\" Modificando uma coluna com um vetor \"\"\"\n\nframe2['debt'] = np.arange(5.)\nframe2\n\n\"\"\" Modificando uma coluna com uma Series \"\"\"\n\nval = pd.Series([-1.2, -1.5, -1.7], index=['two', 'four', 'five'])\n\nframe2['debt'] = val\nframe2\n\n\"\"\" Adicionando uma coluna que não existe \"\"\"\n\nframe2['eastern'] = frame2.state == 'Ohio'\nframe2\n\n\"\"\" Deletando uma coluna \"\"\"\n\ndel frame2['eastern']\nframe2.columns"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gaufung/Data_Analytics_Learning_Note | Data_Analytics_in_Action/pandasIO.ipynb | mit | [
"Pandas 数据读写\nAPI\n读取 | 写入 \n--- | ---\nread_csv | to_csv\nread_excel | to_excel\nread_hdf | to_hdf\nread_sql | to_sql\nread_json | to_json\nread_html | to_html\nread_stata | to_stata\nread_clipboard | to_clipboard\nread_pickle | to_pickle\nCVS 文件读写\ncsv 文件内容\nwhite,read,blue,green,animal\n1,5,2,3,cat\n2,7,8,5,dog\n3,3,6,7,horse\n2,2,8,3,duck\n4,4,2,1,mouse",
"import numpy as np\nimport pandas as pd\ncsvframe=pd.read_csv('myCSV_01.csv')\ncsvframe\n\n# 也可以通过read_table来读写数据\npd.read_table('myCSV_01.csv',sep=',')",
"读取没有head的数据\n1,5,2,3,cat\n2,7,8,5,dog\n3,3,6,7,horse\n2,2,8,3,duck\n4,4,2,1,mouse",
"pd.read_csv('myCSV_02.csv',header=None)",
"可以指定header",
"pd.read_csv('myCSV_02.csv',names=['white','red','blue','green','animal'])",
"创建一个具有等级结构的DataFrame对象,可以添加index_col选项,数据文件格式\ncolors,status,item1,item2,item3\nblack,up,3,4,6\nblack,down,2,6,7\nwhite,up,5,5,5\nwhite,down,3,3,2\nred,up,2,2,2\nred,down,1,1,4",
"pd.read_csv('myCSV_03.csv',index_col=['colors','status'])",
"Regexp 解析TXT文件\n使用正则表达式指定sep,来达到解析数据文件的目的。\n正则元素 | 功能\n--- | ---\n. | 换行符以外所有元素\n\\d | 数字\n\\D | 非数字\n\\s | 空白字符\n\\S | 非空白字符\n\\n | 换行符\n\\t | 制表符\n\\uxxxx | 使用十六进制表示ideaUnicode字符 \n数据文件随机以制表符和空格分隔\nwhite red blue green\n1 4 3 2\n2 4 6 7",
"pd.read_csv('myCSV_04.csv',sep='\\s+')",
"读取有字母分隔的数据\n000end123aaa122\n001end125aaa144",
"pd.read_csv('myCSV_05.csv',sep='\\D*',header=None,engine='python')",
"读取文本文件跳过一些不必要的行\n```\nlog file\nthis file has been generate by automatic system\nwhite,red,blue,green,animal\n12-feb-2015:counting of animals inside the house\n1,3,5,2,cat\n2,4,8,5,dog\n13-feb-2015:counting of animals inside the house\n3,3,6,7,horse\n2,2,8,3,duck\n```",
"pd.read_table('myCSV_06.csv',sep=',',skiprows=[0,1,3,6])",
"从TXT文件中读取部分数据\n只想读文件的一部分,可明确指定解析的行号,这时候用到nrows和skiprows选项,从指定的行开始和从起始行往后读多少行(norow=i)",
"pd.read_csv('myCSV_02.csv',skiprows=[2],nrows=3,header=None)",
"实例 :\n对于一列数据,每隔两行取一个累加起来,最后把和插入到列的Series对象中",
"out = pd.Series()\ni=0\npieces = pd.read_csv('myCSV_01.csv',chunksize=3)\nfor piece in pieces:\n print piece\n out.set_value(i,piece['white'].sum())\n i += 1\nout",
"写入文件\n\nto_csv(filenmae)\nto_csv(filename,index=False,header=False)\nto_csv(filename,na_rep='NaN')\n\nHTML文件读写\n写入HTML文件",
"frame = pd.DataFrame(np.arange(4).reshape((2,2)))\nprint frame.to_html()",
"创建复杂的DataFrame",
"frame = pd.DataFrame(np.random.random((4,4)),\n index=['white','black','red','blue'],\n columns=['up','down','left','right'])\nframe\n\ns = ['<HTML>']\ns.append('<HEAD><TITLE>MY DATAFRAME</TITLE></HEAD>')\ns.append('<BODY>')\ns.append(frame.to_html())\ns.append('</BODY></HTML>')\nhtml=''.join(s)\nwith open('myFrame.html','w') as html_file:\n html_file.write(html)\n",
"HTML读表格",
"web_frames = pd.read_html('myFrame.html')\nweb_frames[0]\n\n# 以网址作为参数\nranking = pd.read_html('http://www.meccanismocomplesso.org/en/meccanismo-complesso-sito-2/classifica-punteggio/')\nranking[0]",
"读写xml文件\n使用的第三方的库 lxml",
"from lxml import objectify\nxml = objectify.parse('books.xml')\nxml\n\nroot =xml.getroot()\n\nroot.Book.Author\n\nroot.Book.PublishDate\n\nroot.getchildren()\n\n[child.tag for child in root.Book.getchildren()]\n\n[child.text for child in root.Book.getchildren()]\n\ndef etree2df(root):\n column_names=[]\n for i in range(0,len(root.getchildren()[0].getchildren())):\n column_names.append(root.getchildren()[0].getchildren()[i].tag)\n xml_frame = pd.DataFrame(columns=column_names)\n for j in range(0,len(root.getchildren())):\n obj = root.getchildren()[j].getchildren()\n texts = []\n for k in range(0,len(column_names)):\n texts.append(obj[k].text)\n row = dict(zip(column_names,texts))\n row_s=pd.Series(row)\n row_s.name=j\n xml_frame = xml_frame.append(row_s)\n return xml_frame\netree2df(root)",
"读写Excel文件",
"pd.read_excel('data.xlsx')\n\npd.read_excel('data.xlsx','Sheet2')\n\nframe = pd.DataFrame(np.random.random((4,4)),\n index=['exp1','exp2','exp3','exp4'],\n columns=['Jan2015','Feb2015','Mar2015','Apr2015'])\nframe\n\nframe.to_excel('data2.xlsx')",
"JSON数据",
"frame = pd.DataFrame(np.arange(16).reshape((4,4)),\n index=['white','black','red','blue'],\n columns=['up','down','right','left'])\nframe.to_json('frame.json')\n\n# 读取json\npd.read_json('frame.json')",
"HDF5数据\nHDF文件(hierarchical data from)等级数据格式,用二进制文件存储数据。",
"from pandas.io.pytables import HDFStore\nstore = HDFStore('mydata.h5')\nstore['obj1']=frame\n\nstore['obj1']",
"pickle数据",
"frame.to_pickle('frame.pkl')\n\npd.read_pickle('frame.pkl')",
"数据库连接\n以sqlite3为例介绍",
"frame=pd.DataFrame(np.arange(20).reshape((4,5)),\n columns=['white','red','blue','black','green'])\nframe\n\nfrom sqlalchemy import create_engine\nenegine=create_engine('sqlite:///foo.db')\n\nframe.to_sql('colors',enegine)\n\npd.read_sql('colors',enegine)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jljones/portfolio | ds/Webscraping_Craigslist_multi.ipynb | apache-2.0 | [
"Webscraping Craigslist for Housing Listings in the East Bay\nJennifer Jones",
"%pylab inline\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport requests\nfrom bs4 import BeautifulSoup as bs4",
"Craigslist houses for sale\nLook on the Craigslist website, select relevant search criteria, and then take a look at the web address:\nHouses for sale in the East Bay:\nhttp://sfbay.craigslist.org/search/eby/rea?housing_type=6\nHouses for sale in selected neighborhoods in the East Bay:\nhttp://sfbay.craigslist.org/search/eby/rea?nh=46&nh=47&nh=48&nh=49&nh=112&nh=54&nh=55&nh=60&nh=62&nh=63&nh=66&housing_type=6 \nGeneral Procedure\n```python\nGet the data using the requests module\nurl = 'http://sfbay.craigslist.org/search/eby/rea?housing_type=6'\nresp = requests.get(url) \nBeautifulSoup can quickly parse the text, specify text is html\ntxt = bs4(resp.text, 'html.parser')\n```\nHouse entries\nLooked through output via print(txt.prettify()) to display the html in a more readable way, to note the structure of housing listings\nSaw housing entries contained in <p class=\"row\">\nhouses = txt.find_all('p', attrs={'class': 'row'})\nGet data from multiple pages on Craigslist\nFirst page:\nurl = 'http://sfbay.craigslist.org/search/eby/rea?housing_type=6'\nFor multiple pages, the pattern is:\nhttp://sfbay.craigslist.org/search/eby/rea?s=100&housing_type=6\nhttp://sfbay.craigslist.org/search/eby/rea?s=200&housing_type=6\netc.",
"# Get the data using the requests module \nnpgs = np.arange(0,10,1)\nnpg = 100\n\nbase_url = 'http://sfbay.craigslist.org/search/eby/rea?'\nurls = [base_url + 'housing_type=6']\n\nfor pg in range(len(npgs)):\n url = base_url + 's=' + str(npg) + '&housing_type=6'\n urls.append(url)\n npg += 100\n\nmore_reqs = []\nfor p in range(len(npgs)+1):\n more_req = requests.get(urls[p]) \n more_reqs.append(more_req)\n\nprint(urls)\n\n# USe BeautifulSoup to parse the text\nmore_txts = []\nfor p in range(len(npgs)+1):\n more_txt = bs4(more_reqs[p].text, 'html.parser')\n more_txts.append(more_txt)\n\n# Save the housing entries to a list\nmore_houses = [more_txts[h].findAll(attrs={'class': \"row\"}) for h in range(len(more_txts))] \nprint(len(more_houses))\nprint(len(more_houses[0]))\n\n# Make a list of housing entries from all of the pages of data\nnpg = len(more_houses)\n\nhouses_all = [] \nfor n in range(npg):\n houses_all.extend(more_houses[n])\nprint(len(houses_all))",
"Extract and clean data to put in a database",
"# Define 4 functions for the price, neighborhood, sq footage & # bedrooms, and time\n# that can deal with missing values (to prevent errors from showing up when running the code)\n\n# Prices\ndef find_prices(results):\n prices = []\n for rw in results:\n price = rw.find('span', {'class': 'price'})\n if price is not None:\n price = float(price.text.strip('$'))\n else:\n price = np.nan\n prices.append(price)\n return prices\n\n# Define a function for neighborhood in case a field is missing in 'class': 'pnr'\ndef find_neighborhood(results):\n neighborhoods = []\n for rw in results:\n split = rw.find('span', {'class': 'pnr'}).text.strip(' (').split(')')\n #split = rw.find(attrs={'class': 'pnr'}).text.strip(' (').split(')')\n if len(split) == 2:\n neighborhood = split[0]\n elif 'pic map' or 'pic' or 'map' in split[0]:\n neighborhood = np.nan\n neighborhoods.append(neighborhood)\n return neighborhoods\n\n# Make a function to deal with size in case #br or ft2 is missing\ndef find_size_and_brs(results):\n sqft = []\n bedrooms = []\n for rw in results:\n split = rw.find('span', attrs={'class': 'housing'})\n # If the field doesn't exist altogether in a housing entry\n if split is not None:\n #if rw.find('span', {'class': 'housing'}) is not None:\n # Removes leading and trailing spaces and dashes, splits br & ft\n #split = rw.find('span', attrs={'class': 'housing'}).text.strip('/- ').split(' - ')\n split = split.text.strip('/- ').split(' - ')\n if len(split) == 2:\n n_brs = split[0].replace('br', '')\n size = split[1].replace('ft2', '')\n elif 'br' in split[0]: # in case 'size' field is missing\n n_brs = split[0].replace('br', '')\n size = np.nan\n elif 'ft2' in split[0]: # in case 'br' field is missing\n size = split[0].replace('ft2', '')\n n_brs = np.nan\n else:\n size = np.nan\n n_brs = np.nan\n sqft.append(float(size))\n bedrooms.append(float(n_brs))\n return sqft, bedrooms\n\n# Time posted\ndef find_times(results):\n times = []\n for rw in results:\n time = rw.findAll(attrs={'class': 'pl'})[0].time['datetime']\n if time is not None:\n time# = time\n else:\n time = np.nan\n times.append(time)\n return pd.to_datetime(times)\n\n# Apply functions to data to extract useful information\nprices_all = find_prices(houses_all)\nneighborhoods_all = find_neighborhood(houses_all) \nsqft_all, bedrooms_all = find_size_and_brs(houses_all)\ntimes_all = find_times(houses_all)\n\n# Check\nprint(len(prices_all))\n#print(len(neighborhoods_all))\n#print(len(sqft_all))\n#print(len(bedrooms_all))\n#print(len(times_all))",
"Add data to pandas database",
"# Make a dataframe to export cleaned data\ndata = np.array([sqft_all, bedrooms_all, prices_all]).T\nprint(data.shape)\n\nalldata = pd.DataFrame(data = data, columns = ['SqFeet', 'nBedrooms', 'Price'])\nalldata.head(4)\n\nalldata['DatePosted'] = times_all\nalldata['Neighborhood'] = neighborhoods_all\n\nalldata.head(4)\n\n# Check data types\nprint(alldata.dtypes)\nprint(type(alldata.DatePosted[0]))\nprint(type(alldata.SqFeet[0]))\nprint(type(alldata.nBedrooms[0]))\nprint(type(alldata.Neighborhood[0]))\nprint(type(alldata.Price[0]))\n\n# To change index to/from time field\n# alldata.set_index('DatePosted', inplace = True)\n# alldata.reset_index(inplace=True)",
"Download data to csv file",
"alldata.to_csv('./webscraping_craigslist.csv', sep=',', na_rep=np.nan, header=True, index=False)\n",
"Data for Berkeley",
"# Get houses listed in Berkeley\nprint(len(alldata[alldata['Neighborhood'] == 'berkeley']))\nalldata[alldata['Neighborhood'] == 'berkeley']\n\n# Home prices in Berkeley (or the baseline)\n\n# Choose a baseline, based on proximity to current location\n# 'berkeley', 'berkeley north / hills', 'albany / el cerrito'\nneighborhood_name = 'berkeley'\n\nprint('The average home price in %s is: $' %neighborhood_name, '{0:8,.0f}'.format(alldata.groupby('Neighborhood').mean().Price.ix[neighborhood_name]), '\\n')\nprint('The most expensive home price in %s is: $' %neighborhood_name, '{0:8,.0f}'.format(alldata.groupby('Neighborhood').max().Price.ix[neighborhood_name]), '\\n')\nprint('The least expensive home price in %s is: $' %neighborhood_name, '{0:9,.0f}'.format(alldata.groupby('Neighborhood').min().Price.ix[neighborhood_name]), '\\n')",
"Scatter plots",
"# Plot house prices in the East Bay\n\ndef scatterplot(X, Y, labels, xmax): # =X.max()): # labels=[]\n \n # Set up the figure\n fig = plt.figure(figsize=(15,8)) # width, height\n\n fntsz=20\n titlefntsz=25\n lablsz=20\n mrkrsz=8\n matplotlib.rc('xtick', labelsize = lablsz); matplotlib.rc('ytick', labelsize = lablsz)\n\n # Plot a scatter plot\n ax = fig.add_subplot(111) # row column position \n ax.plot(X,Y,'bo')\n\n # Grid\n ax.grid(b = True, which='major', axis='y') # which='major','both'; options/kwargs: color='r', linestyle='-', linewidth=2)\n\n # Format x axis\n #ax.set_xticks(range(0,len(X))); \n ax.set_xlabel(labels[0], fontsize = titlefntsz)\n #ax.set_xticklabels(X.index, rotation='vertical') # 90, 45, 'vertical'\n ax.set_xlim(0,xmax)\n\n # Format y axis\n #minor_yticks = np.arange(0, 1600000, 100000)\n #ax.set_yticks(minor_yticks, minor = True) \n ax.set_ylabel(labels[1], fontsize = titlefntsz)\n \n # Set Title\n ax.set_title('$\\mathrm{Average \\; Home \\; Prices \\; in \\; the \\; East \\; Bay \\; (Source: Craigslist)}$', fontsize = titlefntsz)\n #fig.suptitle('Home Prices in the East Bay (Source: Craigslist)')\n \n # Save figure\n #plt.savefig(\"home_prices.pdf\",bbox_inches='tight')\n\n # Return plot object\n return fig, ax\n\n\nX = alldata.SqFeet\nY = alldata.Price/1000 # in 1000's of Dollars\nlabels = ['$\\mathrm{Square \\; Feet}$', '$\\mathrm{Price \\; (in \\; 1000\\'s \\; of \\; Dollars)}$']\nax = scatterplot(X,Y,labels,20000)\n\nX = alldata.nBedrooms\nY = alldata.Price/1000 # in 1000's of Dollars\nlabels = ['$\\mathrm{Number \\; of \\; Bedrooms}$', '$\\mathrm{Price \\; (in \\; 1000\\'s \\; of \\; Dollars)}$']\nax = scatterplot(X,Y,labels,X.max())",
"Price",
"# How many houses for sale are under $700k?\nprice_baseline = 700000\nprint(alldata[(alldata.Price < price_baseline)].count())\n\n# Return entries for houses under $700k\n# alldata[(alldata.Price < price_baseline)]\n# In which neighborhoods are these houses located?\nset(alldata[(alldata.Price < price_baseline)].Neighborhood)\n\n\n# Would automate this later, just do \"quick and dirty\" solution for now, to take a fast look\n# Neighborhoods to plot\nneighborhoodsplt = ['El Dorado Hills',\n 'richmond / point / annex',\n 'hercules, pinole, san pablo, el sob',\n 'albany / el cerrito',\n 'oakland downtown',\n 'san leandro',\n 'pittsburg / antioch',\n 'fremont / union city / newark',\n 'walnut creek',\n 'brentwood / oakley',\n 'oakland west',\n 'vallejo / benicia',\n 'berkeley north / hills',\n 'oakland north / temescal',\n 'oakland hills / mills',\n 'berkeley',\n 'oakland lake merritt / grand',\n 'sacramento',\n 'Oakland',\n 'concord / pleasant hill / martinez',\n 'alameda',\n 'dublin / pleasanton / livermore',\n 'hayward / castro valley',\n 'Tracy, CA',\n 'Oakland Berkeley San Francisco',\n 'danville / san ramon',\n 'oakland rockridge / claremont',\n 'Eastmont',\n 'Stockton',\n 'Folsom',\n 'Tracy',\n 'Brentwood',\n 'Twain Harte, CA',\n 'oakland east',\n 'fairfield / vacaville',\n 'Pinole, Hercules, Richmond, San Francisc']\n\n#neighborhoodsplt = set(alldata[(alldata.Price < price_baseline)].Neighborhood.sort_values(ascending=True, inplace=True))",
"Group results by neighborhood and plot",
"by_neighborhood = alldata.groupby('Neighborhood').Price.mean()\n#by_neighborhood\n\n#alldata.groupby('Neighborhood').Price.mean().ix[neighborhoodsplt]\n\n# Home prices in the East Bay\n\n# Group the results by neighborhood, and then take the average home price in each neighborhood\nby_neighborhood = alldata.groupby('Neighborhood').Price.mean().ix[neighborhoodsplt]\nby_neighborhood_sort_price = by_neighborhood.sort_values(ascending = True) # uncomment\nby_neighborhood_sort_price.index # a list of the neighborhoods sorted by price\n\n# Plot average home price for each neighborhood in the East Bay\nfig = plt.figure()\nfig.set_figheight(8.0)\nfig.set_figwidth(13.0)\n\nfntsz=20\ntitlefntsz=25\nlablsz=20\nmrkrsz=8\n\nmatplotlib.rc('xtick', labelsize = lablsz); matplotlib.rc('ytick', labelsize = lablsz)\n\nax = fig.add_subplot(111) # row column position \n\n# Plot a bar chart\nax.bar(range(len(by_neighborhood_sort_price.index)), by_neighborhood_sort_price, align='center')\n\n# Add a horizontal line for Berkeley's average home price, corresponds with Berkeley bar\nax.axhline(y=by_neighborhood.ix['berkeley'], linestyle='--')\n\n# Add a grid\nax.grid(b = True, which='major', axis='y') # which='major','both'; options/kwargs: color='r', linestyle='-', linewidth=2)\n\n# Format x axis\nax.set_xticks(range(0,len(by_neighborhood))); \nax.set_xticklabels(by_neighborhood_sort_price.index, rotation='vertical') # 90, 45, 'vertical'\nax.set_xlim(-1, len(by_neighborhood_sort_price.index))\n\n# Format y axis\nax.set_ylabel('$\\mathrm{Price \\; (Dollars)}$', fontsize = titlefntsz) # in Hundreds of Thousands of Dollars\n\n# Set figure title\nax.set_title('$\\mathrm{Average \\; Home \\; Prices \\; in \\; the \\; East \\; Bay \\; (Source: Craigslist)}$', fontsize = titlefntsz)\n\n# Save figure\n#plt.savefig(\"home_prices.pdf\",bbox_inches='tight')\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kaushik94/tardis | docs/research/code_comparison/plasma_compare/plasma_compare.ipynb | bsd-3-clause | [
"Plasma comparison",
"from tardis.simulation import Simulation\nfrom tardis.io.config_reader import Configuration\nfrom IPython.display import FileLinks",
"The example tardis_example can be downloaded here\ntardis_example.yml",
"config = Configuration.from_yaml('tardis_example.yml')\nsim = Simulation.from_config(config)",
"Accessing the plasma states\nIn this example, we are accessing Si and also the unionized number density (0)",
"# All Si ionization states\nsim.plasma.ion_number_density.loc[14]\n\n# Normalizing by si number density\nsim.plasma.ion_number_density.loc[14] / sim.plasma.number_density.loc[14]\n\n# Accessing the first ionization state\n\nsim.plasma.ion_number_density.loc[14, 1]\n\nsim.plasma.update(density=[1e-13])\n\nsim.plasma.ion_number_density",
"Updating the plasma state\nIt is possible to update the plasma state with different temperatures or dilution factors (as well as different densities.). We are updating the radiative temperatures and plotting the evolution of the ionization state",
"si_ionization_state = None\nfor cur_t_rad in range(1000, 20000, 100):\n sim.plasma.update(t_rad=[cur_t_rad])\n if si_ionization_state is None:\n si_ionization_state = sim.plasma.ion_number_density.loc[14].copy()\n si_ionization_state.columns = [cur_t_rad]\n else:\n si_ionization_state[cur_t_rad] = sim.plasma.ion_number_density.loc[14].copy()\n\n%pylab inline\n\nfig = figure(0, figsize=(10, 10))\nax = fig.add_subplot(111)\nsi_ionization_state.T.iloc[:, :3].plot(ax=ax)\nxlabel('radiative Temperature [K]')\nylabel('Number density [1/cm$^3$]')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
neurodata/ndmg | tutorials/Overview.ipynb | apache-2.0 | [
"Ndmg Tutorial: Running Inside Python\nThis tutorial provides a basic overview of how to run ndmg manually within Python. <br>\nWe begin by checking for dependencies,\nthen we set our input parameters,\nthen we smiply run the pipeline.\nRunning the pipeline is quite simple: call ndmg_dwi_pipeline.ndmg_dwi_worker with the correct arguments. <br>\nNote that, although you can run the pipeline in Python, the absolute easiest way (outside Gigantum) is to run the pipeline from the command line once all dependencies are installed using the following command: <br>\nndmg_bids </absolute/input/dir> </absolute/output/dir>. <br>\nThis will run a single session from the input directory, and output the results into your output directory.\nBut for now, let's look at running in Python -- <br>\nLet's begin!",
"import os\nimport os.path as op\nimport glob\nimport shutil\nimport warnings\nimport subprocess\nfrom pathlib import Path\n\nfrom ndmg.scripts import ndmg_dwi_pipeline\nfrom ndmg.scripts.ndmg_bids import get_atlas\nfrom ndmg.utils import cloud_utils",
"Check for dependencies, Set Directories\nThe below code is a simple check that makes sure AFNI and FSL are installed. <br>\nWe also set the input, data, and atlas paths.\nMake sure that AFNI and FSL are installed",
"# FSL\ntry:\n print(f\"Your fsl directory is located here: {os.environ['FSLDIR']}\")\nexcept KeyError:\n raise AssertionError(\"You do not have FSL installed! See installation instructions here: https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FslInstallation\")\n \n# AFNI\ntry:\n print(f\"Your AFNI directory is located here: {subprocess.check_output('which afni', shell=True, universal_newlines=True)}\")\nexcept subprocess.CalledProcessError:\n raise AssertionError(\"You do not have AFNI installed! See installation instructions here: https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/background_install/main_toc.html\")",
"Set Input, Output, and Atlas Locations\nHere, you set:\n1. the input_dir - this is where your input data lives.\n2. the out_dir - this is where your output data will go.",
"# get atlases\nndmg_dir = Path.home() / \".ndmg\"\natlas_dir = ndmg_dir / \"ndmg_atlases\"\nget_atlas(str(atlas_dir), \"2mm\")\n\n# These\ninput_dir = ndmg_dir / \"input\"\nout_dir = ndmg_dir / \"output\"\n\nprint(f\"Your input and output directory will be : {input_dir} and {out_dir}\")\n\nassert op.exists(input_dir), f\"You must have an input directory with data. Your input directory is located here: {input_dir}\"",
"Choose input parameters\nNaming Conventions\nHere, we define input variables to the pipeline.\nTo run the ndmg pipeline, you need four files:\n1. a t1w - this is a high-resolution anatomical image.\n2. a dwi - the diffusion image.\n3. bvecs - this is a text file that defines the gradient vectors created by a DWI scan.\n4. bvals - this is a text file that defines magnitudes for the gradient vectors created by a DWI scan.\nThe naming convention is in the BIDs spec.",
"# Specify base directory and paths to input files (dwi, bvecs, bvals, and t1w required)\nsubject_id = 'sub-0025864'\n\n# Define the location of our input files.\nt1w = str(input_dir / f\"{subject_id}/ses-1/anat/{subject_id}_ses-1_T1w.nii.gz\")\ndwi = str(input_dir / f\"{subject_id}/ses-1/dwi/{subject_id}_ses-1_dwi.nii.gz\")\nbvecs = str(input_dir / f\"{subject_id}/ses-1/dwi/{subject_id}_ses-1_dwi.bvec\")\nbvals = str(input_dir / f\"{subject_id}/ses-1/dwi/{subject_id}_ses-1_dwi.bval\")\n\nprint(f\"Your anatomical image location: {t1w}\")\nprint(f\"Your dwi image location: {dwi}\")\nprint(f\"Your bvector location: {bvecs}\")\nprint(f\"Your bvalue location: {bvals}\")",
"Parameter Choices and Output Directory\nHere, we choose the parameters to run the pipeline with.\nIf you are inexperienced with diffusion MRI theory, feel free to just use the default parameters.\n\natlases = ['desikan', 'CPAC200', 'DKT', 'HarvardOxfordcort', 'HarvardOxfordsub', 'JHU', 'Schaefer2018-200', 'Talairach', 'aal', 'brodmann', 'glasser', 'yeo-7-liberal', 'yeo-17-liberal'] : The atlas that defines the node location of the graph you create.\nmod_types = ['det', 'prob'] : Deterministic or probablistic tractography.\ntrack_types = ['local', 'particle'] : Local or particle tracking.\nmods = ['csa', 'csd'] : Constant Solid Angle or Constrained Spherical Deconvolution.\nregs = ['native', 'native_dsn', 'mni'] : Registration style. If native, do all registration in each scan's space; if mni, register scans to the MNI atlas; if native_dsn, do registration in native space, and then fit the streamlines to MNI space.\nvox_size = ['1mm', '2mm'] : Whether our voxels are 1mm or 2mm.\nseeds = int : Seeding density for tractography. More seeds generally results in a better graph, but at a much higher computational cost.",
"# Use the default parameters.\natlas = 'desikan'\nmod_type = 'prob'\ntrack_type = 'local'\nmod_func = 'csd'\nreg_style = 'native'\nvox_size = '2mm'\nseeds = 1\n",
"Get masks and labels\nThe pipeline needs these two variables as input. <br>\nRunning the pipeline via ndmg_bids does this for you.",
"# Auto-set paths to neuroparc files\nmask = str(atlas_dir / \"atlases/mask/MNI152NLin6_res-2x2x2_T1w_descr-brainmask.nii.gz\")\nlabels = [str(i) for i in (atlas_dir / \"atlases/label/Human/\").glob(f\"*{atlas}*2x2x2.nii.gz\")]\n\nprint(f\"mask location : {mask}\")\nprint(f\"atlas location : {labels}\")",
"Run the pipeline!",
"ndmg_dwi_pipeline.ndmg_dwi_worker(dwi=dwi, bvals=bvals, bvecs=bvecs, t1w=t1w, atlas=atlas, mask=mask, labels=labels, outdir=str(out_dir), vox_size=vox_size, mod_type=mod_type, track_type=track_type, mod_func=mod_func, seeds=seeds, reg_style=reg_style, clean=False, skipeddy=True, skipreg=True)",
"Try It Yourself : Command Line\nndmg runs best as a standalone program on the command line.\nThe simplest form of the command, given that you have input data, pass an output folder, and have all dependencies installed, is the following:\nndmg_bids </absolute/input/dir> </absolute/output/dir>\n\nHere, we'll show you how to set this up yourself.\nSetup: Running Locally\n\ninstall FSL\ninstall AFNI\ngit clone https://github.com/neurodata/ndmg.git\ncd ndmg\npip install -r requirements.txt\npip install .\n\nRunning Locally\nMost Basic\nThis will run the first session from your input dataset, and put the results into the output dataset.\nWe still recommend the --atlas flag so that graphs don't get generated on all possible atlases.\nndmg_bids --atlas desikan </absolute/input/dir> </absolute/output/dir>\n\nSpecifying Participant and Session\nYou can also specify a particular participant and session.\n(This is extremely useful for setting up batch scripts to run large datasets).\nndmg_bids --atlas desikan --participant_label <label> --session_label <number> </absolute/input/dir> </absolute/output/dir>\n\nDifferent Registration Styles, Diffusion Models, Tractography Styles\nYou can use:\n- the --sp flag to set the registration space;\n- the --mf flag to set the diffusion model; and\n- the --mod flag to set deterministic / probablistic tracking;\nndmg_bids --atlas desikan --sp <space> --mf <model> --mod <tracking style> </absolute/input/dir> </absolute/output/dir>\n\nSetup: Running in Docker\nIf you're having problems installing the program locally, it's often easier to use Docker.\n\ninstall docker\ndocker pull neurodata/ndmg_dev:latest\n\nRunning in Docker\nOption A (Docker executable approach):\nOnce you've downloaded the docker image, you can:\nAttach your local input and output folders with -v, <br>\nRun the image, <br>\nand input your participant and session labels into the container.\ndocker run -ti --rm --privileged -e DISPLAY=$DISPLAY -v <absolute/path/to/input/data>:/input -v <absolute/path/to/output/data>:/outputs neurodata/ndmg_dev:latest --participant_label <label> --session_label <number> --atlas desikan /input /output\n\nOption B (Inside Docker container):\nYou can also enter the container yourself and then run ndmg from inside the container.\ndocker run -ti --rm --privileged --entrypoint /bin/bash -e DISPLAY=$DISPLAY -v <absolute/path/to/input/data>/input -v <absolute/path/to/output/data>/output ndmg_dev:latest\n\nndmg_bids --participant_label <label> --session_label <number> --atlas desikan /input /output"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Ccaccia73/semimonocoque | 01_SemiMonoCoque.ipynb | mit | [
"Semi-Monocoque Theory",
"from pint import UnitRegistry\nimport sympy\nimport networkx as nx\n#import numpy as np\nimport matplotlib.pyplot as plt\n#import sys\n%matplotlib inline\n#from IPython.display import display",
"Import Section class, which contains all calculations",
"from Section import Section",
"Initialization of sympy symbolic tool and pint for dimension analysis (not really implemented rn as not directly compatible with sympy)",
"ureg = UnitRegistry()\nsympy.init_printing()",
"Define sympy parameters used for geometric description of sections",
"A, A0, t, t0, a, b, h, L = sympy.symbols('A A_0 t t_0 a b h L', positive=True)",
"We also define numerical values for each symbol in order to plot scaled section and perform calculations",
"values = [(A, 150 * ureg.millimeter**2),(A0, 250 * ureg.millimeter**2),(a, 80 * ureg.millimeter), \\\n (b, 20 * ureg.millimeter),(h, 35 * ureg.millimeter),(L, 2000 * ureg.millimeter)]\ndatav = [(v[0],v[1].magnitude) for v in values]",
"First example: Closed section\nDefine graph describing the section:\n1) stringers are nodes with parameters:\n- x coordinate\n- y coordinate\n- Area\n2) panels are oriented edges with parameters:\n- thickness\n- lenght which is automatically calculated",
"stringers = {1:[(sympy.Integer(0),h),A],\n 2:[(a/2,h),A],\n 3:[(a,h),A],\n 4:[(a-b,sympy.Integer(0)),A],\n 5:[(b,sympy.Integer(0)),A]}\n\npanels = {(1,2):t,\n (2,3):t,\n (3,4):t,\n (4,5):t,\n (5,1):t}",
"Define section and perform first calculations",
"S1 = Section(stringers, panels)",
"Verify that we find a simply closed section",
"S1.cycles",
"Plot of S1 section in original reference frame\nDefine a dictionary of coordinates used by Networkx to plot section as a Directed graph.\nNote that arrows are actually just thicker stubs",
"start_pos={ii: [float(S1.g.node[ii]['ip'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() }\n\nplt.figure(figsize=(12,8),dpi=300)\nnx.draw(S1.g,with_labels=True, arrows= True, pos=start_pos)\nplt.arrow(0,0,20,0)\nplt.arrow(0,0,0,20)\n#plt.text(0,0, 'CG', fontsize=24)\nplt.axis('equal')\nplt.title(\"Section in starting reference Frame\",fontsize=16);",
"Expression of Inertial properties wrt Center of Gravity in with original rotation",
"S1.Ixx0, S1.Iyy0, S1.Ixy0, S1.α0",
"Plot of S1 section in inertial reference Frame\nSection is plotted wrt center of gravity and rotated (if necessary) so that x and y are principal axes.\nCenter of Gravity and Shear Center are drawn",
"positions={ii: [float(S1.g.node[ii]['pos'][i].subs(datav)) for i in range(2)] for ii in S1.g.nodes() }\n\nx_ct, y_ct = S1.ct.subs(datav)\n\nplt.figure(figsize=(12,8),dpi=300)\nnx.draw(S1.g,with_labels=True, pos=positions)\nplt.plot([0],[0],'o',ms=12,label='CG')\nplt.plot([x_ct],[y_ct],'^',ms=12, label='SC')\n#plt.text(0,0, 'CG', fontsize=24)\n#plt.text(x_ct,y_ct, 'SC', fontsize=24)\nplt.legend(loc='lower right', shadow=True)\nplt.axis('equal')\nplt.title(\"Section in pricipal reference Frame\",fontsize=16);",
"Expression of inertial properties in principal reference frame",
"S1.Ixx, S1.Iyy, S1.Ixy, S1.θ",
"Shear center expression",
"S1.ct",
"Analisys of symmetry properties of the section\nFor x and y axes pair of symmetric nodes and edges are searched for",
"S1.symmetry",
"Compute axial loads in Stringers in S1\nWe first define some symbols:",
"Tx, Ty, Nz, Mx, My, Mz, F, ry, ry, mz = sympy.symbols('T_x T_y N_z M_x M_y M_z F r_y r_x m_z')",
"Set loads on the section:\nExample 1: shear in y direction and bending moment in x direction",
"S1.set_loads(_Tx=0, _Ty=Ty, _Nz=0, _Mx=Mx, _My=0, _Mz=0)",
"Compute axial loads in stringers and shear flows in panels",
"S1.compute_stringer_actions()\nS1.compute_panel_fluxes();",
"Axial loads",
"S1.N",
"Shear flows",
"S1.q",
"Example 2: twisting moment in z direction",
"S1.set_loads(_Tx=0, _Ty=0, _Nz=0, _Mx=0, _My=0, _Mz=Mz)\nS1.compute_stringer_actions()\nS1.compute_panel_fluxes();",
"Axial loads",
"S1.N",
"Panel fluxes",
"S1.q",
"Set loads on the section:\nExample 3: shear in x direction and bending moment in y direction",
"S1.set_loads(_Tx=Tx, _Ty=0, _Nz=0, _Mx=0, _My=My, _Mz=0)\nS1.compute_stringer_actions()\nS1.compute_panel_fluxes();",
"Axial loads",
"S1.N",
"Panel fluxes\nNot really an easy expression",
"S1.q",
"Compute Jt\nComputation of torsional moment of inertia:",
"S1.compute_Jt()\nS1.Jt",
"Second example: Open section",
"stringers = {1:[(sympy.Integer(0),h),A],\n 2:[(sympy.Integer(0),sympy.Integer(0)),A],\n 3:[(a,sympy.Integer(0)),A],\n 4:[(a,h),A]}\n\npanels = {(1,2):t,\n (2,3):t,\n (3,4):t}",
"Define section and perform first calculations",
"S2 = Section(stringers, panels)",
"Verify that the section is open",
"S2.cycles",
"Plot of S2 section in original reference frame\nDefine a dictionary of coordinates used by Networkx to plot section as a Directed graph.\nNote that arrows are actually just thicker stubs",
"start_pos={ii: [float(S2.g.node[ii]['ip'][i].subs(datav)) for i in range(2)] for ii in S2.g.nodes() }\n\nplt.figure(figsize=(12,8),dpi=300)\nnx.draw(S2.g,with_labels=True, arrows= True, pos=start_pos)\nplt.arrow(0,0,20,0)\nplt.arrow(0,0,0,20)\n#plt.text(0,0, 'CG', fontsize=24)\nplt.axis('equal')\nplt.title(\"Section in starting reference Frame\",fontsize=16);",
"Expression of Inertial properties wrt Center of Gravity in with original rotation",
"S2.Ixx0, S2.Iyy0, S2.Ixy0, S2.α0",
"Plot of S2 section in inertial reference Frame\nSection is plotted wrt center of gravity and rotated (if necessary) so that x and y are principal axes.\nCenter of Gravity and Shear Center are drawn",
"positions={ii: [float(S2.g.node[ii]['pos'][i].subs(datav)) for i in range(2)] for ii in S2.g.nodes() }\n\nx_ct, y_ct = S2.ct.subs(datav)\n\nplt.figure(figsize=(12,8),dpi=300)\nnx.draw(S2.g,with_labels=True, pos=positions)\nplt.plot([0],[0],'o',ms=12,label='CG')\nplt.plot([x_ct],[y_ct],'^',ms=12, label='SC')\n#plt.text(0,0, 'CG', fontsize=24)\n#plt.text(x_ct,y_ct, 'SC', fontsize=24)\nplt.legend(loc='lower right', shadow=True)\nplt.axis('equal')\nplt.title(\"Section in pricipal reference Frame\",fontsize=16);",
"Expression of inertial properties in principal reference frame",
"S2.Ixx, S2.Iyy, S2.Ixy, S2.θ",
"Shear center expression",
"S2.ct",
"Analisys of symmetry properties of the section\nFor x and y axes pair of symmetric nodes and edges are searched for",
"S2.symmetry",
"Compute axial loads in Stringers in S2\nSet loads on the section:\nExample 2: shear in y direction and bending moment in x direction",
"S2.set_loads(_Tx=0, _Ty=Ty, _Nz=0, _Mx=Mx, _My=0, _Mz=0)",
"Compute axial loads in stringers and shear flows in panels",
"S2.compute_stringer_actions()\nS2.compute_panel_fluxes();",
"Axial loads",
"S2.N",
"Shear flows",
"S2.q",
"Set loads on the section:\nExample 2: shear in x direction and bending moment in y direction",
"S2.set_loads(_Tx=Tx, _Ty=0, _Nz=0, _Mx=0, _My=My, _Mz=0)\nS2.compute_stringer_actions()\nS2.compute_panel_fluxes();\n\nS2.N\n\nS2.q",
"Second example (2): Open section",
"stringers = {1:[(a,h),A],\n 2:[(sympy.Integer(0),h),A],\n 3:[(sympy.Integer(0),sympy.Integer(0)),A],\n 4:[(a,sympy.Integer(0)),A]}\n\npanels = {(1,2):t,\n (2,3):t,\n (3,4):t}",
"Define section and perform first calculations",
"S2_2 = Section(stringers, panels)",
"Plot of S2 section in original reference frame\nDefine a dictionary of coordinates used by Networkx to plot section as a Directed graph.\nNote that arrows are actually just thicker stubs",
"start_pos={ii: [float(S2_2.g.node[ii]['ip'][i].subs(datav)) for i in range(2)] for ii in S2_2.g.nodes() }\n\nplt.figure(figsize=(12,8),dpi=300)\nnx.draw(S2_2.g,with_labels=True, arrows= True, pos=start_pos)\nplt.arrow(0,0,20,0)\nplt.arrow(0,0,0,20)\n#plt.text(0,0, 'CG', fontsize=24)\nplt.axis('equal')\nplt.title(\"Section in starting reference Frame\",fontsize=16);",
"Expression of Inertial properties wrt Center of Gravity in with original rotation",
"S2_2.Ixx0, S2_2.Iyy0, S2_2.Ixy0, S2_2.α0",
"Plot of S2 section in inertial reference Frame\nSection is plotted wrt center of gravity and rotated (if necessary) so that x and y are principal axes.\nCenter of Gravity and Shear Center are drawn",
"positions={ii: [float(S2_2.g.node[ii]['pos'][i].subs(datav)) for i in range(2)] for ii in S2_2.g.nodes() }\n\nx_ct, y_ct = S2_2.ct.subs(datav)\n\nplt.figure(figsize=(12,8),dpi=300)\nnx.draw(S2_2.g,with_labels=True, pos=positions)\nplt.plot([0],[0],'o',ms=12,label='CG')\nplt.plot([x_ct],[y_ct],'^',ms=12, label='SC')\n#plt.text(0,0, 'CG', fontsize=24)\n#plt.text(x_ct,y_ct, 'SC', fontsize=24)\nplt.legend(loc='lower right', shadow=True)\nplt.axis('equal')\nplt.title(\"Section in pricipal reference Frame\",fontsize=16);",
"Expression of inertial properties in principal reference frame",
"S2_2.Ixx, S2_2.Iyy, S2_2.Ixy, S2_2.θ",
"Shear center expression",
"S2_2.ct"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/neural-structured-learning | workshops/kdd_2020/adversarial_regularization_mnist.ipynb | apache-2.0 | [
"Copyright 2020 Google LLC",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Adversarial Regularization for Image Classification\nThe core idea of adversarial learning is to train a model with\nadversarially-perturbed data (called adversarial examples) in addition to the\norganic training data. The adversarial examples are constructed to intentionally\nmislead the model into making wrong predictions or classifications. By training\nwith such examples, the model learns to be robust against adversarial\nperturbation when making predictions.\nIn this tutorial, we illustrate the following procedure of applying adversarial\nlearning to obtain robust models using the Neural Structured Learning framework:\n\nCreate a neural network as a base model. In this tutorial, the base model is\n created with the tf.keras functional API; this procedure is compatible\n with models created by tf.keras sequential and subclassing APIs as well.\nWrap the base model with the AdversarialRegularization wrapper class,\n which is provided by the NSL framework, to create a new tf.keras.Model\n instance. This new model will include the adversarial loss as a\n regularization term in its training objective.\nConvert examples in the training data to feature dictionaries.\nTrain and evaluate the new model.\n\nBoth the base and the new model will be evaluated against natural and\nadversarial inputs.\nSetup\nInstall the Neural Structured Learning package.",
"!pip install --quiet neural-structured-learning\n\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport tensorflow_datasets as tfds\nimport numpy as np\nimport neural_structured_learning as nsl",
"Hyperparameters\nWe collect and explain the hyperparameters (in an HParams object) for model\ntraining and evaluation.\nInput/Output:\n\ninput_shape: The shape of the input tensor. Each image is 28-by-28\npixels with 1 channel.\nnum_classes: There are a total of 10 classes, corresponding to 10\ndigits [0-9].\n\nModel architecture:\n\nconv_filters: A list of numbers, each specifying the number of\nfilters in a convolutional layer.\nkernel_size: The size of 2D convolution window, shared by all\nconvolutional layers.\npool_size: Factors to downscale the image in each max-pooling layer.\nnum_fc_units: The number of units (i.e., width) of each\nfully-connected layer.\n\nTraining and evaluation:\n\nbatch_size: Batch size used for training and evaluation.\nepochs: The number of training epochs.\n\nAdversarial learning:\n\nadv_multiplier: The weight of adversarial loss in the training\n objective, relative to the labeled loss.\nadv_step_size: The magnitude of adversarial perturbation.\nadv_grad_norm: The norm to measure the magnitude of adversarial\n perturbation.\npgd_iterations: The number of iterative steps to take when using PGD.\npgd_epsilon: The bounds of the perturbation. PGD will project back to\n this epsilon ball when generating the adversary.\nclip_value_min: Clips the final adversary to be at least as large as\n this value. This keeps the perturbed pixel values in a valid domain.\nclip_value_max: Clips the final adversary to be no larger than this\n value. This also keeps the perturbed pixel values in a valid domain.",
"class HParams(object):\n def __init__(self):\n self.input_shape = [28, 28, 1]\n self.num_classes = 10\n self.conv_filters = [32, 64, 64]\n self.kernel_size = (3, 3)\n self.pool_size = (2, 2)\n self.num_fc_units = [64]\n self.batch_size = 32\n self.epochs = 5\n self.adv_multiplier = 0.2\n self.adv_step_size = 0.01\n self.adv_grad_norm = 'infinity'\n self.pgd_iterations = 40\n self.pgd_epsilon = 0.2\n self.clip_value_min = 0.0\n self.clip_value_max = 1.0\n\nHPARAMS = HParams()",
"MNIST dataset\nThe MNIST dataset contains grayscale\nimages of handwritten digits (from '0' to '9'). Each image showes one digit at\nlow resolution (28-by-28 pixels). The task involved is to classify images into\n10 categories, one per digit.\nHere we load the MNIST dataset from\nTensorFlow Datasets. It handles\ndownloading the data and constructing a tf.data.Dataset. The loaded dataset\nhas two subsets:\n\ntrain with 60,000 examples, and\ntest with 10,000 examples.\n\nExamples in both subsets are stored in feature dictionaries with the following\ntwo keys:\n\nimage: Array of pixel values, ranging from 0 to 255.\nlabel: Groundtruth label, ranging from 0 to 9.",
"datasets = tfds.load('mnist')\n\ntrain_dataset = datasets['train']\ntest_dataset = datasets['test']\n\nIMAGE_INPUT_NAME = 'image'\nLABEL_INPUT_NAME = 'label'",
"To make the model numerically stable, we normalize the pixel values to [0, 1]\nby mapping the dataset over the normalize function. After shuffling training\nset and batching, we convert the examples to feature tuples (image, label)\nfor training the base model. We also provide a function to convert from tuples\nto dictionaries for later use.",
"def normalize(features):\n features[IMAGE_INPUT_NAME] = tf.cast(\n features[IMAGE_INPUT_NAME], dtype=tf.float32) / 255.0\n return features\n\ndef convert_to_tuples(features):\n return features[IMAGE_INPUT_NAME], features[LABEL_INPUT_NAME]\n\ndef convert_to_dictionaries(image, label):\n return {IMAGE_INPUT_NAME: image, LABEL_INPUT_NAME: label}\n\ntrain_dataset = train_dataset.map(normalize).shuffle(10000).batch(HPARAMS.batch_size).map(convert_to_tuples)\ntest_dataset = test_dataset.map(normalize).batch(HPARAMS.batch_size).map(convert_to_tuples)",
"Base model\nOur base model will be a neural network consisting of 3 convolutional layers\nfollwed by 2 fully-connected layers (as defined in HPARAMS). Here we define\nit using the Keras functional API. Feel free to try other APIs or model\narchitectures.",
"def build_base_model(hparams):\n \"\"\"Builds a model according to the architecture defined in `hparams`.\"\"\"\n inputs = tf.keras.Input(\n shape=hparams.input_shape, dtype=tf.float32, name=IMAGE_INPUT_NAME)\n\n x = inputs\n for i, num_filters in enumerate(hparams.conv_filters):\n x = tf.keras.layers.Conv2D(\n num_filters, hparams.kernel_size, activation='relu')(\n x)\n if i < len(hparams.conv_filters) - 1:\n # max pooling between convolutional layers\n x = tf.keras.layers.MaxPooling2D(hparams.pool_size)(x)\n x = tf.keras.layers.Flatten()(x)\n for num_units in hparams.num_fc_units:\n x = tf.keras.layers.Dense(num_units, activation='relu')(x)\n pred = tf.keras.layers.Dense(hparams.num_classes, activation=None)(x)\n # pred = tf.keras.layers.Dense(hparams.num_classes, activation='softmax')(x)\n model = tf.keras.Model(inputs=inputs, outputs=pred)\n return model\n\nbase_model = build_base_model(HPARAMS)\nbase_model.summary()",
"Next we train and evaluate the base model.",
"base_model.compile(optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=True),\n metrics=['acc'])\nbase_model.fit(train_dataset, epochs=HPARAMS.epochs)\n\nresults = base_model.evaluate(test_dataset)\nnamed_results = dict(zip(base_model.metrics_names, results))\nprint('\\naccuracy:', named_results['acc'])",
"Adversarial-regularized model\nHere we show how to incorporate adversarial training into a Keras model with a\nfew lines of code, using the NSL framework. The base model is wrapped to create\na new tf.Keras.Model, whose training objective includes adversarial\nregularization.\nWe will train one using the FGSM adversary and one using a stronger PGD\nadversary.\nFirst, we create config objects with relevant hyperparameters.",
"fgsm_adv_config = nsl.configs.make_adv_reg_config(\n multiplier=HPARAMS.adv_multiplier,\n # With FGSM, we want to take a single step equal to the epsilon ball size,\n # to get the largest allowable perturbation.\n adv_step_size=HPARAMS.pgd_epsilon,\n adv_grad_norm=HPARAMS.adv_grad_norm,\n clip_value_min=HPARAMS.clip_value_min,\n clip_value_max=HPARAMS.clip_value_max\n)\n\npgd_adv_config = nsl.configs.make_adv_reg_config(\n multiplier=HPARAMS.adv_multiplier,\n adv_step_size=HPARAMS.adv_step_size,\n adv_grad_norm=HPARAMS.adv_grad_norm,\n pgd_iterations=HPARAMS.pgd_iterations,\n pgd_epsilon=HPARAMS.pgd_epsilon,\n clip_value_min=HPARAMS.clip_value_min,\n clip_value_max=HPARAMS.clip_value_max\n)",
"Now we can wrap a base model with AdversarialRegularization. Here we create \nnew base models (base_fgsm_model, base_pgd_model), so that the existing one\n(base_model) can be used in later comparison.\nThe returned adv_model is a tf.keras.Model object, whose training objective\nincludes a regularization term for the adversarial loss. To compute that loss,\nthe model has to have access to the label information (feature label), in\naddition to regular input (feature image). For this reason, we convert the\nexamples in the datasets from tuples back to dictionaries. And we tell the\nmodel which feature contains the label information via the label_keys\nparameter.\nWe will create two adversarially regularized models: fgsm_adv_model\n(regularized with FGSM) and pgd_adv_model (regularized with PGD).",
"# Create model for FGSM.\nbase_fgsm_model = build_base_model(HPARAMS)\n# Create FGSM-regularized model.\nfgsm_adv_model = nsl.keras.AdversarialRegularization(\n base_fgsm_model,\n label_keys=[LABEL_INPUT_NAME],\n adv_config=fgsm_adv_config\n)\n\n# Create model for PGD.\nbase_pgd_model = build_base_model(HPARAMS)\n# Create PGD-regularized model.\npgd_adv_model = nsl.keras.AdversarialRegularization(\n base_pgd_model,\n label_keys=[LABEL_INPUT_NAME],\n adv_config=pgd_adv_config\n)\n\n# Data for training.\ntrain_set_for_adv_model = train_dataset.map(convert_to_dictionaries)\ntest_set_for_adv_model = test_dataset.map(convert_to_dictionaries)",
"Next we compile, train, and evaluate the\nadversarial-regularized model. There might be warnings like\n\"Output missing from loss dictionary,\" which is fine because\nthe adv_model doesn't rely on the base implementation to\ncalculate the total loss.",
"fgsm_adv_model.compile(optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=True),\n metrics=['acc'])\nfgsm_adv_model.fit(train_set_for_adv_model, epochs=HPARAMS.epochs)\n\nresults = fgsm_adv_model.evaluate(test_set_for_adv_model)\nnamed_results = dict(zip(fgsm_adv_model.metrics_names, results))\nprint('\\naccuracy:', named_results['sparse_categorical_accuracy'])\n\npgd_adv_model.compile(optimizer='adam',\n loss=tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=True),\n metrics=['acc'])\npgd_adv_model.fit(train_set_for_adv_model, epochs=HPARAMS.epochs)\n\nresults = pgd_adv_model.evaluate(test_set_for_adv_model)\nnamed_results = dict(zip(pgd_adv_model.metrics_names, results))\nprint('\\naccuracy:', named_results['sparse_categorical_accuracy'])",
"Both adversarially regularized models perform well on the test set.\nRobustness under Adversarial Perturbations\nNow we compare the base model and the adversarial-regularized model for\nrobustness under adversarial perturbation.\nWe will show how the base model is vulnerable to attacks from both FGSM and PGD,\nthe FGSM-regularized model can resist FGSM attacks but is vulnerable to PGD, and\nthe PGD-regularized model is able to resist both forms of attack.\nWe use gen_adv_neighbor to generate adversaries for our models.\nAttacking the Base Model",
"# Set up the neighbor config for FGSM.\nfgsm_nbr_config = nsl.configs.AdvNeighborConfig(\n adv_grad_norm=HPARAMS.adv_grad_norm,\n adv_step_size=HPARAMS.pgd_epsilon,\n clip_value_min=0.0,\n clip_value_max=1.0,\n)\n\n# The labeled loss function provides the loss for each sample we pass in. This\n# will be used to calculate the gradient.\nlabeled_loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=True,\n)\n\n\n%%time\n# Generate adversarial images using FGSM on the base model.\nperturbed_images, labels, predictions = [], [], []\n\n# We want to record the accuracy.\nmetric = tf.keras.metrics.SparseCategoricalAccuracy()\n\nfor batch in test_set_for_adv_model:\n # Record the loss calculation to get the gradient.\n with tf.GradientTape() as tape:\n tape.watch(batch)\n losses = labeled_loss_fn(batch[LABEL_INPUT_NAME],\n base_model(batch[IMAGE_INPUT_NAME]))\n \n # Generate the adversarial example.\n fgsm_images, _ = nsl.lib.adversarial_neighbor.gen_adv_neighbor(\n batch[IMAGE_INPUT_NAME],\n losses,\n fgsm_nbr_config,\n gradient_tape=tape\n )\n\n # Update our accuracy metric.\n y_true = batch['label']\n y_pred = base_model(fgsm_images)\n metric(y_true, y_pred)\n\n # Store images for later use.\n perturbed_images.append(fgsm_images)\n labels.append(y_true.numpy())\n predictions.append(tf.argmax(y_pred, axis=-1).numpy())\n\nprint('%s model accuracy: %f' % ('base', metric.result().numpy()))",
"Let's examine what some of these images look like.",
"def examine_images(perturbed_images, labels, predictions, model_key):\n batch_index = 0\n\n batch_image = perturbed_images[batch_index]\n batch_label = labels[batch_index]\n batch_pred = predictions[batch_index]\n\n batch_size = HPARAMS.batch_size\n n_col = 4\n n_row = (batch_size + n_col - 1) / n_col\n\n print('accuracy in batch %d:' % batch_index)\n print('%s model: %d / %d' %\n (model_key, np.sum(batch_label == batch_pred), batch_size))\n\n plt.figure(figsize=(15, 15))\n for i, (image, y) in enumerate(zip(batch_image, batch_label)):\n y_base = batch_pred[i]\n plt.subplot(n_row, n_col, i+1)\n plt.title('true: %d, %s: %d' % (y, model_key, y_base), color='r'\n if y != y_base else 'k')\n plt.imshow(tf.keras.preprocessing.image.array_to_img(image), cmap='gray')\n plt.axis('off')\n\n plt.show()\n\nexamine_images(perturbed_images, labels, predictions, 'base')",
"Our perturbation budget of 0.2 is quite large, but even so, the perturbed\nnumbers are clearly recognizable to the human eye. On the other hand, our\nnetwork is fooled into misclassifying several examples.\nAs we can see, the FGSM attack is already highly effective, and quick to\nexecute, heavily reducing the model accuracy. We will see below, that the PGD\nattack is even more effective, even with the same perturbation budget.",
"# Set up the neighbor config for PGD.\npgd_nbr_config = nsl.configs.AdvNeighborConfig(\n adv_grad_norm=HPARAMS.adv_grad_norm,\n adv_step_size=HPARAMS.adv_step_size,\n pgd_iterations=HPARAMS.pgd_iterations,\n pgd_epsilon=HPARAMS.pgd_epsilon,\n clip_value_min=HPARAMS.clip_value_min,\n clip_value_max=HPARAMS.clip_value_max,\n)\n\n# pgd_model_fn generates a prediction from which we calculate the loss, and the\n# gradient for a given interation.\npgd_model_fn = base_model\n\n# We need to pass in the loss function for repeated calculation of the gradient.\npgd_loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=True, \n)\nlabeled_loss_fn = pgd_loss_fn\n\n%%time\n# Generate adversarial images using PGD on the base model.\nperturbed_images, labels, predictions = [], [], []\n\n# Record the accuracy.\nmetric = tf.keras.metrics.SparseCategoricalAccuracy()\n\nfor batch in test_set_for_adv_model:\n # Gradient tape to calculate the loss on the first iteration.\n with tf.GradientTape() as tape:\n tape.watch(batch)\n losses = labeled_loss_fn(batch[LABEL_INPUT_NAME],\n base_model(batch[IMAGE_INPUT_NAME]))\n \n # Generate the adversarial examples.\n pgd_images, _ = nsl.lib.adversarial_neighbor.gen_adv_neighbor(\n batch[IMAGE_INPUT_NAME],\n losses,\n pgd_nbr_config,\n gradient_tape=tape,\n pgd_model_fn=pgd_model_fn,\n pgd_loss_fn=pgd_loss_fn,\n pgd_labels=batch[LABEL_INPUT_NAME],\n )\n\n # Update our accuracy metric.\n y_true = batch['label']\n y_pred = base_model(pgd_images)\n metric(y_true, y_pred)\n\n # Store images for visualization.\n perturbed_images.append(pgd_images)\n labels.append(y_true.numpy())\n predictions.append(tf.argmax(y_pred, axis=-1).numpy())\n\nprint('%s model accuracy: %f' % ('base', metric.result().numpy()))\n\nexamine_images(perturbed_images, labels, predictions, 'base')",
"The PGD attack is much stronger, but it also takes longer to run.\nAttacking the FGSM Regularized Model",
"# Set up the neighbor config.\nfgsm_nbr_config = nsl.configs.AdvNeighborConfig(\n adv_grad_norm=HPARAMS.adv_grad_norm,\n adv_step_size=HPARAMS.pgd_epsilon,\n clip_value_min=0.0,\n clip_value_max=1.0,\n)\n\n# The labeled loss function provides the loss for each sample we pass in. This\n# will be used to calculate the gradient.\nlabeled_loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=True,\n)\n\n%%time\n# Generate adversarial images using FGSM on the regularized model.\nperturbed_images, labels, predictions = [], [], []\n\n# Record the accuracy.\nmetric = tf.keras.metrics.SparseCategoricalAccuracy()\n\nfor batch in test_set_for_adv_model:\n # Record the loss calculation to get its gradients.\n with tf.GradientTape() as tape:\n tape.watch(batch)\n # We attack the adversarially regularized model.\n losses = labeled_loss_fn(batch[LABEL_INPUT_NAME],\n fgsm_adv_model.base_model(batch[IMAGE_INPUT_NAME]))\n \n # Generate the adversarial examples.\n fgsm_images, _ = nsl.lib.adversarial_neighbor.gen_adv_neighbor(\n batch[IMAGE_INPUT_NAME],\n losses,\n fgsm_nbr_config,\n gradient_tape=tape\n )\n\n # Update our accuracy metric.\n y_true = batch['label']\n y_pred = fgsm_adv_model.base_model(fgsm_images)\n metric(y_true, y_pred)\n\n # Store images for visualization.\n perturbed_images.append(fgsm_images)\n labels.append(y_true.numpy())\n predictions.append(tf.argmax(y_pred, axis=-1).numpy())\n\nprint('%s model accuracy: %f' % ('base', metric.result().numpy()))\n\nexamine_images(perturbed_images, labels, predictions, 'fgsm_reg')",
"As we can see, the FGSM-regularized model performs much better than the base\nmodel on images perturbed by FGSM. How does it do against PGD?",
"# Set up the neighbor config for PGD.\npgd_nbr_config = nsl.configs.AdvNeighborConfig(\n adv_grad_norm=HPARAMS.adv_grad_norm,\n adv_step_size=HPARAMS.adv_step_size,\n pgd_iterations=HPARAMS.pgd_iterations,\n pgd_epsilon=HPARAMS.pgd_epsilon,\n clip_value_min=HPARAMS.clip_value_min,\n clip_value_max=HPARAMS.clip_value_max,\n)\n\n# pgd_model_fn generates a prediction from which we calculate the loss, and the\n# gradient for a given interation.\npgd_model_fn = fgsm_adv_model.base_model\n\n# We need to pass in the loss function for repeated calculation of the gradient.\npgd_loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=True, \n)\nlabeled_loss_fn = pgd_loss_fn\n\n%%time\n# Generate adversarial images using PGD on the FGSM-regularized model.\nperturbed_images, labels, predictions = [], [], []\n\nmetric = tf.keras.metrics.SparseCategoricalAccuracy()\n\nfor batch in test_set_for_adv_model:\n # Gradient tape to calculate the loss on the first iteration.\n with tf.GradientTape() as tape:\n tape.watch(batch)\n losses = labeled_loss_fn(batch[LABEL_INPUT_NAME],\n fgsm_adv_model.base_model(batch[IMAGE_INPUT_NAME]))\n \n # Generate the adversarial examples.\n pgd_images, _ = nsl.lib.adversarial_neighbor.gen_adv_neighbor(\n batch[IMAGE_INPUT_NAME],\n losses,\n pgd_nbr_config,\n gradient_tape=tape,\n pgd_model_fn=pgd_model_fn,\n pgd_loss_fn=pgd_loss_fn,\n pgd_labels=batch[LABEL_INPUT_NAME],\n )\n \n # Update our accuracy metric.\n y_true = batch['label']\n y_pred = fgsm_adv_model.base_model(pgd_images)\n metric(y_true, y_pred)\n\n # Store images for visualization.\n perturbed_images.append(pgd_images)\n labels.append(y_true.numpy())\n predictions.append(tf.argmax(y_pred, axis=-1).numpy())\n\nprint('%s model accuracy: %f' % ('base', metric.result().numpy()))\n\nexamine_images(perturbed_images, labels, predictions, 'fgsm_reg')",
"While the FGSM regularized model was robust to attacks via FGSM, as we can see\nit is still vulnerable to attacks from PGD, which is a stronger attack mechanism\nthan FGSM.\nAttacking the PGD Regularized Model",
"# Set up the neighbor config.\nfgsm_nbr_config = nsl.configs.AdvNeighborConfig(\n adv_grad_norm=HPARAMS.adv_grad_norm,\n adv_step_size=HPARAMS.pgd_epsilon,\n clip_value_min=0.0,\n clip_value_max=1.0,\n)\n\n# The labeled loss function provides the loss for each sample we pass in. This\n# will be used to calculate the gradient.\nlabeled_loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=True,\n)\n\n%%time\n# Generate adversarial images using FGSM on the regularized model.\nperturbed_images, labels, predictions = [], [], []\n\n# Record the accuracy.\nmetric = tf.keras.metrics.SparseCategoricalAccuracy()\n\nfor batch in test_set_for_adv_model:\n # Record the loss calculation to get its gradients.\n with tf.GradientTape() as tape:\n tape.watch(batch)\n # We attack the adversarially regularized model.\n losses = labeled_loss_fn(batch[LABEL_INPUT_NAME],\n pgd_adv_model.base_model(batch[IMAGE_INPUT_NAME]))\n\n # Generate the adversarial examples.\n fgsm_images, _ = nsl.lib.adversarial_neighbor.gen_adv_neighbor(\n batch[IMAGE_INPUT_NAME],\n losses,\n fgsm_nbr_config,\n gradient_tape=tape\n )\n\n # Update our accuracy metric.\n y_true = batch['label']\n y_pred = pgd_adv_model.base_model(fgsm_images)\n metric(y_true, y_pred)\n\n # Store images for visualization.\n perturbed_images.append(fgsm_images)\n labels.append(y_true.numpy())\n predictions.append(tf.argmax(y_pred, axis=-1).numpy())\n\nprint('%s model accuracy: %f' % ('base', metric.result().numpy()))\n\nexamine_images(perturbed_images, labels, predictions, 'pgd_reg')\n\n# Set up the neighbor config for PGD.\npgd_nbr_config = nsl.configs.AdvNeighborConfig(\n adv_grad_norm=HPARAMS.adv_grad_norm,\n adv_step_size=HPARAMS.adv_step_size,\n pgd_iterations=HPARAMS.pgd_iterations,\n pgd_epsilon=HPARAMS.pgd_epsilon,\n clip_value_min=HPARAMS.clip_value_min,\n clip_value_max=HPARAMS.clip_value_max,\n)\n\n# pgd_model_fn generates a prediction from which we calculate the loss, and the\n# gradient for a given interation.\npgd_model_fn = pgd_adv_model.base_model\n\n# We need to pass in the loss function for repeated calculation of the gradient.\npgd_loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=True, \n)\nlabeled_loss_fn = pgd_loss_fn\n\n%%time\n# Generate adversarial images using PGD on the PGD-regularized model.\nperturbed_images, labels, predictions = [], [], []\n\nmetric = tf.keras.metrics.SparseCategoricalAccuracy()\n\nfor batch in test_set_for_adv_model:\n # Gradient tape to calculate the loss on the first iteration.\n with tf.GradientTape() as tape:\n tape.watch(batch)\n losses = labeled_loss_fn(batch[LABEL_INPUT_NAME],\n pgd_adv_model.base_model(batch[IMAGE_INPUT_NAME]))\n \n # Generate the adversarial examples.\n pgd_images, _ = nsl.lib.adversarial_neighbor.gen_adv_neighbor(\n batch[IMAGE_INPUT_NAME],\n losses,\n pgd_nbr_config,\n gradient_tape=tape,\n pgd_model_fn=pgd_model_fn,\n pgd_loss_fn=pgd_loss_fn,\n pgd_labels=batch[LABEL_INPUT_NAME],\n )\n \n # Update our accuracy metric.\n y_true = batch['label']\n y_pred = pgd_adv_model.base_model(pgd_images)\n metric(y_true, y_pred)\n\n # Store images for visualization.\n perturbed_images.append(pgd_images)\n labels.append(y_true.numpy())\n predictions.append(tf.argmax(y_pred, axis=-1).numpy())\n\nprint('%s model accuracy: %f' % ('base', metric.result().numpy()))\n\nexamine_images(perturbed_images, labels, predictions, 'pgd_reg')",
"The PGD-regularized model is strong against both attack types.\nConclusion\nIn this colab, we've explored two gradient-based attack methods, FGSM, and its\nstronger variant PGD. We have seen how neural networks not trained to defend\nagainst these attacks are vulnerable to these attacks, and also how to utilize\nadversarial regularization in the Neural Structured Learning framework to\nimprove robustness."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google/starthinker | colabs/google_api_to_bigquery.ipynb | apache-2.0 | [
"Google API To BigQuery\nExecute any Google API function and store results to BigQuery.\nLicense\nCopyright 2020 Google LLC,\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nDisclaimer\nThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.\nThis code generated (see starthinker/scripts for possible source):\n - Command: \"python starthinker_ui/manage.py colab\"\n - Command: \"python starthinker/tools/colab.py [JSON RECIPE]\"\n1. Install Dependencies\nFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.",
"!pip install git+https://github.com/google/starthinker\n",
"2. Set Configuration\nThis code is required to initialize the project. Fill in required fields and press play.\n\nIf the recipe uses a Google Cloud Project:\n\nSet the configuration project value to the project identifier from these instructions.\n\n\nIf the recipe has auth set to user:\n\nIf you have user credentials:\nSet the configuration user value to your user credentials JSON.\n\n\n\nIf you DO NOT have user credentials:\n\nSet the configuration client value to downloaded client credentials.\n\n\n\nIf the recipe has auth set to service:\n\nSet the configuration service value to downloaded service credentials.",
"from starthinker.util.configuration import Configuration\n\n\nCONFIG = Configuration(\n project=\"\",\n client={},\n service={},\n user=\"/content/user.json\",\n verbose=True\n)\n\n",
"3. Enter Google API To BigQuery Recipe Parameters\n\nEnter an api name and version.\nSpecify the function using dot notation.\nSpecify the arguments using json.\nIterate is optional, use if API returns a list of items that are not unpacking correctly.\nThe API Key may be required for some calls.\nThe Developer Token may be required for some calls.\nGive BigQuery dataset and table where response will be written.\nAll API calls are based on discovery document, for example the Campaign Manager API.\nModify the values below for your use case, can be done multiple times, then click play.",
"FIELDS = {\n 'auth_read':'user', # Credentials used for reading data.\n 'api':'displayvideo', # See developer guide.\n 'version':'v1', # Must be supported version.\n 'function':'advertisers.list', # Full function dot notation path.\n 'kwargs':{'partnerId': 234340}, # Dictionray object of name value pairs.\n 'kwargs_remote':{}, # Fetch arguments from remote source.\n 'api_key':'', # Associated with a Google Cloud Project.\n 'developer_token':'', # Associated with your organization.\n 'login_customer_id':'', # Associated with your Adwords account.\n 'dataset':'', # Existing dataset in BigQuery.\n 'table':'', # Table to write API call results to.\n}\n\nprint(\"Parameters Set To: %s\" % FIELDS)\n",
"4. Execute Google API To BigQuery\nThis does NOT need to be modified unless you are changing the recipe, click play.",
"from starthinker.util.configuration import execute\nfrom starthinker.util.recipe import json_set_fields\n\nTASKS = [\n {\n 'google_api':{\n 'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},\n 'api':{'field':{'name':'api','kind':'string','order':1,'default':'displayvideo','description':'See developer guide.'}},\n 'version':{'field':{'name':'version','kind':'string','order':2,'default':'v1','description':'Must be supported version.'}},\n 'function':{'field':{'name':'function','kind':'string','order':3,'default':'advertisers.list','description':'Full function dot notation path.'}},\n 'kwargs':{'field':{'name':'kwargs','kind':'json','order':4,'default':{'partnerId':234340},'description':'Dictionray object of name value pairs.'}},\n 'kwargs_remote':{'field':{'name':'kwargs_remote','kind':'json','order':5,'default':{},'description':'Fetch arguments from remote source.'}},\n 'key':{'field':{'name':'api_key','kind':'string','order':6,'default':'','description':'Associated with a Google Cloud Project.'}},\n 'headers':{\n 'developer-token':{'field':{'name':'developer_token','kind':'string','order':7,'default':'','description':'Associated with your organization.'}},\n 'login-customer-id':{'field':{'name':'login_customer_id','kind':'string','order':8,'default':'','description':'Associated with your Adwords account.'}}\n },\n 'results':{\n 'bigquery':{\n 'dataset':{'field':{'name':'dataset','kind':'string','order':9,'default':'','description':'Existing dataset in BigQuery.'}},\n 'table':{'field':{'name':'table','kind':'string','order':10,'default':'','description':'Table to write API call results to.'}}\n }\n }\n }\n }\n]\n\njson_set_fields(TASKS, FIELDS)\n\nexecute(CONFIG, TASKS, force=True)\n"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tpin3694/tpin3694.github.io | machine-learning/f1_score.ipynb | mit | [
"Title: F1 Score\nSlug: f1_score\nSummary: How to evaluate a Python machine learning using F1 score. \nDate: 2017-09-15 12:00\nCategory: Machine Learning\nTags: Model Evaluation\nAuthors: Chris Albon\n<a alt=\"F1 Score\" href=\"https://machinelearningflashcards.com\">\n <img src=\"f1_score/F1_Score_print.png\" class=\"flashcard center-block\">\n</a>\nPreliminaries",
"# Load libraries\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.datasets import make_classification",
"Generate Features And Target Data",
"# Generate features matrix and target vector\nX, y = make_classification(n_samples = 10000,\n n_features = 3,\n n_informative = 3,\n n_redundant = 0,\n n_classes = 2,\n random_state = 1)",
"Create Logistic Regression",
"# Create logistic regression\nlogit = LogisticRegression()",
"Cross-Validate Model Using F1",
"# Cross-validate model using precision\ncross_val_score(logit, X, y, scoring=\"f1\")"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
savioabuga/arrows | arrows.ipynb | mit | [
"arrows: Yet Another Twitter/Python Data Analysis\nGeospatially, Temporally, and Linguistically Analyzing Tweets about Top U.S. Presidential Candidates with Pandas, TextBlob, Seaborn, and Cartopy\nHi, I'm Raj. For my internship this summer, I've been using data science and geospatial Python libraries like xray, numpy, rasterio, and cartopy. A week ago, I had a discussion about the relevance of Bernie Sanders among millenials - and so, I set out to get a rough idea by looking at recent tweets.\nI don't explain any of the code in this document, but you can skip the code and just look at the results if you like. If you're interested in going further with this data, I've posted source code and the dataset at https://github.com/raj-kesavan/arrows.\nIf you have any comments or suggestions (oneither code or analysis), please let me know at rajk@berkeley.edu. Enjoy!\nFirst, I used Tweepy to pull down 20,000 tweets for each of Hillary Clinton, Bernie Sanders, Rand Paul, and Jeb Bush [retrieve_tweets.py].\nI've also already done some calculations, specifically of polarity, subjectivity, influence, influenced polarity, and longitude and latitude (all explained later) [preprocess.py].",
"from arrows.preprocess import load_df",
"Just adding some imports and setting graph display options.",
"from textblob import TextBlob\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport seaborn as sns\nimport cartopy\npd.set_option('display.max_colwidth', 200)\npd.options.display.mpl_style = 'default'\nmatplotlib.style.use('ggplot')\nsns.set_context('talk')\nsns.set_style('whitegrid')\nplt.rcParams['figure.figsize'] = [12.0, 8.0]\n% matplotlib inline",
"Let's look at our data! \nload_df loads it in as a pandas.DataFrame, excellent for statistical analysis and graphing.",
"df = load_df('arrows/data/results.csv')\n\ndf.info()",
"We'll be looking primarily at candidate, created_at, lang, place, user_followers_count, user_time_zone, polarity, and influenced_polarity, and text.",
"df[['candidate', 'created_at', 'lang', 'place', 'user_followers_count', \n 'user_time_zone', 'polarity', 'influenced_polarity', 'text']].head(1)",
"First I'll look at sentiment, calculated with TextBlob using the text column. Sentiment is composed of two values, polarity - a measure of the positivity or negativity of a text - and subjectivity. Polarity is between -1.0 and 1.0; subjectivity between 0.0 and 1.0.",
"TextBlob(\"Tear down this wall!\").sentiment",
"Unfortunately, it doesn't work too well on anything other than English.",
"TextBlob(\"Radix malorum est cupiditas.\").sentiment",
"TextBlob has a cool translate() function that uses Google Translate to take care of that for us, but we won't be using it here - just because tweets include a lot of slang and abbreviations that can't be translated very well.",
"sentence = TextBlob(\"Radix malorum est cupiditas.\").translate()\nprint(sentence)\nprint(sentence.sentiment)",
"All right - let's figure out the most (positively) polarized English tweets.",
"english_df = df[df.lang == 'en']\nenglish_df.sort('polarity', ascending = False).head(3)[['candidate', 'polarity', 'subjectivity', 'text']]",
"Extrema don't mean much. We might get more interesting data with mean polarities for each candidate. Let's also look at influenced polarity, which takes into account the number of retweets and followers.",
"candidate_groupby = english_df.groupby('candidate')\ncandidate_groupby[['polarity', 'influence', 'influenced_polarity']].mean()",
"So tweets about Jeb Bush, on average, aren't as positive as the other candidates, but the people tweeting about Bush get more retweets and followers. \nI used the formula influence = sqrt(followers + 1) * sqrt(retweets + 1). You can experiment with different functions if you like [preprocess.py:influence].\nWe can look at the most influential tweets about Jeb Bush to see what's up.",
"jeb = candidate_groupby.get_group('Jeb Bush')\njeb_influence = jeb.sort('influence', ascending = False)\njeb_influence[['influence', 'polarity', 'influenced_polarity', 'user_name', 'text', 'created_at']].head(5)",
"Side note: you can see that sentiment analysis isn't perfect - the last tweet is certainly negative toward Jeb Bush, but it was actually assigned a positive polarity. Over a large number of tweets, though, sentiment analysis is more meaningful.\nAs to the high influence of tweets about Bush: it looks like Donald Trump (someone with a lot of followers) has been tweeting a lot about Bush over the other candidates - one possible reason for Jeb's greater influenced_polarity.",
"df[df.user_name == 'Donald J. Trump'].groupby('candidate').size()",
"Looks like our favorite toupéed candidate hasn't even been tweeting about anyone else!\nWhat else can we do? We know the language each tweet was (tweeted?) in.",
"language_groupby = df.groupby(['candidate', 'lang'])\nlanguage_groupby.size()",
"That's a lot of languages! Let's try plotting to get a better idea, but first, I'll remove smaller language/candidate groups.\nBy the way, each lang value is an IANA language tag - you can look them up at https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry.",
"largest_languages = language_groupby.filter(lambda group: len(group) > 10)",
"I'll also remove English, since it would just dwarf all the other languages.",
"non_english = largest_languages[largest_languages.lang != 'en']\nnon_english_groupby = non_english.groupby(['lang', 'candidate'], as_index = False)\n\nsizes = non_english_groupby.text.agg(np.size)\nsizes = sizes.rename(columns={'text': 'count'})\nsizes_pivot = sizes.pivot_table(index='lang', columns='candidate', values='count', fill_value=0)\n\nplot = sns.heatmap(sizes_pivot)\nplot.set_title('Number of non-English Tweets by Candidate', family='Ubuntu')\nplot.set_ylabel('language code', family='Ubuntu')\nplot.set_xlabel('candidate', family='Ubuntu')\nplot.figure.set_size_inches(12, 7)",
"Looks like Spanish and Portuguese speakers mostly tweet about Jeb Bush, while Francophones lean more liberal, and Clinton tweeters span the largest range of languages.\nWe also have the time-of-tweet information - I'll plot influenced polarity over time for each candidate. I'm also going to resample the influenced_polarity values to 1 hour intervals to get a smoother graph.",
"mean_polarities = df.groupby(['candidate', 'created_at']).influenced_polarity.mean()\nplot = mean_polarities.unstack('candidate').resample('60min').plot()\nplot.set_title('Influenced Polarity over Time by Candidate', family='Ubuntu')\nplot.set_ylabel('influenced polarity', family='Ubuntu')\nplot.set_xlabel('time', family='Ubuntu')\nplot.figure.set_size_inches(12, 7)",
"Since I only took the last 20,000 tweets for each candidate, I didn't receive as large a timespan from Clinton (a candidate with many, many tweeters) compared to Rand Paul. \nBut we can still analyze the data in terms of hour-of-day. I'd like to know when tweeters in each language tweet each day, and I'm going to use percentages instead of raw number of tweets so I can compare across different languages easily.\nBy the way, the times in the dataframe are in UTC.",
"language_sizes = df.groupby('lang').size()\nthreshold = language_sizes.quantile(.75)\n\ntop_languages_df = language_sizes[language_sizes > threshold]\ntop_languages = set(top_languages_df.index) - {'und'}\ntop_languages\n\ndf['hour'] = df.created_at.apply(lambda datetime: datetime.hour) \nfor language_code in top_languages:\n lang_df = df[df.lang == language_code]\n normalized = lang_df.groupby('hour').size() / lang_df.lang.count()\n plot = normalized.plot(label = language_code)\n\nplot.set_title('Tweet Frequency in non-English Languages by Hour of Day', family='Ubuntu')\nplot.set_ylabel('normalized frequency', family='Ubuntu')\nplot.set_xlabel('hour of day (UTC)', family='Ubuntu')\nplot.legend()\nplot.figure.set_size_inches(12, 7)",
"Note that English, French, and Spanish are significantly flatter than the other languages - this means that there's a large spread of speakers all over the globe.\nBut why is Portuguese spiking at 11pm Brasilia time / 3 am Lisbon time? Let's find out!\nMy first guess was that maybe there's a single person making a ton of posts at that time.",
"df_of_interest = df[(df.hour == 2) & (df.lang == 'pt')]\n\nprint('Number of tweets:', df_of_interest.text.count())\nprint('Number of unique users:', df_of_interest.user_name.unique().size)",
"So that's not it. Maybe there was a major event everyone was retweeting?",
"df_of_interest.text.head(25).unique()",
"Seems to be a lot of these 'Jeb Bush diz que foi atingido...' tweets. How many? We can't just count unique ones because they all are different slightly, but we can check for a large-enough substring.",
"df_of_interest[df_of_interest.text.str.contains('Jeb Bush diz que foi atingido')].text.count()",
"That's it!\nLooks like there was a news article from a Brazilian website (http://jconline.ne10.uol.com.br/canal/mundo/internacional/noticia/2015/07/05/jeb-bush-diz-que-foi-atingido-por-criticas-de-trump-a-mexicanos-188801.php) that happened to get a lot of retweets at that time period. \nA similar article in English is at http://www.nytimes.com/politics/first-draft/2015/07/04/an-angry-jeb-bush-says-he-takes-donald-trumps-remarks-personally/.\nSince languages can span across different countries, we might get results if we search by location, rather than just language.\nWe don't have very specific geolocation information other than timezone, so let's try plotting candidate sentiment over the 4 major U.S. timezones (Los Angeles, Denver, Chicago, and New York). This is also be a good opportunity to look at a geographical map.",
"tz_df = english_df.dropna(subset=['user_time_zone'])\nus_tz_df = tz_df[tz_df.user_time_zone.str.contains(\"US & Canada\")]\nus_tz_candidate_groupby = us_tz_df.groupby(['candidate', 'user_time_zone'])\nus_tz_candidate_groupby.influenced_polarity.mean()",
"That's our raw data: now to plot it on a map. I got the timezone Shapefile from http://efele.net/maps/tz/world/. First, I read in the Shapefile with Cartopy.",
"tz_shapes = cartopy.io.shapereader.Reader('arrows/world/tz_world_mp.shp')\ntz_records = list(tz_shapes.records())\ntz_translator = {\n 'Eastern Time (US & Canada)': 'America/New_York',\n 'Central Time (US & Canada)': 'America/Chicago',\n 'Mountain Time (US & Canada)': 'America/Denver',\n 'Pacific Time (US & Canada)': 'America/Los_Angeles',\n}\namerican_tz_records = {\n tz_name: next(filter(lambda record: record.attributes['TZID'] == tz_id, tz_records))\n for tz_name, tz_id \n in tz_translator.items() \n}",
"Next, I have to choose a projection and plot it (again using Cartopy). The Albers Equal-Area is good for maps of the U.S. I'll also download some featuresets from the Natural Earth dataset to display state borders.",
"albers_equal_area = cartopy.crs.AlbersEqualArea(-95, 35)\nplate_carree = cartopy.crs.PlateCarree()\n\nstates_and_provinces = cartopy.feature.NaturalEarthFeature(\n category='cultural',\n name='admin_1_states_provinces_lines',\n scale='50m',\n facecolor='none'\n)\n\ncmaps = [matplotlib.cm.Blues, matplotlib.cm.Greens, \n matplotlib.cm.Reds, matplotlib.cm.Purples]\nnorm = matplotlib.colors.Normalize(vmin=0, vmax=30) \n\ncandidates = df['candidate'].unique()\n\nplt.rcParams['figure.figsize'] = [6.0, 4.0]\nfor index, candidate in enumerate(candidates):\n plt.figure()\n plot = plt.axes(projection=albers_equal_area)\n plot.set_extent((-125, -66, 20, 50))\n plot.add_feature(cartopy.feature.LAND)\n plot.add_feature(cartopy.feature.COASTLINE)\n plot.add_feature(cartopy.feature.BORDERS)\n plot.add_feature(states_and_provinces, edgecolor='gray')\n plot.add_feature(cartopy.feature.LAKES, facecolor=\"#00BCD4\")\n\n for tz_name, record in american_tz_records.items():\n tz_specific_df = us_tz_df[us_tz_df.user_time_zone == tz_name]\n tz_candidate_specific_df = tz_specific_df[tz_specific_df.candidate == candidate]\n mean_polarity = tz_candidate_specific_df.influenced_polarity.mean()\n\n plot.add_geometries(\n [record.geometry], \n crs=plate_carree,\n color=cmaps[index](norm(mean_polarity)),\n alpha=.8\n )\n \n plot.set_title('Influenced Polarity toward {} by U.S. Timezone'.format(candidate), family='Ubuntu')\n plot.figure.set_size_inches(6, 3.5)\n plt.show()\n print()",
"My friend Gabriel Wang pointed out that U.S. timezones other than Pacific don't mean much since each timezone covers both blue and red states, but the data is still interesting. \nAs expected, midwestern states lean toward Jeb Bush. I wasn't expecting Jeb Bush's highest polarity-tweets to come from the East; this is probably Donald Trump (New York, New York) messing with our data again. \nIn a few months I'll look at these statistics with the latest tweets and compare.\nWhat are tweeters outside the U.S. saying about our candidates?\nOutside of the U.S., if someone is in a major city, the timezone is often that city itself. Here are the top (by number of tweets) non-American 25 timezones in our dataframe.",
"american_timezones = ('US & Canada|Canada|Arizona|America|Hawaii|Indiana|Alaska'\n '|New_York|Chicago|Los_Angeles|Detroit|CST|PST|EST|MST')\nforeign_tz_df = tz_df[~tz_df.user_time_zone.str.contains(american_timezones)]\n\nforeign_tz_groupby = foreign_tz_df.groupby('user_time_zone')\nforeign_tz_groupby.size().sort(inplace = False, ascending = False).head(25)",
"I also want to look at polarity, so I'll only use English tweets.\n(Sorry, Central/South Americans - my very rough method of filtering out American timezones gets rid of some of your timezones too. Let me know if there's a better way to do this.)",
"foreign_english_tz_df = foreign_tz_df[foreign_tz_df.lang == 'en']",
"Now we have a dataframe containing (mostly) world cities as time zones. Let's get the top cities by number of tweets for each candidate, then plot polarities.",
"foreign_tz_groupby = foreign_english_tz_df.groupby(['candidate', 'user_time_zone'])\ntop_foreign_tz_df = foreign_tz_groupby.filter(lambda group: len(group) > 40)\n\ntop_foreign_tz_groupby = top_foreign_tz_df.groupby(['user_time_zone', 'candidate'], as_index = False)\n\nmean_influenced_polarities = top_foreign_tz_groupby.influenced_polarity.mean()\n\npivot = mean_influenced_polarities.pivot_table(\n index='user_time_zone', \n columns='candidate', \n values='influenced_polarity', \n fill_value=0\n)\n\nplot = sns.heatmap(pivot)\nplot.set_title('Influenced Polarity in Major Foreign Cities by Candidate', family='Ubuntu')\nplot.set_ylabel('city', family='Ubuntu')\nplot.set_xlabel('candidate', family='Ubuntu')\nplot.figure.set_size_inches(12, 7)",
"Exercise for the reader: why is Rand Paul disliked in Athens? You can probably guess, but the actual tweets causing this are rather amusing.\nGreco-libertarian relations aside, the data shows that London and Amsterdam are among the most influential of cities, with the former leaning toward Jeb Bush and the latter about neutral.\nIn India, Clinton-supporters reside in New Delhi while Chennai tweeters back Rand Paul. By contrast, in 2014, New Delhi constituents voted for the conservative Bharatiya Janata Party while Chennai voted for the more liberal All India Anna Dravida Munnetra Kazhagam Party - so there seems to be some kind of cultural difference between the voters of 2014 and the tweeters of today.\nLast thing I thought was interesting: Athens has the highest mean polarity for Bernie Sanders, the only city for which this is the case. Could this have anything to do with the recent economic crisis, 'no' vote for austerity, and Bernie's social democratic tendencies?\nFinally, I'll look at specific geolocation (latitude and longitude) data. Since only about 750 out of 80,000 tweets had geolocation enabled, this data can't really be used for sentiment analysis, but we can still get a good idea of international spread.\nFirst I'll plot everything on a world map, then break it up by candidate in the U.S.",
"df_place = df.dropna(subset=['place'])\nmollweide = cartopy.crs.Mollweide()\n\nplot = plt.axes(projection=mollweide)\nplot.set_global()\nplot.add_feature(cartopy.feature.LAND)\nplot.add_feature(cartopy.feature.COASTLINE)\nplot.add_feature(cartopy.feature.BORDERS)\n\nplot.scatter(\n list(df_place.longitude), \n list(df_place.latitude), \n transform=plate_carree, \n zorder=2\n)\nplot.set_title('International Tweeters with Geolocation Enabled', family='Ubuntu')\nplot.figure.set_size_inches(14, 9)\n\nplot = plt.axes(projection=albers_equal_area)\n\nplot.set_extent((-125, -66, 20, 50))\n\nplot.add_feature(cartopy.feature.LAND)\nplot.add_feature(cartopy.feature.COASTLINE)\nplot.add_feature(cartopy.feature.BORDERS)\nplot.add_feature(states_and_provinces, edgecolor='gray')\nplot.add_feature(cartopy.feature.LAKES, facecolor=\"#00BCD4\")\n\ncandidate_groupby = df_place.groupby('candidate', as_index = False)\n\ncolors = ['#1976d2', '#7cb342', '#f4511e', '#7b1fa2']\nfor index, (name, group) in enumerate(candidate_groupby):\n longitudes = group.longitude.values\n latitudes = group.latitude.values\n plot.scatter(\n longitudes, \n latitudes, \n transform=plate_carree, \n color=colors[index], \n label=name,\n zorder=2\n )\nplot.set_title('U.S. Tweeters by Candidate', family='Ubuntu')\nplt.legend(loc='lower left')\nplot.figure.set_size_inches(12, 7)",
"As expected, U.S. tweeters are centered around L.A., the Bay Area, Chicago, New York, and Boston. Rand Paul and Bernie Sanders tweeters are more spread out over the country.\nThat's all I have for now. \nIf you found this interesting and are curious for more, I encourage you to download the dataset (or get your own dataset based on your interests) and share your findings.\nSource code is at https://github.com/raj-kesavan/arrows, and I can be reached at raj.ksvn@gmail.com for any questions, comments, or criticism. Looking forward to hearing your feedback!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
flaviocordova/udacity_deep_learn_project | gan_mnist/Intro_to_GANs_Solution.ipynb | mit | [
"Generative Adversarial Network\nIn this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!\nGANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:\n\nPix2Pix \nCycleGAN\nA whole list\n\nThe idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator.\n\nThe general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can foold the discriminator.\nThe output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.",
"%matplotlib inline\n\nimport pickle as pkl\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data')",
"Model Inputs\nFirst we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.",
"def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real') \n inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')\n \n return inputs_real, inputs_z",
"Generator network\n\nHere we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.\nVariable Scope\nHere we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.\nWe could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.\nTo use tf.variable_scope, you use a with statement:\npython\nwith tf.variable_scope('scope_name', reuse=False):\n # code here\nHere's more from the TensorFlow documentation to get another look at using tf.variable_scope.\nLeaky ReLU\nTensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can use take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:\n$$\nf(x) = max(\\alpha * x, x)\n$$\nTanh Output\nThe generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.",
"def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):\n with tf.variable_scope('generator', reuse=reuse):\n # Hidden layer\n h1 = tf.layers.dense(z, n_units, activation=None)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n # Logits and tanh output\n logits = tf.layers.dense(h1, out_dim, activation=None)\n out = tf.tanh(logits)\n \n return out",
"Discriminator\nThe discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.",
"def discriminator(x, n_units=128, reuse=False, alpha=0.01):\n with tf.variable_scope('discriminator', reuse=reuse):\n # Hidden layer\n h1 = tf.layers.dense(x, n_units, activation=None)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n logits = tf.layers.dense(h1, 1, activation=None)\n out = tf.sigmoid(logits)\n \n return out, logits",
"Hyperparameters",
"# Size of input image to discriminator\ninput_size = 784\n# Size of latent vector to generator\nz_size = 100\n# Sizes of hidden layers in generator and discriminator\ng_hidden_size = 128\nd_hidden_size = 128\n# Leak factor for leaky ReLU\nalpha = 0.01\n# Smoothing \nsmooth = 0.1",
"Build network\nNow we're building the network from the functions defined above.\nFirst is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.\nThen, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.\nThen the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).",
"tf.reset_default_graph()\n# Create our input placeholders\ninput_real, input_z = model_inputs(input_size, z_size)\n\n# Build the model\ng_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)\n# g_model is the generator output\n\nd_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)\nd_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)",
"Discriminator and Generator Losses\nNow we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropys, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like \npython\ntf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\nFor the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)\nThe discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.\nFinally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.",
"# Calculate losses\nd_loss_real = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, \n labels=tf.ones_like(d_logits_real) * (1 - smooth)))\nd_loss_fake = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, \n labels=tf.zeros_like(d_logits_real)))\nd_loss = d_loss_real + d_loss_fake\n\ng_loss = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,\n labels=tf.ones_like(d_logits_fake)))",
"Optimizers\nWe want to update the generator and discriminator variables separately. So we need to get the variables for each part build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.\nFor the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables to start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). \nWe can do something similar with the discriminator. All the variables in the discriminator start with discriminator.\nThen, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.",
"# Optimizers\nlearning_rate = 0.002\n\n# Get the trainable_variables, split into G and D parts\nt_vars = tf.trainable_variables()\ng_vars = [var for var in t_vars if var.name.startswith('generator')]\nd_vars = [var for var in t_vars if var.name.startswith('discriminator')]\n\nd_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)\ng_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)",
"Training",
"batch_size = 100\nepochs = 100\nsamples = []\nlosses = []\n# Only save generator variables\nsaver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n \n # Get images, reshape and rescale to pass to D\n batch_images = batch[0].reshape((batch_size, 784))\n batch_images = batch_images*2 - 1\n \n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n \n # Run optimizers\n _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})\n _ = sess.run(g_train_opt, feed_dict={input_z: batch_z})\n \n # At the end of each epoch, get the losses and print them out\n train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})\n train_loss_g = g_loss.eval({input_z: batch_z})\n \n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g)) \n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n \n # Sample from generator as we're training for viewing afterwards\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),\n feed_dict={input_z: sample_z})\n samples.append(gen_samples)\n saver.save(sess, './checkpoints/generator.ckpt')\n\n# Save training generator samples\nwith open('train_samples.pkl', 'wb') as f:\n pkl.dump(samples, f)",
"Training loss\nHere we'll check out the training losses for the generator and discriminator.",
"fig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator')\nplt.plot(losses.T[1], label='Generator')\nplt.title(\"Training Losses\")\nplt.legend()",
"Generator samples from training\nHere we can view samples of images from the generator. First we'll look at images taken while training.",
"def view_samples(epoch, samples):\n fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\n im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n \n return fig, axes\n\n# Load samples from generator taken while training\nwith open('train_samples.pkl', 'rb') as f:\n samples = pkl.load(f)",
"These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.",
"_ = view_samples(-1, samples)",
"Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!",
"rows, cols = 10, 6\nfig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)\n\nfor sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):\n for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):\n ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)",
"It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s.\nSampling from the generator\nWe can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!",
"saver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),\n feed_dict={input_z: sample_z})\n_ = view_samples(0, [gen_samples])"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kit-cel/wt | sigNT/tutorial/approximation.ipynb | gpl-2.0 | [
"Content and Objective\n\nShow approximations by using gaussian approximation\nAdditionally, applying Gram-Schmidt for \"orthonormalizing\" a set of functions",
"# importing\nimport numpy as np\nimport scipy.signal\nimport scipy as sp\n\nimport sympy as sym\nfrom sympy.plotting import plot\n",
"definitions",
"# define symbol\nx = sym.Symbol('x')\n\n# function to be approximated\nf = sym.cos( x )\nf = sym.exp( x )\n#f = sym.sqrt( x )\n\n# define lower and upper bound for L[a,b] \n# -> might be relevant to be changed if you are adapting the function to be approximated\na = -1\nb = 1",
"Define Gram-Schmidt",
"# basis and their number of functions\nM = [ x**c for c in range( 0, 4 ) ]\n\nn = len( M )\nprint(M)\n\n# apply Gram-Schmidt for user-defined set M \n\n# init ONB\nONB = [ ]\n\n# loop for new functions and apply Gram-Schmidt\nfor _n in range( n ):\n \n # get function\n f_temp = M[ _n ]\n \n # subtract influence of past ONB functions\n if _n >= 1:\n for _k in range( _n ):\n f_temp -= sym.integrate( M[ _n ] * ONB[ _k ], (x,a,b) ) * ONB[ _k ]\n \n # get norm\n norm = float( sym.integrate( f_temp * f_temp , (x,a,b) ) )\n \n # return normalized function\n ONB.append( f_temp / np.sqrt( norm) )\n\nprint(ONB)\n\n# opt in if you like to see the correlation matrix\nif 0:\n corr_matrix = np.zeros( ( n, n ) )\n\n for _m in range( n ):\n for _n in range( n ):\n corr_matrix[ _m, _n ] = float( sym.integrate( ONB[_m] * ONB[_n], (x,a,b) ) )\n\n np.set_printoptions(precision=2)\n corr_matrix[ np.isclose( corr_matrix, 0 ) ] = 0\n\n print( corr_matrix ) \n\n# opt in if you like to see figures of the base functions\n# NOTE: Become unhandy if it's too many of them\nif 0:\n for _n in range( n):\n p = plot( M[_n], (x,a,b), show=False )\n p.extend( plot( ONB[_n], (x,a,b), line_color='r', show=False ) )\n\n p.show()",
"now approximate a function",
"# init approx and extend successively\napprox = 0\n\n# add next ONB function with according coefficient\nfor _n in range( n ):\n \n coeff = sym.integrate( f * ONB[ _n ], (x,a,b) )\n approx += coeff * ONB[ _n ]\n\n# if you like to see the function\nprint( approx )\n\np = plot( f, (x,a,b), show=False) \np.extend( plot( approx, (x,a,b), line_color='r', show=False) )\np.show()\n\nplot( f - approx, (x,a,b) )"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jsjol/GaussianProcessRegressionForDiffusionMRI | notebooks/show_ODFs.ipynb | bsd-3-clause | [
"%load_ext autoreload\n%autoreload 2\n\nimport os\nimport sys\nmodule_path = os.path.abspath(os.path.join('..'))\nif module_path not in sys.path:\n sys.path.append(module_path)\n\nimport numpy as np\nimport json\nimport matplotlib.pyplot as plt\n\nfrom dipy.reconst import mapmri\nimport dipy.reconst.dti as dti\nfrom dipy.viz import window, actor\nfrom dipy.data import get_data, get_sphere\nfrom dipy.core.gradients import gradient_table\n\nfrom diGP.preprocessing import get_HCP_loader\nfrom diGP.preprocessing_pipelines import preprocess_SPARC\nfrom diGP.dataManipulations import log_q_squared\nfrom diGP.model import GaussianProcessModel, get_default_kernel\n\nwith open('../config.json', 'r') as json_file:\n conf = json.load(json_file)",
"Load the data.",
"dataset = 'SPARC'\n\nif dataset == 'HCP':\n subject_path = conf['HCP']['data_paths']['mgh_1007']\n loader = get_HCP_loader(subject_path)\n small_data_path = '{}/mri/small_data.npy'.format(subject_path)\n\n loader.update_filename_data(small_data_path)\n\n data = loader.data\n gtab = loader.gtab\n voxel_size = loader.voxel_size\nelif dataset == 'SPARC':\n subject_path = conf['SPARC']['data_paths']['gradient_60']\n\n gtab, data, voxel_size = preprocess_SPARC(subject_path, normalize=True)\n \n\nbtable = np.loadtxt(get_data('dsi4169btable'))\n#btable = np.loadtxt(get_data('dsi515btable'))\n\ngtab_dsi = gradient_table(btable[:, 0], btable[:, 1:],\n big_delta=gtab.big_delta, small_delta=gtab.small_delta)",
"Fit a MAPL model to the data.",
"map_model_laplacian_aniso = mapmri.MapmriModel(gtab, radial_order=6,\n laplacian_regularization=True,\n laplacian_weighting='GCV')\n\nmapfit_laplacian_aniso = map_model_laplacian_aniso.fit(data)",
"We want to use an FA image as background, this requires us to fit a DTI model.",
"tenmodel = dti.TensorModel(gtab)\ntenfit = tenmodel.fit(data)\n\nfitted = {'MAPL': mapfit_laplacian_aniso.predict(gtab)[:, :, 0],\n 'DTI': tenfit.predict(gtab)[:, :, 0]}",
"Fit GP without mean and with DTI and MAPL as mean.",
"kern = get_default_kernel(n_max=6, spatial_dims=2)\ngp_model = GaussianProcessModel(gtab, spatial_dims=2, kernel=kern, verbose=False)\ngp_fit = gp_model.fit(np.squeeze(data), mean=None, voxel_size=voxel_size[0:2], retrain=True)\n\nkern = get_default_kernel(n_max=2, spatial_dims=2)\ngp_dti_model = GaussianProcessModel(gtab, spatial_dims=2, kernel=kern, verbose=False)\ngp_dti_fit = gp_dti_model.fit(np.squeeze(data), mean=fitted['DTI'], voxel_size=voxel_size[0:2], retrain=True)\n\nkern = get_default_kernel(n_max=2, spatial_dims=2)\ngp_mapl_model = GaussianProcessModel(gtab, spatial_dims=2, kernel=kern, verbose=False)\ngp_mapl_fit = gp_mapl_model.fit(np.squeeze(data), mean=fitted['MAPL'], voxel_size=voxel_size[0:2], retrain=True)",
"gp_model = GaussianProcessModel(gtab, spatial_dims=2, q_magnitude_transform=np.sqrt, verbose=False)\ngp_fit = gp_model.fit(np.squeeze(data), mean=None, voxel_size=voxel_size[0:2], retrain=True)\ngp_dti_fit = gp_model.fit(np.squeeze(data), mean=fitted['DTI'], voxel_size=voxel_size[0:2], retrain=True)\ngp_mapl_fit = gp_model.fit(np.squeeze(data), mean=fitted['MAPL'], voxel_size=voxel_size[0:2], retrain=True)",
"pred = {'MAPL': mapfit_laplacian_aniso.predict(gtab_dsi)[:, :, 0],\n 'DTI': tenfit.predict(gtab_dsi)[:, :, 0]}",
"Compute the ODFs\nLoad an odf reconstruction sphere",
"sphere = get_sphere('symmetric724').subdivide(1)",
"The radial order $s$ can be increased to sharpen the results, but it might\nalso make the odfs noisier. Note that a \"proper\" ODF corresponds to $s=0$.",
"odf = {'MAPL': mapfit_laplacian_aniso.odf(sphere, s=0),\n 'DTI': tenfit.odf(sphere)}\n\nodf['GP'] = gp_fit.odf(sphere, gtab_dsi=gtab_dsi, mean=None)[:, :, None, :]\nodf['DTI_GP'] = gp_dti_fit.odf(sphere, gtab_dsi=gtab_dsi, mean=pred['DTI'])[:, :, None, :]\nodf['MAPL_GP'] = gp_mapl_fit.odf(sphere, gtab_dsi=gtab_dsi, mean=pred['MAPL'])[:, :, None, :]",
"Display the ODFs",
"for name, _odf in odf.items():\n ren = window.Renderer()\n ren.background((1, 1, 1))\n\n odf_actor = actor.odf_slicer(_odf, sphere=sphere, scale=0.5, colormap='jet')\n background_actor = actor.slicer(tenfit.fa, opacity=1)\n\n odf_actor.display(z=0)\n odf_actor.RotateZ(90)\n\n background_actor.display(z=0)\n background_actor.RotateZ(90)\n background_actor.SetPosition(0, 0, -1)\n\n ren.add(background_actor)\n ren.add(odf_actor)\n\n window.record(ren, out_path='odfs_{}.png'.format(name), size=(1000, 1000))"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
esa-as/2016-ml-contest | ar4/ar4_submission2_VALIDATION.ipynb | apache-2.0 | [
"Facies classification using machine learning techniques\nCopy of <a href=\"https://home.deib.polimi.it/bestagini/\">Paolo Bestagini's</a> \"Try 2\", augmented, by Alan Richardson (Ausar Geophysical), with an ML estimator for PE in the wells where it is missing (rather than just using the mean).\nIn the following, we provide a possible solution to the facies classification problem described at https://github.com/seg/2016-ml-contest.\nThe proposed algorithm is based on the use of random forests combined in one-vs-one multiclass strategy. In particular, we would like to study the effect of:\n- Robust feature normalization.\n- Feature imputation for missing feature values.\n- Well-based cross-validation routines.\n- Feature augmentation strategies.\nScript initialization\nLet us import the used packages and define some parameters (e.g., colors, labels, etc.).",
"# Import\nfrom __future__ import division\n%matplotlib inline\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nmpl.rcParams['figure.figsize'] = (20.0, 10.0)\ninline_rc = dict(mpl.rcParams)\nfrom classification_utilities import make_facies_log_plot\n\nimport pandas as pd\nimport numpy as np\n#import seaborn as sns\n\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import LeavePGroupsOut\nfrom sklearn.metrics import f1_score\nfrom sklearn.multiclass import OneVsOneClassifier\nfrom sklearn.ensemble import RandomForestClassifier, RandomForestRegressor\n\nfrom scipy.signal import medfilt\n\nimport sys, scipy, sklearn\nprint('Python: ' + sys.version.split('\\n')[0])\nprint(' ' + sys.version.split('\\n')[1])\nprint('Pandas: ' + pd.__version__)\nprint('Numpy: ' + np.__version__)\nprint('Scipy: ' + scipy.__version__)\nprint('Sklearn: ' + sklearn.__version__)\n\n# Parameters\nfeature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']\nfacies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']\nfacies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']",
"Load data\nLet us load training data and store features, labels and other data into numpy arrays.",
"# Load data from file\ndata = pd.read_csv('../facies_vectors.csv')\n\n# Store features and labels\nX = data[feature_names].values # features\ny = data['Facies'].values # labels\n\n# Store well labels and depths\nwell = data['Well Name'].values\ndepth = data['Depth'].values",
"Data inspection\nLet us inspect the features we are working with. This step is useful to understand how to normalize them and how to devise a correct cross-validation strategy. Specifically, it is possible to observe that:\n- Some features seem to be affected by a few outlier measurements.\n- Only a few wells contain samples from all classes.\n- PE measurements are available only for some wells.",
"# Define function for plotting feature statistics\ndef plot_feature_stats(X, y, feature_names, facies_colors, facies_names):\n \n # Remove NaN\n nan_idx = np.any(np.isnan(X), axis=1)\n X = X[np.logical_not(nan_idx), :]\n y = y[np.logical_not(nan_idx)]\n \n # Merge features and labels into a single DataFrame\n features = pd.DataFrame(X, columns=feature_names)\n labels = pd.DataFrame(y, columns=['Facies'])\n for f_idx, facies in enumerate(facies_names):\n labels[labels[:] == f_idx] = facies\n data = pd.concat((labels, features), axis=1)\n\n # Plot features statistics\n facies_color_map = {}\n for ind, label in enumerate(facies_names):\n facies_color_map[label] = facies_colors[ind]\n\n sns.pairplot(data, hue='Facies', palette=facies_color_map, hue_order=list(reversed(facies_names)))\n\n# Feature distribution\n# plot_feature_stats(X, y, feature_names, facies_colors, facies_names)\n# mpl.rcParams.update(inline_rc)\n\n# Facies per well\nfor w_idx, w in enumerate(np.unique(well)):\n ax = plt.subplot(3, 4, w_idx+1)\n hist = np.histogram(y[well == w], bins=np.arange(len(facies_names)+1)+.5)\n plt.bar(np.arange(len(hist[0])), hist[0], color=facies_colors, align='center')\n ax.set_xticks(np.arange(len(hist[0])))\n ax.set_xticklabels(facies_names)\n ax.set_title(w)\n\n# Features per well\nfor w_idx, w in enumerate(np.unique(well)):\n ax = plt.subplot(3, 4, w_idx+1)\n hist = np.logical_not(np.any(np.isnan(X[well == w, :]), axis=0))\n plt.bar(np.arange(len(hist)), hist, color=facies_colors, align='center')\n ax.set_xticks(np.arange(len(hist)))\n ax.set_xticklabels(feature_names)\n ax.set_yticks([0, 1])\n ax.set_yticklabels(['miss', 'hit'])\n ax.set_title(w)",
"Feature imputation\nLet us fill missing PE values. This is the only cell that differs from the approach of Paolo Bestagini. Currently no feature engineering is used, but this should be explored in the future.",
"def make_pe(X, seed):\n reg = RandomForestRegressor(max_features='sqrt', n_estimators=50, random_state=seed)\n DataImpAll = data[feature_names].copy()\n DataImp = DataImpAll.dropna(axis = 0, inplace=False)\n Ximp=DataImp.loc[:, DataImp.columns != 'PE']\n Yimp=DataImp.loc[:, 'PE']\n reg.fit(Ximp, Yimp)\n X[np.array(DataImpAll.PE.isnull()),4] = reg.predict(DataImpAll.loc[DataImpAll.PE.isnull(),:].drop('PE',axis=1,inplace=False))\n return X",
"Feature augmentation\nOur guess is that facies do not abrutly change from a given depth layer to the next one. Therefore, we consider features at neighboring layers to be somehow correlated. To possibly exploit this fact, let us perform feature augmentation by:\n- Aggregating features at neighboring depths.\n- Computing feature spatial gradient.",
"# Feature windows concatenation function\ndef augment_features_window(X, N_neig):\n \n # Parameters\n N_row = X.shape[0]\n N_feat = X.shape[1]\n\n # Zero padding\n X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))\n\n # Loop over windows\n X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))\n for r in np.arange(N_row)+N_neig:\n this_row = []\n for c in np.arange(-N_neig,N_neig+1):\n this_row = np.hstack((this_row, X[r+c]))\n X_aug[r-N_neig] = this_row\n\n return X_aug\n\n# Feature gradient computation function\ndef augment_features_gradient(X, depth):\n \n # Compute features gradient\n d_diff = np.diff(depth).reshape((-1, 1))\n d_diff[d_diff==0] = 0.001\n X_diff = np.diff(X, axis=0)\n X_grad = X_diff / d_diff\n \n # Compensate for last missing value\n X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))\n \n return X_grad\n\n# Feature augmentation function\ndef augment_features(X, well, depth, seed=None, pe=True, N_neig=1):\n seed = seed or None\n \n if pe:\n X = make_pe(X, seed)\n # Augment features\n X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))\n for w in np.unique(well):\n w_idx = np.where(well == w)[0]\n X_aug_win = augment_features_window(X[w_idx, :], N_neig)\n X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])\n X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)\n \n # Find padded rows\n padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])\n \n return X_aug, padded_rows\n\n# Augment features\nX_aug, padded_rows = augment_features(X, well, depth)",
"Generate training, validation and test data splits\nThe choice of training and validation data is paramount in order to avoid overfitting and find a solution that generalizes well on new data. For this reason, we generate a set of training-validation splits so that:\n- Features from each well belongs to training or validation set.\n- Training and validation sets contain at least one sample for each class.",
"# Initialize model selection methods\nlpgo = LeavePGroupsOut(2)\n\n# Generate splits\nsplit_list = []\nfor train, val in lpgo.split(X, y, groups=data['Well Name']):\n hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)\n hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)\n if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):\n split_list.append({'train':train, 'val':val})\n \n# Print splits\nfor s, split in enumerate(split_list):\n print('Split %d' % s)\n print(' training: %s' % (data['Well Name'][split['train']].unique()))\n print(' validation: %s' % (data['Well Name'][split['val']].unique()))",
"Classification parameters optimization\nLet us perform the following steps for each set of parameters:\n- Select a data split.\n- Normalize features using a robust scaler.\n- Train the classifier on training data.\n- Test the trained classifier on validation data.\n- Repeat for all splits and average the F1 scores.\nAt the end of the loop, we select the classifier that maximizes the average F1 score on the validation set. Hopefully, this classifier should be able to generalize well on new data.",
"# Parameters search grid (uncomment parameters for full grid search... may take a lot of time)\nN_grid = [100] # [50, 100, 150]\nM_grid = [10] # [5, 10, 15]\nS_grid = [25] # [10, 25, 50, 75]\nL_grid = [5] # [2, 3, 4, 5, 10, 25]\nparam_grid = []\nfor N in N_grid:\n for M in M_grid:\n for S in S_grid:\n for L in L_grid:\n param_grid.append({'N':N, 'M':M, 'S':S, 'L':L})\n\n# Train and test a classifier\ndef train_and_test(X_tr, y_tr, X_v, well_v, clf):\n \n # Feature normalization\n scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)\n X_tr = scaler.transform(X_tr)\n X_v = scaler.transform(X_v)\n \n # Train classifier\n clf.fit(X_tr, y_tr)\n \n # Test classifier\n y_v_hat = clf.predict(X_v)\n \n # Clean isolated facies for each well\n for w in np.unique(well_v):\n y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=5)\n \n return y_v_hat\n\n# For each set of parameters\n# score_param = []\n# for param in param_grid:\n\n# # For each data split\n# score_split = []\n# for split in split_list:\n\n# # Remove padded rows\n# split_train_no_pad = np.setdiff1d(split['train'], padded_rows)\n\n# # Select training and validation data from current split\n# X_tr = X_aug[split_train_no_pad, :]\n# X_v = X_aug[split['val'], :]\n# y_tr = y[split_train_no_pad]\n# y_v = y[split['val']]\n\n# # Select well labels for validation data\n# well_v = well[split['val']]\n\n# # Train and test\n# y_v_hat = train_and_test(X_tr, y_tr, X_v, well_v, param)\n\n# # Score\n# score = f1_score(y_v, y_v_hat, average='micro')\n# score_split.append(score)\n\n# # Average score for this param\n# score_param.append(np.mean(score_split))\n# print('F1 score = %.3f %s' % (score_param[-1], param))\n\n# # Best set of parameters\n# best_idx = np.argmax(score_param)\n# param_best = param_grid[best_idx]\n# score_best = score_param[best_idx]\n# print('\\nBest F1 score = %.3f %s' % (score_best, param_best))",
"Predict labels on test data\nLet us now apply the selected classification technique to test data.",
"param_best = {'S': 25, 'M': 10, 'L': 5, 'N': 100}\n\n# Load data from file\ntest_data = pd.read_csv('../validation_data_nofacies.csv')\n\n# Prepare test data\nwell_ts = test_data['Well Name'].values\ndepth_ts = test_data['Depth'].values\nX_ts = test_data[feature_names].values\n\ny_pred = []\nprint('o' * 100)\nfor seed in range(100):\n np.random.seed(seed)\n\n # Make training data.\n X_train, padded_rows = augment_features(X, well, depth, seed=seed)\n y_train = y\n X_train = np.delete(X_train, padded_rows, axis=0)\n y_train = np.delete(y_train, padded_rows, axis=0) \n param = param_best\n clf = OneVsOneClassifier(RandomForestClassifier(n_estimators=param['N'], criterion='entropy',\n max_features=param['M'], min_samples_split=param['S'], min_samples_leaf=param['L'],\n class_weight='balanced', random_state=seed), n_jobs=-1)\n \n # Make blind data.\n X_test, _ = augment_features(X_ts, well_ts, depth_ts, seed=seed, pe=False)\n\n # Train and test.\n y_ts_hat = train_and_test(X_train, y_train, X_test, well_ts, clf)\n \n # Collect result.\n y_pred.append(y_ts_hat)\n print('.', end='')\n \nnp.save('100_realizations.npy', y_pred)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
nicolas998/wmf | Examples/Ejemplo_Hidrologia_Maximos.ipynb | gpl-3.0 | [
"Realiza el análisis hidrológico de la cuenca de Danta",
"%matplotlib inline\nfrom wmf import wmf \nimport numpy as np\nimport pylab as pl\nimport datetime as dt\nimport os\nimport pandas as pd\nimport pickle\nimport plot_y_tablas as pyt\nfrom scipy import stats as stat\n\nfrom IPython.display import HTML\n\nHTML('''<script>\ncode_show=true; \nfunction code_toggle() {\n if (code_show){\n $('div.input').hide();\n } else {\n $('div.input').show();\n }\n code_show = !code_show\n} \n$( document ).ready(code_toggle);\n</script>\n<form action=\"javascript:code_toggle()\"><input type=\"submit\" value=\"Click here to toggle on/off the raw code.\"></form>''')",
"Lectura de mapas de direcciones y de elevación:\n\nTrazado de cuencas y corrientes",
"cuCap = wmf.SimuBasin(0,0,0,0,rute='/media/nicolas/discoGrande/01_SIATA/nc_cuencas/Picacha_Abajo.nc')\n\n#Guarda Vector de la cuenca\ncuCap.Save_Basin2Map('/media/nicolas/discoGrande/01_SIATA/vector/Cuenca_AltaVista2.shp')\n\ncuCap.Save_Net2Map('/media/nicolas/discoGrande/01_SIATA/vector/Red_Altavista_Abajo.shp',dx = 12.7, umbral=470)",
"Tiempo de viaje",
"cuCap.GetGeo_Parameters()\n\ncuCap.Tc\n\n#Parametros Geomorfologicos de las cuencas \ncuCap.GetGeo_Parameters(rutaParamASC=ruta_images+'param_cap.txt',\n plotTc=True,\n rutaTcPlot=ruta_images+'Tc_cap.png')",
"No se tienen en cuenta los tiempos de concentración de campo y munera y Giandotti, los demás si, se tiene como tiempo de concentración medio un valor de $T_c = 2.69 hrs$",
"0.58*60.0\n\n#Tiempo medio y mapas de tiempos de viajes\nTcCap = np.array(cuCap.Tc.values()).mean()\n#Calcula tiempos de viajes\ncuCap.GetGeo_IsoChrones(TcCap, Niter= 6)\n\n#Figura de tiempos de viaje \ncuCap.Plot_basin(cuCap.CellTravelTime,\n ruta = '/media/nicolas/discoGrande/01_SIATA/ParamCuencas/AltaVistaAbajo/IsoCronas.png', \n lines_spaces=0.01)",
"Este mapa debe ser recalculado con una mayor cantidad de iteraciones, lo dejamos haciendo luego, ya que toma tiempo, de momento esta malo.",
"ruta_images = '/media/nicolas/discoGrande/01_SIATA/ParamCuencas/AltaVistaAbajo/'\n\ncuCap.Plot_Travell_Hist(ruta=ruta_images + 'Histogram_IsoCronas.png')",
"Curva hipsometrica y cauce ppal",
"cuCap.GetGeo_Cell_Basics()\ncuCap.GetGeo_Ppal_Hipsometric(intervals=50)\n\ncuCap.Plot_Hipsometric(normed=True,ventana=10, ruta=ruta_images+'Hipsometrica_Captacion.png')\n\ncuCap.PlotPpalStream(ruta=ruta_images+'Perfil_cauce_ppal_Capta.png')",
"El cauce principal presenta un desarrollo típico de una cuenca mediana-grande, en donde de ve claramente una zona de producción de sedimentos entre los 0 y 10 km, y de los 10km en adelante se presenta una zona de transporte y depositación con pendientes que oscilan entre 0.0 y 0.8 %",
"cuCap.PlotSlopeHist(bins=[0,2,0.2],ruta=ruta_images+'Slope_hist_cap.png')",
"El histograma de pendientes muestra que gran parte de las pendientes son inferiores al 0.6, por lo cual se considera que el cauce ppal de la cuenca se desarrolla ppalmente sobre un valle.\nMapas de Variables Geomorfo",
"cuCap.GetGeo_HAND()\n\ncuCap.Plot_basin(cuCap.CellHAND_class, ruta=ruta_images+'Map_HAND_class.png', lines_spaces=0.01)\n\ncuCap.Plot_basin(cuCap.CellSlope, ruta=ruta_images + 'Map_Slope.png', lines_spaces=0.01)",
"El mapa de pendientes muestra como las mayores pendientes en la cuenca se desarrollan en la parte alta de la misma, en la medida en que se observa el desarrollo de la cuenca en la zona baja esta muestra claramente como las pendientes comienzan a ser bajas.",
"IT = cuCap.GetGeo_IT()\ncuCap.Plot_basin(IT, ruta = ruta_images+'Indice_topografico.png', lines_spaces= 0.01)",
"Precipitación\nA continuación se realiza el análisis de la precipitación en la zona, de esta manera se conocen las condiciones climáticas en la región.\nProcedimiento para Desagregar lluvia (obtener IDF)\nLee la estación de epm con datos horarios \nCaudales\nCalculo de caudales medio de largo plazo mediante el campo de precipitación estimado para la zona mediante el uso de las estaciones del IDEAM \n\nQ medio por balance.\nQmax por regionalización y HU sintéticas \nQmin por regionalización y análisis de serie de caudales simulada a la salida de la cuenca \n\nCaudal Medio Largo Plazo",
"Precip = 1650\ncuCap.GetQ_Balance(Precip)\ncuCap.Plot_basin(cuCap.CellETR, ruta=ruta_images+'Map_ETR_Turc.png', lines_spaces=0.01)\n\ncuCap.GetQ_Balance(Precip)\nprint 'Caudal Captacion:', cuCap.CellQmed[-1]\n\ncuCap.Plot_basin(Precip - cuCap.CellETR, ruta = ruta_images+'Map_RunOff_mm_ano.png',\n lines_spaces=0.01,\n colorTable = 'jet_r')",
"Caudales extremos por regionalización\nSe calculan los caudales extremos máximos y mínimos para los periodos de retorno de:\n- 2.33, 5, 10, 25, 50, 75 y 100\nSe utilizan las siguientes metodologías:\n\nRegionalización con gumbel y lognormal.",
"#Periodos de retrorno para obtener maximos y minimos\nTr=[2.33, 5, 10, 25, 50, 100]\n\nQmaxRegGum = cuCap.GetQ_Max(cuCap.CellQmed, Dist='gumbel', Tr= Tr, Coef = [6.71, 3.29], Expo = [1.2, 0.9])\nQmaxRegLog = cuCap.GetQ_Max(cuCap.CellQmed, Dist='lognorm', Tr= Tr, Coef = [6.71, 3.29], Expo = [1.2, 0.9])\nQminRegLog = cuCap.GetQ_Min(cuCap.CellQmed, Dist='lognorm', Tr= Tr,)\nQminRegGum = cuCap.GetQ_Min(cuCap.CellQmed, Dist='gumbel', Tr= Tr,)",
"Se guarda el mapa con el caudal medio, y los maximos y minimos para toda la red hídrica de la cuenca",
"Dict = {'Qmed':cuCap.CellQmed}\nfor t,q in zip([2.33,5,10,25,50,100],QminRegGum): \n Dict.update({'min_g'+str(t):q})\nfor t,q in zip([2.33,5,10,25,50,100],QminRegLog): \n Dict.update({'min_l'+str(t):q})\nfor t,q in zip([2.33,5,10,25,50,100],QmaxRegGum): \n Dict.update({'max_g'+str(t):q})\nfor t,q in zip([2.33,5,10,25,50,100],QmaxRegLog): \n Dict.update({'max_l'+str(t):q})",
"Caudales Máximos\nAdicional a los caudales máximos estimados por regionalización, se estiman los caudales máximos por el método de hidrógrafa unitaria sintética:\n\nsneyder.\nscs.\nwilliams",
"cuCap.GetGeo_Parameters()\n\n#Parametros pára maximos \nTcCap = np.median(cuCap.Tc.values())\n#CN = 50\nCN=80\nprint 'tiempo viaje medio Captacion:', TcCap\nprint u'Número de curva:', CN\n\n# Obtención de la lluvia de diseño.\nIntensidad = [40.9, 49.5, 55.5, 60.6, 67.4, 75.7]\n# Lluvia efectiva \nlluviaTr,lluvEfect,S = cuCap.GetHU_DesingStorm(np.array(Intensidad),\n\tTcCap,\n\tCN=CN,\n\tplot='si',\n\truta=ruta_images + 'Q_max_LLuvia_Efectiva_descarga.png',\n\tTr=[2.33, 5, 10, 25, 50, 75, 100])",
"Se presenta en la figura como para diferentes periodos de retorno se da una pérdida de la cantidad de lluvia efectiva",
"#Calcula los HU para la descarga\nTscs,Qscs,HU=cuCap.GetHU_SCS(cuCap.GeoParameters['Area[km2]'],\n TcCap,)\nTsnyder,Qsnyder,HU,Diferencia=cuCap.GetHU_Snyder(cuCap.GeoParameters['Area[km2]'],\n\tTcCap,\n\tCp=0.8,\n\tFc=2.9)\n\t#Cp=1.65/(np.sqrt(PendCauce)**0.38))\nTwilliam,Qwilliam,HU=cuCap.GetHU_Williams(cuCap.GeoParameters['Area[km2]'],\n\tcuCap.GeoParameters['Long_Cuenca [km]'],\n\t780,\n\tTcCap)\n#Agrupa los hidrogramas unitarios para luego plotearlos\nD = {'snyder':{'time':Tsnyder,'HU':Qsnyder},\n 'scs':{'time':Tscs,'HU':Qscs},\n 'williams':{'time':Twilliam,'HU':Qwilliam}}\n#Hace el plot de ellos \ncuCap.PlotHU_Synthetic(D,ruta=ruta_images + 'Q_max_HU.png')",
"Hidrogramas unitarios calibrados para la cuenca, williams muestra un rezago en este caso con las demás metodologías",
"#QmaxRegGum = cuCap.GetQ_Max(cuCap.CellQmed, Dist='gumbel', Tr= Tr, Coef = [6.71, 3.29], Expo = [1.2, 0.9])\n#QmaxRegLog = cuCap.GetQ_Max(cuCap.CellQmed, Dist='lognorm', Tr= Tr, Coef = [6.71, 3.29], Expo = [1.2, 0.9])\n\n#Realiza la convolucion de los hidrogramas sinteticos con la tormenta de diseno\nHidroSnyder,QmaxSnyder,Tsnyder = cuCap.GetHU_Convolution(Tsnyder,Qsnyder,lluvEfect)\nHidroWilliam,QmaxWilliam,Twilliam = cuCap.GetHU_Convolution(Twilliam,Qwilliam,lluvEfect)\nHidroSCS,QmaxSCS,Tscs = cuCap.GetHU_Convolution(Tscs,Qscs,lluvEfect)\n\nDicQmax = {#'Snyder':QmaxSnyder,\n #'Williams':QmaxWilliam,\n #'SCS': QmaxSCS,\n 'Gumbel': QmaxRegGum[:,-1],\n 'Log-Norm': QmaxRegLog[:,-1]}\n#Plot de maximos \npyt.PlotQmaxTr(DicQmax,Tr,ruta=ruta_images + 'Q_max_Metodos_Tr_descarga.png')\n#Tablas de maximos \nDataQmax = pd.DataFrame(DicQmax, index=Tr)\n#Escritura en excel.\nwriter = pd.ExcelWriter(ruta_images + 'Qmax_captacion.xlsx')\nDataQmax.to_excel(ruta_images + 'Qmax_captacion.xls')\n\ncuCap.GeoParameters"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
julienchastang/unidata-python-workshop | notebooks/Python_Ecosystem/Scientific_Python_Ecosystem_Overview.ipynb | mit | [
"The Scientific Python Ecosystem\nPython\nPython is a interpreted, high-level programming language that is meant to be easily understandable and usable for a multitude of purposes. It is composed of libraries that contain useful tools for you to do quick and efficient data analysis and visualization. These libraries are like Lego blocks - you can pick and choose which ones you want to build your end product. The Scientific Python Ecosystem is composed of key libraries (i.e. NumPy, SciPy, Pandas, Matplotlib) that serve as a basis for most other libraries (i.e. MetPy). In this notebook, we'll briefly touch on several of these foundational libraries of the SciPy Ecosystem.\nGetting Python: Anaconda\nAnaconda provides distributions of Python and the main third-party packages either as a full distribution or as a lighter-weight verison, \"Miniconda\". We recommend using Anaconda to build and maintain your Python stack, as it provides command line tools to download and update Python libraries. You can check it out at https://www.anaconda.com/distribution/.\nJupyter\nThe Jupyter library provides \"literate programming\" interfaces for Python and other programming languages. This file is displayed using the Jupyter library, either within Jupyter Notebook or Lab. It incorporates code, prose, and other text (equations, HTML) to make a seamless document for your analysis or presentation by working in small blocks. This also allows for quick prototyping and debugging of code as you write!\nThe Building Blocks of the SciPy World\n\nWhile Python is the basis for everything, this figure demonstrates how packages build on top of each other (causing dependencies). Additionally, packages are constantly under development, so this structure does have some transient nature to it, as the SciPy world continue to expand (see Dask as a recent addition to this framework).\nData Analysis/Computation Libraries\nNumPy\nNumPy is the primary numerical computation library in Python. It works with N-dimensional arrays and matrices and performs basic computations on them.",
"import numpy as np\nx = np.arange(1,11)\ny = np.arange(100,110)\nmean_x_y = np.mean([x,y])\nprint(mean_x_y)",
"Pandas\nPandas is an excellent library for handling tabular data and quickly performing data analysis on it. It can handle many textfile types.",
"import pandas as pd\ndf = pd.read_csv('../Pandas/Jan17_CO_ASOS.txt', sep='\\t')\ndf.head()",
"xarray\nxarray is a Python library meant to handle N-dimensional arrays with metadata (think netCDF files). With the Dask library, it can work with Big Data efficiently in a Python framework.",
"import xarray as xr\nds = xr.open_dataset('../../data/NARR_19930313_0000.nc')\nds",
"Dask\nDask is a parallel-computing library in Python. You can use it on your laptop, cloud environment, or on a high-performance computer (NCAR's Cheyenne for example). It allows for lazy evaluations so that computations only occur after you've chained all of your operations together. Additionally, it has a built-in scheduler to scale with your computational demand to optimize your parellel resources.\nSciPy\nThe SciPy library has a lot of advanced mathematical functions that are not contained in Numpy, including Fast Fourier Transforms, interpolation methods, and linear algebra operations.\nScikit-learn\nScikit-learn is the primary machine learning library for Python. It can do simple things like regressions and classifications, or more advanced techniques like random forests. It can perform some neural network operations, but for big data implementations, check out the keras library.\nScikit-image\nAn image processing library built on NumPy\nVisualization Libraries\nMatplotlib\nMatplotlib is one of the core visualization libraries in Python and produces publication-quality figures without much configuration.",
"import matplotlib.pyplot as plt\nplt.plot(x,y)\nplt.title('Demo of Matplotlib')\nplt.show()",
"CartoPy\nCartoPy is the primary geographical mapping and visualization library in Python, as support for Basemap has been discontinued. It can handle various projections and transformation to/from projections to map data accurately for your problem.\nAtmospheric Science Libraries\nMetPy\nMetPy is developed at Unidata with support from the user community as a replacement for GEMPAK. Its primary functions are to read in data, perform meteorological calculations on it, and visualize it in useful way for education and research. \nPint\nPint is a unit-handling library, which MetPy relies upon for its calculations. Pint allow units to be attached to NumPy arrays, which allows for unit-aware calculations and easy conversions to reduce unit-based errors.\nnetcdf4-python\nThis is another Unidata package that serves as an interface from Python to the netCDF-C library. As a result, netCDF files can easily be read and written in Python.\nSiphon\nThe Siphon library, developed at Unidata, is a remote access library, built for accessing data on THREDDS servers, but also has hooks into the Wyoming, IGRA, and Iowa State upper air databases, the National Buoy Data Center, and the NHC and SPC storm reports as well. \nFor more information on the SciPy Ecosystem, check out these links: https://www.scipy.org/about.html and https://scipy-lectures.org/intro/intro.html"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nehal96/Deep-Learning-ND-Exercises | Sentiment Analysis/Handwritten Digit Recognition with TFLearn and MNIST/handwritten-digit-recognition-with-tflearn.ipynb | mit | [
"Handwritten Number Recognition with TFLearn and MNIST\nIn this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9. \nThis kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.\nWe'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.",
"# Import Numpy, TensorFlow, TFLearn, and MNIST data\nimport numpy as np\nimport tensorflow as tf\nimport tflearn\nimport tflearn.datasets.mnist as mnist",
"Retrieving training and test data\nThe MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.\nEach MNIST data point has:\n1. an image of a handwritten digit and \n2. a corresponding label (a number 0-9 that identifies the image)\nWe'll call the images, which will be the input to our neural network, X and their corresponding labels Y.\nWe're going to want our labels as one-hot vectors, which are vectors that holds mostly 0's and one 1. It's easiest to see this in a example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].\nFlattened data\nFor this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values. \nFlattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.",
"# Retrieve the training and test data\ntrainX, trainY, testX, testY = mnist.load_data(one_hot=True)",
"Visualize the training data\nProvided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with it's corresponding label in the title.",
"# Visualizing the data\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Function for displaying a training image by it's index in the MNIST set\ndef show_digit(index):\n label = trainY[index].argmax(axis=0)\n # Reshape 784 array into 28x28 image\n image = trainX[index].reshape([28,28])\n plt.title('Training data, index: %d, Label: %d' % (index, label))\n plt.imshow(image, cmap='gray_r')\n plt.show()\n \n# Display the first (index 0) training image\nshow_digit(0)\nshow_digit(10)",
"Building the network\nTFLearn lets you build the network by defining the layers in that network. \nFor this example, you'll define:\n\nThe input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. \nHidden layers, which recognize patterns in data and connect the input to the output layer, and\nThe output layer, which defines how the network learns and outputs a label for a given image.\n\nLet's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,\nnet = tflearn.input_data([None, 100])\nwould create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.\nAdding layers\nTo add new hidden layers, you use \nnet = tflearn.fully_connected(net, n_units, activation='ReLU')\nThis adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call, it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling tflearn.fully_connected(net, n_units). \nThen, to set how you train the network, use:\nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nAgain, this is passing in the network you've been building. The keywords: \n\noptimizer sets the training method, here stochastic gradient descent\nlearning_rate is the learning rate\nloss determines how the network error is calculated. In this example, with categorical cross-entropy.\n\nFinally, you put all this together to create the model with tflearn.DNN(net).\nExercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.\nHint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.",
"# Define the neural network\ndef build_model(learning_rate):\n # This resets all parameters and variables, leave this here\n tf.reset_default_graph()\n \n #### Your code ####\n # Include the input layer, hidden layer(s), and set how you want to train the model\n \n # Input layer\n net = tflearn.input_data([None, 784])\n \n # Hidden layers\n net = tflearn.fully_connected(net, 200, activation='ReLU')\n net = tflearn.fully_connected(net, 40, activation='ReLU')\n \n # Output layers\n net = tflearn.fully_connected(net, 10, activation='softmax')\n net = tflearn.regression(net, optimizer='sgd', learning_rate=learning_rate, loss='categorical_crossentropy')\n \n # This model assumes that your network is named \"net\" \n model = tflearn.DNN(net)\n return model\n\n# Build the model\nmodel = build_model(learning_rate=0.1)",
"Training the network\nNow that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. \nToo few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!",
"# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=10)",
"Testing\nAfter you're satisified with the training output and accuracy, you can then run the network on the test data set to measure it's performance! Remember, only do this after you've done the training and are satisfied with the results.\nA good result will be higher than 98% accuracy! Some simple models have been known to get up to 99.7% accuracy.",
"# Compare the labels that our model predicts with the actual labels\npredictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)\n\n# Calculate the accuracy, which is the percentage of times the predicated labels matched the actual labels\ntest_accuracy = np.mean(predictions == testY[:,0], axis=0)\n\n# Print out the result\nprint(\"Test accuracy: \", test_accuracy)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gregorjerse/rt2 | 2015_2016/lab13/Extending values on vertices.ipynb | gpl-3.0 | [
"Extending values on vertices to a discrete gradient vector field\nDuring extension algorithm one has to compute lover_link for every vertex in the complex. So let us implement search for the lower link first. It requires quite a lot of code: first we find a star, then link and finally lower link for the given simplex.",
"from itertools import combinations, chain\n\ndef simplex_closure(a): \n \"\"\"Returns the generator that iterating over all subsimplices (of all dimensions) in the closure\n of the simplex a. The simplex a is also included.\n \"\"\"\n return chain.from_iterable([combinations(a, l) for l in range(1, len(a) + 1)])\n \ndef closure(K):\n \"\"\"Add all missing subsimplices to K in order to make it a simplicial complex.\"\"\"\n return list({s for a in K for s in simplex_closure(a)})\n\ndef contained(a, b):\n \"\"\"Returns True is a is a subsimplex of b, False otherwise.\"\"\"\n return all((v in b for v in a))\n\ndef star(s, cx):\n \"\"\"Return the set of all simplices in the cx that contais simplex s.\n \"\"\"\n return {p for p in cx if contained(s, p)}\n\ndef intersection(s1, s2):\n \"\"\"Return the intersection of s1 and s2.\"\"\"\n return list(set(s1).intersection(s2))\n\ndef link(s, cx):\n \"\"\"Returns link of the simplex s in the complex cx.\n \"\"\"\n # Link consists of all simplices from the closed star that have \n # empty intersection with s.\n return [c for c in closure(star(s, cx)) if not intersection(s, c)]\n\ndef simplex_value(s, f, aggregate):\n \"\"\"Return the value of f on vertices of s\n aggregated by the aggregate function.\n \"\"\"\n return aggregate([f[v] for v in s])\n\ndef lower_link(s, cx, f):\n \"\"\"Return the lower link of the simplex s in the complex cx.\n The dictionary f is the mapping from vertices (integers)\n to the values on vertices.\n \"\"\"\n sval = simplex_value(s, f, min)\n return [s for s in link(s, cx) \n if simplex_value(s, f, max) < sval]",
"Let us test the above function on the simple example: full triangle with values 0, 1 and 2 on the vertices labeled with 1, 2 and 3.",
"K = closure([(1, 2, 3)])\nf = {1: 0, 2: 1, 3: 2}\nfor v in (1, 2, 3):\n print\"{0}: {1}\".format((v,), lower_link((v,), K, f))",
"Now let us implement an extension algorithm. We are leaving out the cancelling step for clarity.",
"def join(a, b):\n \"\"\"Return the join of 2 simplices a and b.\"\"\"\n return tuple(sorted(set(a).union(b)))\n\ndef extend(K, f):\n \"\"\"Extend the field to the complex K.\n Function on vertices is given in f.\n Returns the pair V, C, where V is the dictionary containing discrete gradient vector field\n and C is the list of all critical cells.\n \"\"\"\n V = dict()\n C = []\n for v in (s for s in K if len(s)==1):\n ll = lower_link(v, K, f)\n if len(ll) b== 0:\n C.append(v)\n else:\n V1, C1 = extend(ll, f)\n mv, mc = min([(f[c[0]], c) for c in C1 if len(c)==1])\n V[v] = join(v, mc)\n for c in (c for c in C1 if c != mc):\n C.append(join(v, c))\n for a, b in V1.items():\n V[join(a, v)] = join(b, v)\n return V, C",
"Let us test the algorithm on the example from the previous step (full triangle).",
"K = closure([(1, 2, 3)])\nf = {1: 0, 2: 1, 3: 2}\nextend(K, f)\n\nK = closure([(1, 2, 3), (2, 3, 4)])\nf = {1: 0, 2: 1, 3: 2, 4: 0}\nextend(K, f)\n\nK = closure([(1, 2, 3), (2, 3, 4)])\nf = {1: 0, 2: 1, 3: 2, 4: 3}\nextend(K, f)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DJCordhose/ai | notebooks/workshops/tss/cnn-standard-architectures.ipynb | mit | [
"Training on an Advanced Standard CNN Architecture\n\nhttps://keras.io/applications/\nThe 9 Deep Learning Papers You Need To Know About: https://adeshpande3.github.io/adeshpande3.github.io/The-9-Deep-Learning-Papers-You-Need-To-Know-About.html\n\nNeural Network Architectures\n\ntop-1 rating on ImageNet: https://stats.stackexchange.com/questions/156471/imagenet-what-is-top-1-and-top-5-error-rate",
"import warnings\nwarnings.filterwarnings('ignore')\n\n%matplotlib inline\n%pylab inline\n\nimport matplotlib.pylab as plt\nimport numpy as np\n\nfrom distutils.version import StrictVersion\n\nimport sklearn\nprint(sklearn.__version__)\n\nassert StrictVersion(sklearn.__version__ ) >= StrictVersion('0.18.1')\n\nimport tensorflow as tf\ntf.logging.set_verbosity(tf.logging.ERROR)\nprint(tf.__version__)\n\nassert StrictVersion(tf.__version__) >= StrictVersion('1.1.0')\n\nimport keras\nprint(keras.__version__)\n\nassert StrictVersion(keras.__version__) >= StrictVersion('2.0.0')\n\nimport pandas as pd\nprint(pd.__version__)\n\nassert StrictVersion(pd.__version__) >= StrictVersion('0.20.0')",
"Preparation",
"# for VGG, ResNet, and MobileNet\nINPUT_SHAPE = (224, 224)\n\n# for InceptionV3, InceptionResNetV2, Xception\n# INPUT_SHAPE = (299, 299)\n\nimport os\nimport skimage.data\nimport skimage.transform\nfrom keras.utils.np_utils import to_categorical\nimport numpy as np\n\ndef load_data(data_dir, type=\".ppm\"):\n num_categories = 6\n\n # Get all subdirectories of data_dir. Each represents a label.\n directories = [d for d in os.listdir(data_dir) \n if os.path.isdir(os.path.join(data_dir, d))]\n # Loop through the label directories and collect the data in\n # two lists, labels and images.\n labels = []\n images = []\n for d in directories:\n label_dir = os.path.join(data_dir, d)\n file_names = [os.path.join(label_dir, f) for f in os.listdir(label_dir) if f.endswith(type)]\n # For each label, load it's images and add them to the images list.\n # And add the label number (i.e. directory name) to the labels list.\n for f in file_names:\n images.append(skimage.data.imread(f))\n labels.append(int(d))\n images64 = [skimage.transform.resize(image, INPUT_SHAPE) for image in images]\n y = np.array(labels)\n y = to_categorical(y, num_categories)\n X = np.array(images64)\n return X, y\n\n# Load datasets.\nROOT_PATH = \"./\"\noriginal_dir = os.path.join(ROOT_PATH, \"speed-limit-signs\")\noriginal_images, original_labels = load_data(original_dir, type=\".ppm\")\n\nX, y = original_images, original_labels",
"Uncomment next three cells if you want to train on augmented image set\nOtherwise Overfitting can not be avoided because image set is simply too small",
"# !curl -O https://raw.githubusercontent.com/DJCordhose/speed-limit-signs/master/data/augmented-signs.zip\n# from zipfile import ZipFile\n# zip = ZipFile('augmented-signs.zip')\n# zip.extractall('.')\n\ndata_dir = os.path.join(ROOT_PATH, \"augmented-signs\")\naugmented_images, augmented_labels = load_data(data_dir, type=\".png\")\n\n# merge both data sets\n\nall_images = np.vstack((X, augmented_images))\nall_labels = np.vstack((y, augmented_labels))\n\n# shuffle\n# https://stackoverflow.com/a/4602224\n\np = numpy.random.permutation(len(all_labels))\nshuffled_images = all_images[p]\nshuffled_labels = all_labels[p]\nX, y = shuffled_images, shuffled_labels",
"Split test and train data 80% to 20%",
"from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)\nX_train.shape, y_train.shape",
"Training Xception\n\nSlighly optimized version of Inception: https://keras.io/applications/#xception\nInception V3 no longer using non-sequential tower architecture, rahter short cuts: https://keras.io/applications/#inceptionv3\nUses Batch Normalization:\nhttps://keras.io/layers/normalization/#batchnormalization\nhttp://cs231n.github.io/neural-networks-2/#batchnorm\nBatch Normalization still exist even in prediction model\nnormalizes activations for each batch around 0 and standard deviation close to 1\nreplaces Dropout except for final fc layers\nas a next step might make sense to alter classifier to again have Dropout for training\n\n\nAll that makes it ideal for our use case",
"from keras.applications.xception import Xception\n\nmodel = Xception(classes=6, weights=None)\n\nmodel.summary()\n\nmodel.compile(optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n# !rm -rf ./tf_log\n# https://keras.io/callbacks/#tensorboard\ntb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log')\n# To start tensorboard\n# tensorboard --logdir=./tf_log\n# open http://localhost:6006",
"This is a truly complex model\nBatch size needs to be small overthise model does not fit in memory\nWill take long to train, even on GPU\non augmented dataset 4 minutes on K80 per Epoch: 400 Minutes for 100 Epochs = 6-7 hours",
"# Depends on harware GPU architecture, model is really complex, batch needs to be small (this works well on K80)\nBATCH_SIZE = 25\n\nearly_stopping_callback = keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, verbose=1)\n\n%time model.fit(X_train, y_train, epochs=50, validation_split=0.2, callbacks=[tb_callback, early_stopping_callback], batch_size=BATCH_SIZE)",
"Each Epoch takes very long\nExtremely impressing how fast it converges: Almost 100% for validation starting from epoch 25\nTODO: Metrics for Augmented Data\nAccuracy\n\nValidation Accuracy",
"train_loss, train_accuracy = model.evaluate(X_train, y_train, batch_size=BATCH_SIZE)\ntrain_loss, train_accuracy\n\ntest_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)\ntest_loss, test_accuracy\n\noriginal_loss, original_accuracy = model.evaluate(original_images, original_labels, batch_size=BATCH_SIZE)\noriginal_loss, original_accuracy\n\nmodel.save('xception-augmented.hdf5')\n\n!ls -lh xception-augmented.hdf5",
"Alternative: ResNet\n\nbasic ideas\ndepth does matter\n8x deeper than VGG\npossible by using shortcuts and skipping final fc layer\nhttps://keras.io/applications/#resnet50\nhttps://medium.com/towards-data-science/neural-network-architectures-156e5bad51ba\n\nhttp://arxiv.org/abs/1512.03385",
"from keras.applications.resnet50 import ResNet50\n\nmodel = ResNet50(classes=6, weights=None)\n\nmodel.summary()\n\nmodel.compile(optimizer='adam',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\nearly_stopping_callback = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbose=1)\n\n!rm -rf ./tf_log\n# https://keras.io/callbacks/#tensorboard\ntb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log')\n# To start tensorboard\n# tensorboard --logdir=./tf_log\n# open http://localhost:6006\n\n# Depends on harware GPU architecture, model is really complex, batch needs to be small (this works well on K80)\nBATCH_SIZE = 50\n\n# https://github.com/fchollet/keras/issues/6014\n# batch normalization seems to mess with accuracy when test data set is small, accuracy here is different from below\n%time model.fit(X_train, y_train, epochs=50, validation_split=0.2, batch_size=BATCH_SIZE, callbacks=[tb_callback, early_stopping_callback])\n# %time model.fit(X_train, y_train, epochs=50, validation_split=0.2, batch_size=BATCH_SIZE, callbacks=[tb_callback])",
"Results are a bit less good\nMaybe need to train longer?\nBatches can be larger, training is faster even though more epochs\nMetrics for Augmented Data\nAccuracy\n\nValidation Accuracy",
"train_loss, train_accuracy = model.evaluate(X_train, y_train, batch_size=BATCH_SIZE)\ntrain_loss, train_accuracy\n\ntest_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)\ntest_loss, test_accuracy\n\noriginal_loss, original_accuracy = model.evaluate(original_images, original_labels, batch_size=BATCH_SIZE)\noriginal_loss, original_accuracy\n\nmodel.save('resnet-augmented.hdf5')\n\n!ls -lh resnet-augmented.hdf5",
"No Hands-On Possible\nAll experiments take ours of compute time even on GPU\nWe can experiment with the results in the final notebook: How well do the different models generalize to real life?"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
darioizzo/d-CGP | doc/sphinx/notebooks/symbolic_regression_3.ipynb | gpl-3.0 | [
"Multi-objective memetic approach\nIn this third tutorial we consider an example with two dimensional input data and we approach its solution using a multi-objective approach where, aside the loss, we consider the formula complexity as a second objective.\nWe will use a memetic approach to learn the model parameters while evolution will shape the model itself.\nEventually you will learn:\n\n\nHow to instantiate a multi-objective symbolic regression problem.\n\n\nHow to use a memetic multi-objective approach to find suitable models for your data",
"# Some necessary imports.\nimport dcgpy\nimport pygmo as pg\n# Sympy is nice to have for basic symbolic manipulation.\nfrom sympy import init_printing\nfrom sympy.parsing.sympy_parser import *\ninit_printing()\n# Fundamental for plotting.\nfrom matplotlib import pyplot as plt\n%matplotlib inline",
"1 - The data",
"# We load our data from some available ones shipped with dcgpy.\n# In this particular case we use the problem sinecosine from the paper:\n# Vladislavleva, Ekaterina J., Guido F. Smits, and Dick Den Hertog.\n# \"Order of nonlinearity as a complexity measure for models generated by symbolic regression via pareto genetic\n# programming.\" IEEE Transactions on Evolutionary Computation 13.2 (2008): 333-349. \nX, Y = dcgpy.generate_sinecosine()\n\n\nfrom mpl_toolkits.mplot3d import Axes3D \n# And we plot them as to visualize the problem.\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n_ = ax.scatter(X[:,0], X[:,1], Y[:,0])\n\n\n",
"2 - The symbolic regression problem",
"# We define our kernel set, that is the mathematical operators we will\n# want our final model to possibly contain. What to choose in here is left\n# to the competence and knowledge of the user. A list of kernels shipped with dcgpy \n# can be found on the online docs. The user can also define its own kernels (see the corresponding tutorial).\nss = dcgpy.kernel_set_double([\"sum\", \"diff\", \"mul\", \"sin\", \"cos\"])\n\n# We instantiate the symbolic regression optimization problem\n# Note how we specify to consider one ephemeral constant via\n# the kwarg n_eph. We also request 100 kernels with a linear \n# layout (this allows for the construction of longer expressions) and\n# we set the level back to 101 (in an attempt to skew the search towards\n# simple expressions)\nudp = dcgpy.symbolic_regression(\n points = X, labels = Y, kernels=ss(), \n rows = 1, \n cols = 100, \n n_eph = 1, \n levels_back = 101,\n multi_objective=True)\nprob = pg.problem(udp)\nprint(udp)",
"3 - The search algorithm",
"# We instantiate here the evolutionary strategy we want to use to\n# search for models. Note we specify we want the evolutionary operators\n# to be applied also to the constants via the kwarg *learn_constants*\nuda = dcgpy.momes4cgp(gen = 250, max_mut = 4)\nalgo = pg.algorithm(uda)\nalgo.set_verbosity(10)",
"4 - The search",
"# We use a population of 100 individuals\npop = pg.population(prob, 100)\n\n# Here is where we run the actual evolution. Note that the screen output\n# will show in the terminal (not on your Jupyter notebook in case \n# you are using it). Note you will have to run this a few times before \n# solving the problem entirely.\npop = algo.evolve(pop)",
"5 - Inspecting the non dominated front",
"# Compute here the non dominated front.\nndf = pg.non_dominated_front_2d(pop.get_f())\n\n# Inspect the front and print the proposed expressions.\nprint(\"{: >20} {: >30}\".format(\"Loss:\", \"Model:\"), \"\\n\")\nfor idx in ndf:\n x = pop.get_x()[idx]\n f = pop.get_f()[idx]\n a = parse_expr(udp.prettier(x))[0]\n print(\"{: >20} | {: >30}\".format(str(f[0]), str(a)), \"|\")\n\n# Lets have a look to the non dominated fronts in the final population.\nax = pg.plot_non_dominated_fronts(pop.get_f())\n_ = plt.xlabel(\"loss\")\n_ = plt.ylabel(\"complexity\")\n_ = plt.title(\"Non dominate fronts\")",
"6 - Lets have a look to the log content",
"# Here we get the log of the latest call to the evolve\nlog = algo.extract(dcgpy.momes4cgp).get_log()\ngen = [it[0] for it in log]\nloss = [it[2] for it in log]\ncompl = [it[4] for it in log]\n\n# And here we plot, for example, the generations against the best loss\n_ = plt.plot(gen, loss)\n_ = plt.title('last call to evolve')\n_ = plt.xlabel('generations')\n_ = plt.ylabel('loss')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mercybenzaquen/foundations-homework | foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mit | [
"!pip install pandas\n\n!pip install matplotlib\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Open your dataset up using pandas in a Jupyter notebook",
"df = pd.read_csv(\"congress.csv\", error_bad_lines=False)",
"Do a .head() to get a feel for your data",
"df.head()\n\n#bioguide: The alphanumeric ID for legislators in http://bioguide.congress.gov.",
"Write down 12 questions to ask your data, or 12 things to hunt for in the data\n1)How many senators and how many representatives in total since 1947?",
"df['chamber'].value_counts() #sounds like a lot. We might have repetitions.\n\ndf['bioguide'].describe() #we count the bioguide, which is unique to each legislator.\n#There are only 3188 unique values, hence only 3188 senators and representatives in total.",
"2) How many from each party in total ?",
"total_democrats = (df['party'] == 'D').value_counts()\ntotal_democrats\n\ntotal_republicans =(df['party'] == 'R').value_counts()\ntotal_republicans",
"3) What is the average age for people that have worked in congress (both Senators and Representatives)",
"df['age'].describe()",
"4) What is the average age of Senators that have worked in the Senate? And for Representatives in the house?",
"df.groupby(\"chamber\")['age'].describe()",
"5) How many in total from each state?",
"df['state'].value_counts()",
"5) How many Senators in total from each state? How many Representatives?",
"df.groupby(\"state\")['chamber'].value_counts()",
"6) How many terms are recorded in this dataset?",
"df['termstart'].describe() #here we would look at unique.",
"7) Who has been the oldest serving in the US, a senator or a representative? How old was he/she?",
"df.sort_values(by='age').tail(1) #A senator!",
"8) Who have been the oldest and youngest serving Representative in the US?",
"representative = df[df['chamber'] == 'house']\nrepresentative.sort_values(by='age').tail(1)\n\nrepresentative.sort_values(by='age').head(2)",
"9) Who have been the oldest and youngest serving Senator in the US?",
"senator = df[df['chamber'] == 'senate']\nsenator.sort_values(by='age')\n\nsenator.sort_values(by='age').head(2)",
"10) Who has served for more periods (in this question I am not paying attention to the period length)?",
"# Store a new column\ndf['complete_name'] = df['firstname']+ \" \"+ df['middlename'] + \" \"+df['lastname']\ndf.head()\n\nperiod_count = df.groupby('complete_name')['termstart'].value_counts().sort_values(ascending=False)\npd.DataFrame(period_count)\n\n\n#With the help of Stephan we figured out that term-start is every 2 years \n#(so this is not giving us info about how many terms has each legislator served)\n",
"We double-checked it by printing info from Thurmond, whom was part of the senate but appeared as if he had\nserved 26 periods of 6 years each (26*6 IMPOSIBLE!)\nThurmond = df[df['lastname'] == 'Thurmond']\nThurmond\n11) Who has served for more years?\nSenators = 6-year terms BUT the data we have is for 2-year terms\nRepresentatives = 2-year terms",
"terms_served_by_senators= senator.groupby('complete_name')['bioguide'].value_counts()\nyears= terms_served_by_senators * 2\ntotal_years_served = years.sort_values(ascending=False)\n\npd.DataFrame(total_years_served)\n\n\n\n\n\nterms_served_by_representative= representative.groupby(\"complete_name\")['bioguide'].value_counts()\nyears= terms_served_by_representative * 2\ntotal_years_served = years.sort_values(ascending=False)\n\npd.DataFrame(total_years_served)\n\n\n",
"12)The most popular name in congress is....",
"df['firstname'].value_counts()\n\n#this might be counting the same person many times but still we can get an idea of what names are more popular",
"Make three charts with your dataset\n1) Distribution of age",
"plt.style.use(\"ggplot\")\ndf['age'].hist(bins=15, xlabelsize=12, ylabelsize=12, color=['y'])\n\ndf.head(20).sort_values(by='age',ascending=True).plot(kind='barh', x=['complete_name'], y='age', color=\"y\")\n\ndf.plot.scatter(x='congress', y='age');\n\ndf.plot.hexbin(x='age', y='congress', gridsize=25, legend=True)"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
phoebe-project/phoebe2-docs | development/tutorials/datasets_advanced.ipynb | gpl-3.0 | [
"Advanced: Datasets\nDatasets tell PHOEBE how and at what times to compute the model. In some cases these will include the actual observational data, and in other cases may only include the times at which you want to compute a synthetic model.\nIf you're not already familiar with the basic functionality of adding datasets, make sure to read the datasets tutorial first.\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).",
"#!pip install -I \"phoebe>=2.4,<2.5\"",
"As always, let's do imports and initialize a logger and a new Bundle.",
"import phoebe\nfrom phoebe import u # units\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nlogger = phoebe.logger()\n\nb = phoebe.default_binary()",
"Passband Options\nPassband options follow the exact same rules as dataset columns.\nSending a single value to the argument will apply it to each component in which the time array is attached (either based on the list of components sent or the defaults from the dataset method).\nNote that for light curves, in particular, this rule gets slightly bent. The dataset arrays for light curves are attached at the system level, always. The passband-dependent options, however, exist for each star in the system. So, that value will get passed to each star if the component is not explicitly provided.",
"b.add_dataset('lc', \n times=[0,1],\n dataset='lc01', \n overwrite=True)\n\nprint(b.get_parameter(qualifier='times', dataset='lc01'))\n\nprint(b.filter(qualifier='ld_mode', dataset='lc01'))",
"As you might expect, if you want to pass different values to different components, simply provide them in a dictionary.",
"b.add_dataset('lc', \n times=[0,1], \n ld_mode='manual',\n ld_func={'primary': 'logarithmic', 'secondary': 'quadratic'}, \n dataset='lc01',\n overwrite=True)\n\nprint(b.filter(qualifier='ld_func', dataset='lc01'))",
"Note here that we didn't explicitly override the defaults for '_default', so they used the phoebe-wide defaults. If you wanted to set a value for the ld_coeffs of any star added in the future, you would have to provide a value for '_default' in the dictionary as well.",
"print(b.filter(qualifier'ld_func@lc01', check_default=False))",
"This syntax may seem a bit bulky - but alternatively you can add the dataset without providing values and then change the values individually using dictionary access or set_value.\nAdding a Dataset from a File\nManually from Arrays\nFor now, the only way to load data from a file is to do the parsing externally and pass the arrays on (as in the previous section).\nHere we'll load times, fluxes, and errors of a light curve from an external file and then pass them on to a newly created dataset. Since this is a light curve, it will automatically know that you want the summed light from all copmonents in the hierarchy.",
"times, fluxes, sigmas = np.loadtxt('test.lc.in', unpack=True)\nb.add_dataset('lc', \n times=times, \n fluxes=fluxes, \n sigmas=sigmas, \n dataset='lc01',\n overwrite=True)",
"Enabling and Disabling Datasets\nSee the Compute Tutorial\nDealing with Phases\nDatasets will no longer accept phases. It is the user's responsibility to convert\nphased data into times given an ephemeris. But it's still useful to be able to\nconvert times to phases (and vice versa) and be able to plot in phase.\nThose conversions can be handled via b.get_ephemeris, b.to_phase, and b.to_time.",
"print(b.get_ephemeris())\n\nprint(b.to_phase(0.0))\n\nprint(b.to_time(-0.25))",
"All of these by default use the period in the top-level of the current hierarchy,\nbut accept a component keyword argument if you'd like the ephemeris of an\ninner-orbit or the rotational ephemeris of a star in the system.\nWe'll see how plotting works later, but if you manually wanted to plot the dataset\nwith phases, all you'd need to do is:",
"print(b.to_phase(b.get_value(qualifier='times')))",
"or",
"print(b.to_phase('times@lc01'))",
"Although it isn't possible to attach data in phase-space, it is possible to tell PHOEBE at which phases to compute the model by setting compute_phases. Note that this overrides the value of times when the model is computed.",
"b.add_dataset('lc',\n compute_phases=np.linspace(0,1,11),\n dataset='lc01',\n overwrite=True)",
"The usage of compute_phases (as well as compute_times) will be discussed in further detail in the compute tutorial and the advanced: compute times & phases tutorial. \nNote also that although you can pass compute_phases directly to add_dataset, if you do not, it will be constrained by compute_times by default. In this case, you would need to flip the constraint before setting compute_phases. See the constraints tutorial and the flip_constraint API docs for more details on flipping constraints.",
"b.add_dataset('lc',\n times=[0],\n dataset='lc01', \n overwrite=True)\n\nprint(b['compute_phases@lc01'])\n\nb.flip_constraint('compute_phases', dataset='lc01', solve_for='compute_times')\n\nb.set_value('compute_phases', dataset='lc01', value=np.linspace(0,1,101))",
"Removing Datasets\nRemoving a dataset will remove matching parameters in either the dataset, model, or constraint contexts. This action is permanent and not undo-able via Undo/Redo.",
"print(b.datasets)",
"The simplest way to remove a dataset is by its dataset tag:",
"b.remove_dataset('lc01')\n\nprint(b.datasets)",
"But remove_dataset also takes any other tag(s) that could be sent to filter.",
"b.remove_dataset(kind='rv')\n\nprint(b.datasets)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
qinwf-nuan/keras-js | notebooks/layers/wrappers/TimeDistributed.ipynb | mit | [
"import numpy as np\nfrom keras.models import Model\nfrom keras.layers import Input\nfrom keras.layers.core import Dense\nfrom keras.layers.convolutional import Conv2D\nfrom keras.layers.wrappers import TimeDistributed\nfrom keras import backend as K\nimport json\nfrom collections import OrderedDict\n\ndef format_decimal(arr, places=6):\n return [round(x * 10**places) / 10**places for x in arr]\n\nDATA = OrderedDict()",
"TimeDistributed\n[wrappers.TimeDistributed.0] wrap a Dense layer with units 4 (input: 3 x 6)",
"data_in_shape = (3, 6)\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = TimeDistributed(Dense(4))(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nweights = []\nfor i, w in enumerate(model.get_weights()):\n np.random.seed(4000 + i)\n weights.append(2 * np.random.random(w.shape) - 1)\nmodel.set_weights(weights)\nweight_names = ['W', 'b']\nfor w_i, w_name in enumerate(weight_names):\n print('{} shape:'.format(w_name), weights[w_i].shape)\n print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist()))\n\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['wrappers.TimeDistributed.0'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[wrappers.TimeDistributed.1] wrap a Conv2D layer with 6 3x3 filters (input: 5x4x4x2)",
"data_in_shape = (5, 4, 4, 2)\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = TimeDistributed(Conv2D(6, (3,3), data_format='channels_last'))(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nweights = []\nfor i, w in enumerate(model.get_weights()):\n np.random.seed(4010 + i)\n weights.append(2 * np.random.random(w.shape) - 1)\nmodel.set_weights(weights)\nweight_names = ['W', 'b']\nfor w_i, w_name in enumerate(weight_names):\n print('{} shape:'.format(w_name), weights[w_i].shape)\n print('{}:'.format(w_name), format_decimal(weights[w_i].ravel().tolist()))\n\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['wrappers.TimeDistributed.1'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"export for Keras.js tests",
"print(json.dumps(DATA))"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tuanavu/coursera-university-of-washington | machine_learning/3_classification/assigment/week7/module-10-online-learning-assignment-graphlab.ipynb | mit | [
"Training Logistic Regression via Stochastic Gradient Ascent\nThe goal of this notebook is to implement a logistic regression classifier using stochastic gradient ascent. You will:\n\nExtract features from Amazon product reviews.\nConvert an SFrame into a NumPy array.\nWrite a function to compute the derivative of log likelihood function with respect to a single coefficient.\nImplement stochastic gradient ascent.\nCompare convergence of stochastic gradient ascent with that of batch gradient ascent.\n\nFire up GraphLab Create\nMake sure you have the latest version of GraphLab Create. Upgrade by\npip install graphlab-create --upgrade\nSee this page for detailed instructions on upgrading.",
"from __future__ import division\nimport graphlab",
"Load and process review dataset\nFor this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.",
"products = graphlab.SFrame('amazon_baby_subset.gl/')",
"Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations:\n\nRemove punctuation using Python's built-in string manipulation functionality.\nCompute word counts (only for the important_words)\n\nRefer to Module 3 assignment for more details.",
"import json\nwith open('important_words.json', 'r') as f: \n important_words = json.load(f)\nimportant_words = [str(s) for s in important_words]\n\n# Remote punctuation\ndef remove_punctuation(text):\n import string\n return text.translate(None, string.punctuation) \n\nproducts['review_clean'] = products['review'].apply(remove_punctuation)\n\n# Split out the words into individual columns\nfor word in important_words:\n products[word] = products['review_clean'].apply(lambda s : s.split().count(word))",
"The SFrame products now contains one column for each of the 193 important_words.",
"products",
"Split data into training and validation sets\nWe will now split the data into a 90-10 split where 90% is in the training set and 10% is in the validation set. We use seed=1 so that everyone gets the same result.",
"train_data, validation_data = products.random_split(.9, seed=1)\n\nprint 'Training set : %d data points' % len(train_data)\nprint 'Validation set: %d data points' % len(validation_data)",
"Convert SFrame to NumPy array\nJust like in the earlier assignments, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. \nNote: The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.",
"import numpy as np\n\ndef get_numpy_data(data_sframe, features, label):\n data_sframe['intercept'] = 1\n features = ['intercept'] + features\n features_sframe = data_sframe[features]\n feature_matrix = features_sframe.to_numpy()\n label_sarray = data_sframe[label]\n label_array = label_sarray.to_numpy()\n return(feature_matrix, label_array)",
"Note that we convert both the training and validation sets into NumPy arrays.\nWarning: This may take a few minutes.",
"feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')\nfeature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment') ",
"Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)\nIt has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:\narrays = np.load('module-10-assignment-numpy-arrays.npz')\nfeature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']\nfeature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']\n Quiz question: In Module 3 assignment, there were 194 features (an intercept + one feature for each of the 193 important words). In this assignment, we will use stochastic gradient ascent to train the classifier using logistic regression. How does the changing the solver to stochastic gradient ascent affect the number of features?\nBuilding on logistic regression\nLet us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:\n$$\nP(y_i = +1 | \\mathbf{x}_i,\\mathbf{w}) = \\frac{1}{1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))},\n$$\nwhere the feature vector $h(\\mathbf{x}_i)$ is given by the word counts of important_words in the review $\\mathbf{x}_i$. \nWe will use the same code as in Module 3 assignment to make probability predictions, since this part is not affected by using stochastic gradient ascent as a solver. Only the way in which the coefficients are learned is affected by using stochastic gradient ascent as a solver.",
"'''\nproduces probablistic estimate for P(y_i = +1 | x_i, w).\nestimate ranges between 0 and 1.\n'''\ndef predict_probability(feature_matrix, coefficients):\n # Take dot product of feature_matrix and coefficients \n score = np.dot(feature_matrix, coefficients)\n \n # Compute P(y_i = +1 | x_i, w) using the link function\n predictions = 1. / (1.+np.exp(-score)) \n return predictions",
"Derivative of log likelihood with respect to a single coefficient\nLet us now work on making minor changes to how the derivative computation is performed for logistic regression.\nRecall from the lectures and Module 3 assignment that for logistic regression, the derivative of log likelihood with respect to a single coefficient is as follows:\n$$\n\\frac{\\partial\\ell}{\\partial w_j} = \\sum_{i=1}^N h_j(\\mathbf{x}_i)\\left(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w})\\right)\n$$\nIn Module 3 assignment, we wrote a function to compute the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts the following two parameters:\n * errors vector containing $(\\mathbf{1}[y_i = +1] - P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w}))$ for all $i$\n * feature vector containing $h_j(\\mathbf{x}_i)$ for all $i$\nComplete the following code block:",
"def feature_derivative(errors, feature): \n \n # Compute the dot product of errors and feature\n ## YOUR CODE HERE\n derivative = np.dot(errors, feature)\n\n return derivative",
"Note. We are not using regularization in this assignment, but, as discussed in the optional video, stochastic gradient can also be used for regularized logistic regression.\nTo verify the correctness of the gradient computation, we provide a function for computing average log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability).\nTo track the performance of stochastic gradient ascent, we provide a function for computing average log likelihood. \n$$\\ell\\ell_A(\\mathbf{w}) = \\color{red}{\\frac{1}{N}} \\sum_{i=1}^N \\Big( (\\mathbf{1}[y_i = +1] - 1)\\mathbf{w}^T h(\\mathbf{x}_i) - \\ln\\left(1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))\\right) \\Big) $$\nNote that we made one tiny modification to the log likelihood function (called compute_log_likelihood) in our earlier assignments. We added a $\\color{red}{1/N}$ term which averages the log likelihood accross all data points. The $\\color{red}{1/N}$ term makes it easier for us to compare stochastic gradient ascent with batch gradient ascent. We will use this function to generate plots that are similar to those you saw in the lecture.",
"def compute_avg_log_likelihood(feature_matrix, sentiment, coefficients):\n \n indicator = (sentiment==+1)\n scores = np.dot(feature_matrix, coefficients)\n logexp = np.log(1. + np.exp(-scores))\n \n # Simple check to prevent overflow\n mask = np.isinf(logexp)\n logexp[mask] = -scores[mask]\n \n lp = np.sum((indicator-1)*scores - logexp)/len(feature_matrix)\n \n return lp",
"Quiz Question: Recall from the lecture and the earlier assignment, the log likelihood (without the averaging term) is given by \n$$\\ell\\ell(\\mathbf{w}) = \\sum_{i=1}^N \\Big( (\\mathbf{1}[y_i = +1] - 1)\\mathbf{w}^T h(\\mathbf{x}_i) - \\ln\\left(1 + \\exp(-\\mathbf{w}^T h(\\mathbf{x}_i))\\right) \\Big) $$\nHow are the functions $\\ell\\ell(\\mathbf{w})$ and $\\ell\\ell_A(\\mathbf{w})$ related?\nModifying the derivative for stochastic gradient ascent\nRecall from the lecture that the gradient for a single data point $\\color{red}{\\mathbf{x}_i}$ can be computed using the following formula:\n$$\n\\frac{\\partial\\ell_{\\color{red}{i}}(\\mathbf{w})}{\\partial w_j} = h_j(\\color{red}{\\mathbf{x}i})\\left(\\mathbf{1}[y\\color{red}{i} = +1] - P(y_\\color{red}{i} = +1 | \\color{red}{\\mathbf{x}_i}, \\mathbf{w})\\right)\n$$\n Computing the gradient for a single data point\nDo we really need to re-write all our code to modify $\\partial\\ell(\\mathbf{w})/\\partial w_j$ to $\\partial\\ell_{\\color{red}{i}}(\\mathbf{w})/{\\partial w_j}$? \nThankfully No!. Using NumPy, we access $\\mathbf{x}i$ in the training data using feature_matrix_train[i:i+1,:]\nand $y_i$ in the training data using sentiment_train[i:i+1]. We can compute $\\partial\\ell{\\color{red}{i}}(\\mathbf{w})/\\partial w_j$ by re-using all the code written in feature_derivative and predict_probability.\nWe compute $\\partial\\ell_{\\color{red}{i}}(\\mathbf{w})/\\partial w_j$ using the following steps:\n* First, compute $P(y_i = +1 | \\mathbf{x}_i, \\mathbf{w})$ using the predict_probability function with feature_matrix_train[i:i+1,:] as the first parameter.\n* Next, compute $\\mathbf{1}[y_i = +1]$ using sentiment_train[i:i+1].\n* Finally, call the feature_derivative function with feature_matrix_train[i:i+1, j] as one of the parameters. \nLet us follow these steps for j = 1 and i = 10:",
"j = 1 # Feature number\ni = 10 # Data point number\ncoefficients = np.zeros(194) # A point w at which we are computing the gradient.\n\npredictions = predict_probability(feature_matrix_train[i:i+1,:], coefficients)\nindicator = (sentiment_train[i:i+1]==+1)\n\nerrors = indicator - predictions \ngradient_single_data_point = feature_derivative(errors, feature_matrix_train[i:i+1,j])\nprint \"Gradient single data point: %s\" % gradient_single_data_point\nprint \" --> Should print 0.0\"",
"Quiz Question: The code block above computed $\\partial\\ell_{\\color{red}{i}}(\\mathbf{w})/{\\partial w_j}$ for j = 1 and i = 10. Is $\\partial\\ell_{\\color{red}{i}}(\\mathbf{w})/{\\partial w_j}$ a scalar or a 194-dimensional vector?\nModifying the derivative for using a batch of data points\nStochastic gradient estimates the ascent direction using 1 data point, while gradient uses $N$ data points to decide how to update the the parameters. In an optional video, we discussed the details of a simple change that allows us to use a mini-batch of $B \\leq N$ data points to estimate the ascent direction. This simple approach is faster than regular gradient but less noisy than stochastic gradient that uses only 1 data point. Although we encorage you to watch the optional video on the topic to better understand why mini-batches help stochastic gradient, in this assignment, we will simply use this technique, since the approach is very simple and will improve your results.\nGiven a mini-batch (or a set of data points) $\\mathbf{x}{i}, \\mathbf{x}{i+1} \\ldots \\mathbf{x}{i+B}$, the gradient function for this mini-batch of data points is given by:\n$$\n\\color{red}{\\sum{s = i}^{i+B}} \\frac{\\partial\\ell_{s}}{\\partial w_j} = \\color{red}{\\sum_{s = i}^{i + B}} h_j(\\mathbf{x}_s)\\left(\\mathbf{1}[y_s = +1] - P(y_s = +1 | \\mathbf{x}_s, \\mathbf{w})\\right)\n$$\n Computing the gradient for a \"mini-batch\" of data points\nUsing NumPy, we access the points $\\mathbf{x}i, \\mathbf{x}{i+1} \\ldots \\mathbf{x}_{i+B}$ in the training data using feature_matrix_train[i:i+B,:]\nand $y_i$ in the training data using sentiment_train[i:i+B]. \nWe can compute $\\color{red}{\\sum_{s = i}^{i+B}} \\partial\\ell_{s}/\\partial w_j$ easily as follows:",
"j = 1 # Feature number\ni = 10 # Data point start\nB = 10 # Mini-batch size\ncoefficients = np.zeros(194) # A point w at which we are computing the gradient.\n\npredictions = predict_probability(feature_matrix_train[i:i+B,:], coefficients)\nindicator = (sentiment_train[i:i+B]==+1)\n\nerrors = indicator - predictions \ngradient_mini_batch = feature_derivative(errors, feature_matrix_train[i:i+B,j])\nprint \"Gradient mini-batch data points: %s\" % gradient_mini_batch\nprint \" --> Should print 1.0\"",
"Quiz Question: The code block above computed \n$\\color{red}{\\sum_{s = i}^{i+B}}\\partial\\ell_{s}(\\mathbf{w})/{\\partial w_j}$ \nfor j = 10, i = 10, and B = 10. Is this a scalar or a 194-dimensional vector?\n Quiz Question: For what value of B is the term\n$\\color{red}{\\sum_{s = 1}^{B}}\\partial\\ell_{s}(\\mathbf{w})/\\partial w_j$\nthe same as the full gradient\n$\\partial\\ell(\\mathbf{w})/{\\partial w_j}$?",
"print len(sentiment_train)",
"Averaging the gradient across a batch\nIt is a common practice to normalize the gradient update rule by the batch size B:\n$$\n\\frac{\\partial\\ell_{\\color{red}{A}}(\\mathbf{w})}{\\partial w_j} \\approx \\color{red}{\\frac{1}{B}} {\\sum_{s = i}^{i + B}} h_j(\\mathbf{x}_s)\\left(\\mathbf{1}[y_s = +1] - P(y_s = +1 | \\mathbf{x}_s, \\mathbf{w})\\right)\n$$\nIn other words, we update the coefficients using the average gradient over data points (instead of using a summation). By using the average gradient, we ensure that the magnitude of the gradient is approximately the same for all batch sizes. This way, we can more easily compare various batch sizes of stochastic gradient ascent (including a batch size of all the data points), and study the effect of batch size on the algorithm as well as the choice of step size.\nImplementing stochastic gradient ascent\nNow we are ready to implement our own logistic regression with stochastic gradient ascent. Complete the following function to fit a logistic regression model using gradient ascent:",
"from math import sqrt\ndef logistic_regression_SG(feature_matrix, sentiment, initial_coefficients, step_size, batch_size, max_iter):\n log_likelihood_all = []\n \n # make sure it's a numpy array\n coefficients = np.array(initial_coefficients)\n # set seed=1 to produce consistent results\n np.random.seed(seed=1)\n # Shuffle the data before starting\n permutation = np.random.permutation(len(feature_matrix))\n feature_matrix = feature_matrix[permutation,:]\n sentiment = sentiment[permutation]\n \n i = 0 # index of current batch\n # Do a linear scan over data\n for itr in xrange(max_iter):\n # Predict P(y_i = +1|x_i,w) using your predict_probability() function\n # Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,:]\n ### YOUR CODE HERE\n predictions = predict_probability(feature_matrix[i:i+batch_size,:], coefficients)\n \n # Compute indicator value for (y_i = +1)\n # Make sure to slice the i-th entry with [i:i+batch_size]\n ### YOUR CODE HERE\n indicator = (sentiment[i:i+batch_size]==+1)\n \n # Compute the errors as indicator - predictions\n errors = indicator - predictions\n for j in xrange(len(coefficients)): # loop over each coefficient\n # Recall that feature_matrix[:,j] is the feature column associated with coefficients[j]\n # Compute the derivative for coefficients[j] and save it to derivative.\n # Make sure to slice the i-th row of feature_matrix with [i:i+batch_size,j]\n ### YOUR CODE HERE\n derivative = feature_derivative(errors, feature_matrix[i:i+batch_size,j])\n \n # compute the product of the step size, the derivative, and the **normalization constant** (1./batch_size)\n ### YOUR CODE HERE\n coefficients[j] += (1./batch_size)*(step_size * derivative)\n \n # Checking whether log likelihood is increasing\n # Print the log likelihood over the *current batch*\n lp = compute_avg_log_likelihood(feature_matrix[i:i+batch_size,:], sentiment[i:i+batch_size],\n coefficients)\n log_likelihood_all.append(lp)\n if itr <= 15 or (itr <= 1000 and itr % 100 == 0) or (itr <= 10000 and itr % 1000 == 0) \\\n or itr % 10000 == 0 or itr == max_iter-1:\n data_size = len(feature_matrix)\n print 'Iteration %*d: Average log likelihood (of data points in batch [%0*d:%0*d]) = %.8f' % \\\n (int(np.ceil(np.log10(max_iter))), itr, \\\n int(np.ceil(np.log10(data_size))), i, \\\n int(np.ceil(np.log10(data_size))), i+batch_size, lp)\n \n # if we made a complete pass over data, shuffle and restart\n i += batch_size\n if i+batch_size > len(feature_matrix):\n permutation = np.random.permutation(len(feature_matrix))\n feature_matrix = feature_matrix[permutation,:]\n sentiment = sentiment[permutation]\n i = 0\n \n # We return the list of log likelihoods for plotting purposes.\n return coefficients, log_likelihood_all",
"Note. In practice, the final set of coefficients is rarely used; it is better to use the average of the last K sets of coefficients instead, where K should be adjusted depending on how fast the log likelihood oscillates around the optimum.\nCheckpoint\nThe following cell tests your stochastic gradient ascent function using a toy dataset consisting of two data points. If the test does not pass, make sure you are normalizing the gradient update rule correctly.",
"sample_feature_matrix = np.array([[1.,2.,-1.], [1.,0.,1.]])\nsample_sentiment = np.array([+1, -1])\n\ncoefficients, log_likelihood = logistic_regression_SG(sample_feature_matrix, sample_sentiment, np.zeros(3),\n step_size=1., batch_size=2, max_iter=2)\nprint '-------------------------------------------------------------------------------------'\nprint 'Coefficients learned :', coefficients\nprint 'Average log likelihood per-iteration :', log_likelihood\nif np.allclose(coefficients, np.array([-0.09755757, 0.68242552, -0.7799831]), atol=1e-3)\\\n and np.allclose(log_likelihood, np.array([-0.33774513108142956, -0.2345530939410341])):\n # pass if elements match within 1e-3\n print '-------------------------------------------------------------------------------------'\n print 'Test passed!'\nelse:\n print '-------------------------------------------------------------------------------------'\n print 'Test failed'",
"Compare convergence behavior of stochastic gradient ascent\nFor the remainder of the assignment, we will compare stochastic gradient ascent against batch gradient ascent. For this, we need a reference implementation of batch gradient ascent. But do we need to implement this from scratch?\nQuiz Question: For what value of batch size B above is the stochastic gradient ascent function logistic_regression_SG act as a standard gradient ascent algorithm?\nRunning gradient ascent using the stochastic gradient ascent implementation\nInstead of implementing batch gradient ascent separately, we save time by re-using the stochastic gradient ascent function we just wrote — to perform gradient ascent, it suffices to set batch_size to the number of data points in the training data. Yes, we did answer above the quiz question for you, but that is an important point to remember in the future :)\nSmall Caveat. The batch gradient ascent implementation here is slightly different than the one in the earlier assignments, as we now normalize the gradient update rule.\nWe now run stochastic gradient ascent over the feature_matrix_train for 10 iterations using:\n* initial_coefficients = np.zeros(194)\n* step_size = 5e-1\n* batch_size = 1\n* max_iter = 10",
"coefficients, log_likelihood = logistic_regression_SG(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-1, batch_size=1, max_iter=10)",
"Quiz Question. When you set batch_size = 1, as each iteration passes, how does the average log likelihood in the batch change?\n* Increases\n* Decreases\n* Fluctuates \nNow run batch gradient ascent over the feature_matrix_train for 200 iterations using:\n* initial_coefficients = np.zeros(194)\n* step_size = 5e-1\n* batch_size = len(feature_matrix_train)\n* max_iter = 200",
"# YOUR CODE HERE\ncoefficients_batch, log_likelihood_batch = logistic_regression_SG(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=5e-1, \n batch_size = len(feature_matrix_train), \n max_iter=200)",
"Quiz Question. When you set batch_size = len(train_data), as each iteration passes, how does the average log likelihood in the batch change?\n* Increases \n* Decreases\n* Fluctuates \nMake \"passes\" over the dataset\nTo make a fair comparison betweeen stochastic gradient ascent and batch gradient ascent, we measure the average log likelihood as a function of the number of passes (defined as follows):\n$$\n[\\text{# of passes}] = \\frac{[\\text{# of data points touched so far}]}{[\\text{size of dataset}]}\n$$\nQuiz Question Suppose that we run stochastic gradient ascent with a batch size of 100. How many gradient updates are performed at the end of two passes over a dataset consisting of 50000 data points?",
"# number of passes is number to complete the whole dataset\n# For each batch size, we update 1 gradient, so\n\n2*(50000/100)",
"Log likelihood plots for stochastic gradient ascent\nWith the terminology in mind, let us run stochastic gradient ascent for 10 passes. We will use\n* step_size=1e-1\n* batch_size=100\n* initial_coefficients to all zeros.",
"step_size = 1e-1\nbatch_size = 100\nnum_passes = 10\nnum_iterations = num_passes * int(len(feature_matrix_train)/batch_size)\n\ncoefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=1e-1, batch_size=100, max_iter=num_iterations)",
"We provide you with a utility function to plot the average log likelihood as a function of the number of passes.",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\ndef make_plot(log_likelihood_all, len_data, batch_size, smoothing_window=1, label=''):\n plt.rcParams.update({'figure.figsize': (9,5)})\n log_likelihood_all_ma = np.convolve(np.array(log_likelihood_all), \\\n np.ones((smoothing_window,))/smoothing_window, mode='valid')\n plt.plot(np.array(range(smoothing_window-1, len(log_likelihood_all)))*float(batch_size)/len_data,\n log_likelihood_all_ma, linewidth=4.0, label=label)\n plt.rcParams.update({'font.size': 16})\n plt.tight_layout()\n plt.xlabel('# of passes over data')\n plt.ylabel('Average log likelihood per data point')\n plt.legend(loc='lower right', prop={'size':14})\n\nmake_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,\n label='stochastic gradient, step_size=1e-1')",
"Smoothing the stochastic gradient ascent curve\nThe plotted line oscillates so much that it is hard to see whether the log likelihood is improving. In our plot, we apply a simple smoothing operation using the parameter smoothing_window. The smoothing is simply a moving average of log likelihood over the last smoothing_window \"iterations\" of stochastic gradient ascent.",
"make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,\n smoothing_window=30, label='stochastic gradient, step_size=1e-1')",
"Checkpoint: The above plot should look smoother than the previous plot. Play around with smoothing_window. As you increase it, you should see a smoother plot.\nStochastic gradient ascent vs batch gradient ascent\nTo compare convergence rates for stochastic gradient ascent with batch gradient ascent, we call make_plot() multiple times in the same cell.\nWe are comparing:\n* stochastic gradient ascent: step_size = 0.1, batch_size=100\n* batch gradient ascent: step_size = 0.5, batch_size=len(feature_matrix_train)\nWrite code to run stochastic gradient ascent for 200 passes using:\n* step_size=1e-1\n* batch_size=100\n* initial_coefficients to all zeros.",
"step_size = 1e-1\nbatch_size = 100\nnum_passes = 200\nnum_iterations = num_passes * int(len(feature_matrix_train)/batch_size)\n\n## YOUR CODE HERE\ncoefficients_sgd, log_likelihood_sgd = logistic_regression_SG(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=step_size, batch_size=batch_size, max_iter=num_iterations)",
"We compare the convergence of stochastic gradient ascent and batch gradient ascent in the following cell. Note that we apply smoothing with smoothing_window=30.",
"make_plot(log_likelihood_sgd, len_data=len(feature_matrix_train), batch_size=100,\n smoothing_window=30, label='stochastic, step_size=1e-1')\nmake_plot(log_likelihood_batch, len_data=len(feature_matrix_train), batch_size=len(feature_matrix_train),\n smoothing_window=1, label='batch, step_size=5e-1')",
"Quiz Question: In the figure above, how many passes does batch gradient ascent need to achieve a similar log likelihood as stochastic gradient ascent? \n\nIt's always better\n10 passes\n20 passes\n150 passes or more\n\nExplore the effects of step sizes on stochastic gradient ascent\nIn previous sections, we chose step sizes for you. In practice, it helps to know how to choose good step sizes yourself.\nTo start, we explore a wide range of step sizes that are equally spaced in the log space. Run stochastic gradient ascent with step_size set to 1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, and 1e2. Use the following set of parameters:\n* initial_coefficients=np.zeros(194)\n* batch_size=100\n* max_iter initialized so as to run 10 passes over the data.",
"batch_size = 100\nnum_passes = 10\nnum_iterations = num_passes * int(len(feature_matrix_train)/batch_size)\n\ncoefficients_sgd = {}\nlog_likelihood_sgd = {}\nfor step_size in np.logspace(-4, 2, num=7):\n coefficients_sgd[step_size], log_likelihood_sgd[step_size] = logistic_regression_SG(feature_matrix_train, sentiment_train,\n initial_coefficients=np.zeros(194),\n step_size=step_size, batch_size=batch_size, max_iter=num_iterations)",
"Plotting the log likelihood as a function of passes for each step size\nNow, we will plot the change in log likelihood using the make_plot for each of the following values of step_size:\n\nstep_size = 1e-4\nstep_size = 1e-3\nstep_size = 1e-2\nstep_size = 1e-1\nstep_size = 1e0\nstep_size = 1e1\nstep_size = 1e2\n\nFor consistency, we again apply smoothing_window=30.",
"for step_size in np.logspace(-4, 2, num=7):\n make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,\n smoothing_window=30, label='step_size=%.1e'%step_size)",
"Now, let us remove the step size step_size = 1e2 and plot the rest of the curves.",
"for step_size in np.logspace(-4, 2, num=7)[0:6]:\n make_plot(log_likelihood_sgd[step_size], len_data=len(train_data), batch_size=100,\n smoothing_window=30, label='step_size=%.1e'%step_size)",
"Quiz Question: Which of the following is the worst step size? Pick the step size that results in the lowest log likelihood in the end.\n1. 1e-2\n2. 1e-1\n3. 1e0\n4. 1e1\n5. 1e2\nQuiz Question: Which of the following is the best step size? Pick the step size that results in the highest log likelihood in the end.\n1. 1e-4\n2. 1e-2\n3. 1e0\n4. 1e1\n5. 1e2"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jegibbs/phys202-2015-work | assignments/assignment11/OptimizationEx01.ipynb | mit | [
"Optimization Exercise 1\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.optimize as opt",
"Hat potential\nThe following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the \"hat potential\":\n$$ V(x) = -a x^2 + b x^4 $$\nWrite a function hat(x,a,b) that returns the value of this function:",
"def hat(x,a,b):\n v = -a*(x**2) + b*(x**4)\n return v\n\nassert hat(0.0, 1.0, 1.0)==0.0\nassert hat(0.0, 1.0, 1.0)==0.0\nassert hat(1.0, 10.0, 1.0)==-9.0",
"Plot this function over the range $x\\in\\left[-3,3\\right]$ with $b=1.0$ and $a=5.0$:",
"a = 5.0\nb = 1.0\n\nx = np.linspace(-3.0, 3.0)\nplt.plot(x, hat(x,a,b))\nplt.plot(-1.5811388304396232, hat(-1.5811388304396232,a,b), 'ro')\nplt.plot(1.58113882, hat(1.58113882,a,b), 'ro')\nplt.xlabel('X')\nplt.ylabel('V(x)')\nplt.title('Hat Potential')\nplt.grid(True)\nplt.box(False);\n\nassert True # leave this to grade the plot",
"Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.\n\nUse scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.\nPrint the x values of the minima.\nPlot the function as a blue line.\nOn the same axes, show the minima as red circles.\nCustomize your visualization to make it beatiful and effective.",
"opt.minimize(hat, -3, args=(a,b), method = \"Powell\")\n\nopt.minimize(hat, -3, args=(a,b))\n\nassert True # leave this for grading the plot",
"To check your numerical results, find the locations of the minima analytically. Show and describe the steps in your derivation using LaTeX equations. Evaluate the location of the minima using the above parameters.\n$$ V(x) = -a x^2 + b x^4 $$\n$$ V'(x) = -2a x + 4b x^3 $$\n$$ 0 = -2(5.0) x + 4(1.0) x^3 $$\n$$ 0 = -10 x + 4 x^3 $$\n$$ 10x = 4 x^3 $$\n$$ \\frac{5}{2} = x^2 $$\n$$ \\sqrt{\\frac{5}{2}} = x $$\n$$ x = +- 1.5811388301 $$\nI simply set the derivative of the function equal to zero, plugged in the values of a and b, and solved for x."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
chrinide/optunity | notebooks/basic-cross-validation.ipynb | bsd-3-clause | [
"Basic: cross-validation\nThis notebook explores the main elements of Optunity's cross-validation facilities, including:\n\nstandard cross-validation\nusing strata and clusters while constructing folds\nusing different aggregators\n\nWe recommend perusing the <a href=\"http://docs.optunity.net/user/cross_validation.html\">related documentation</a> for more details.\nNested cross-validation is available as a separate notebook.",
"import optunity\nimport optunity.cross_validation",
"We start by generating some toy data containing 6 instances which we will partition into folds.",
"data = list(range(6))\nlabels = [True] * 3 + [False] * 3",
"Standard cross-validation <a id=standard></a>\nEach function to be decorated with cross-validation functionality must accept the following arguments:\n- x_train: training data\n- x_test: test data\n- y_train: training labels (required only when y is specified in the cross-validation decorator)\n- y_test: test labels (required only when y is specified in the cross-validation decorator)\nThese arguments will be set implicitly by the cross-validation decorator to match the right folds. Any remaining arguments to the decorated function remain as free parameters that must be set later on.\nLets start with the basics and look at Optunity's cross-validation in action. We use an objective function that simply prints out the train and test data in every split to see what's going on.",
"def f(x_train, y_train, x_test, y_test):\n print(\"\")\n print(\"train data:\\t\" + str(x_train) + \"\\t train labels:\\t\" + str(y_train))\n print(\"test data:\\t\" + str(x_test) + \"\\t test labels:\\t\" + str(y_test))\n return 0.0",
"We start with 2 folds, which leads to equally sized train and test partitions.",
"f_2folds = optunity.cross_validated(x=data, y=labels, num_folds=2)(f)\nprint(\"using 2 folds\")\nf_2folds()\n\n# f_2folds as defined above would typically be written using decorator syntax as follows\n# we don't do that in these examples so we can reuse the toy objective function\n\n@optunity.cross_validated(x=data, y=labels, num_folds=2)\ndef f_2folds(x_train, y_train, x_test, y_test):\n print(\"\")\n print(\"train data:\\t\" + str(x_train) + \"\\t train labels:\\t\" + str(y_train))\n print(\"test data:\\t\" + str(x_test) + \"\\t test labels:\\t\" + str(y_test))\n return 0.0",
"If we use three folds instead of 2, we get 3 iterations in which the training set is twice the size of the test set.",
"f_3folds = optunity.cross_validated(x=data, y=labels, num_folds=3)(f)\nprint(\"using 3 folds\")\nf_3folds()",
"If we do two iterations of 3-fold cross-validation (denoted by 2x3 fold), two sets of folds are generated and evaluated.",
"f_2x3folds = optunity.cross_validated(x=data, y=labels, num_folds=3, num_iter=2)(f)\nprint(\"using 2x3 folds\")\nf_2x3folds()",
"Using strata and clusters<a id=strata-clusters></a>\nStrata are defined as sets of instances that should be spread out across folds as much as possible (e.g. stratify patients by age). Clusters are sets of instances that must be put in a single fold (e.g. cluster measurements of the same patient).\nOptunity allows you to specify strata and/or clusters that must be accounted for while construct cross-validation folds. Not all instances have to belong to a stratum or clusters.\nStrata\nWe start by illustrating strata. Strata are specified as a list of lists of instances indices. Each list defines one stratum. We will reuse the toy data and objective function specified above. We will create 2 strata with 2 instances each. These instances will be spread across folds. We create two strata: ${0, 1}$ and ${2, 3}$.",
"strata = [[0, 1], [2, 3]]\nf_stratified = optunity.cross_validated(x=data, y=labels, strata=strata, num_folds=3)(f)\nf_stratified()",
"Clusters\nClusters work similarly, except that now instances within a cluster are guaranteed to be placed within a single fold. The way to specify clusters is identical to strata. We create two clusters: ${0, 1}$ and ${2, 3}$. These pairs will always occur in a single fold.",
"clusters = [[0, 1], [2, 3]]\nf_clustered = optunity.cross_validated(x=data, y=labels, clusters=clusters, num_folds=3)(f)\nf_clustered()",
"Strata and clusters\nStrata and clusters can be used together. Lets say we have the following configuration:\n\n1 stratum: ${0, 1, 2}$\n2 clusters: ${0, 3}$, ${4, 5}$\n\nIn this particular example, instances 1 and 2 will inevitably end up in a single fold, even though they are part of one stratum. This happens because the total data set has size 6, and 4 instances are already in clusters.",
"strata = [[0, 1, 2]]\nclusters = [[0, 3], [4, 5]]\nf_strata_clustered = optunity.cross_validated(x=data, y=labels, clusters=clusters, strata=strata, num_folds=3)(f)\nf_strata_clustered()",
"Aggregators <a id=aggregators></a>\nAggregators are used to combine the scores per fold into a single result. The default approach used in cross-validation is to take the mean of all scores. In some cases, we might be interested in worst-case or best-case performance, the spread, ...\nOpunity allows passing a custom callable to be used as aggregator. \nThe default aggregation in Optunity is to compute the mean across folds.",
"@optunity.cross_validated(x=data, num_folds=3)\ndef f(x_train, x_test):\n result = x_test[0]\n print(result)\n return result\n\nf(1)",
"This can be replaced by any function, e.g. min or max.",
"@optunity.cross_validated(x=data, num_folds=3, aggregator=max)\ndef fmax(x_train, x_test):\n result = x_test[0]\n print(result)\n return result\n\nfmax(1)\n\n@optunity.cross_validated(x=data, num_folds=3, aggregator=min)\ndef fmin(x_train, x_test):\n result = x_test[0]\n print(result)\n return result\n\nfmin(1)",
"Retaining intermediate results\nOften, it may be useful to retain all intermediate results, not just the final aggregated data. This is made possible via optunity.cross_validation.mean_and_list aggregator. This aggregator computes the mean for internal use in cross-validation, but also returns a list of lists containing the full evaluation results.",
"@optunity.cross_validated(x=data, num_folds=3,\n aggregator=optunity.cross_validation.mean_and_list)\ndef f_full(x_train, x_test, coeff):\n return x_test[0] * coeff\n\n# evaluate f\nmean_score, all_scores = f_full(1.0)\nprint(mean_score)\nprint(all_scores)\n",
"Note that a cross-validation based on the mean_and_list aggregator essentially returns a tuple of results. If the result is iterable, all solvers in Optunity use the first element as the objective function value. You can let the cross-validation procedure return other useful statistics too, which you can access from the solver trace.",
"opt_coeff, info, _ = optunity.minimize(f_full, coeff=[0, 1], num_evals=10)\nprint(opt_coeff)\nprint(\"call log\")\nfor args, val in zip(info.call_log['args']['coeff'], info.call_log['values']):\n print(str(args) + '\\t\\t' + str(val))",
"Cross-validation with scikit-learn <a id=cv-sklearn></a>\nIn this example we will show how to use cross-validation methods that are provided by scikit-learn in conjunction with Optunity. To do this we provide Optunity with the folds that scikit-learn produces in a specific format.\nIn supervised learning datasets often have unbalanced labels. When performing cross-validation with unbalanced data it is good practice to preserve the percentage of samples for each class across folds. To achieve this label balance we will use <a href=\"http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.StratifiedKFold.html\">StratifiedKFold</a>.",
"data = list(range(20))\nlabels = [1 if i%4==0 else 0 for i in range(20)]\n\n@optunity.cross_validated(x=data, y=labels, num_folds=5)\ndef unbalanced_folds(x_train, y_train, x_test, y_test):\n print(\"\")\n print(\"train data:\\t\" + str(x_train) + \"\\ntrain labels:\\t\" + str(y_train)) + '\\n'\n print(\"test data:\\t\" + str(x_test) + \"\\ntest labels:\\t\" + str(y_test)) + '\\n'\n return 0.0\n\nunbalanced_folds()",
"Notice above how the test label sets have a varying number of postive samples, some have none, some have one, and some have two.",
"from sklearn.cross_validation import StratifiedKFold\n\nstratified_5folds = StratifiedKFold(labels, n_folds=5)\nfolds = [[list(test) for train, test in stratified_5folds]]\n\n@optunity.cross_validated(x=data, y=labels, folds=folds, num_folds=5)\ndef balanced_folds(x_train, y_train, x_test, y_test):\n print(\"\")\n print(\"train data:\\t\" + str(x_train) + \"\\ntrain labels:\\t\" + str(y_train)) + '\\n'\n print(\"test data:\\t\" + str(x_test) + \"\\ntest labels:\\t\" + str(y_test)) + '\\n'\n return 0.0\n\nbalanced_folds()",
"Now all of our train sets have four positive samples and our test sets have one positive sample.\nTo use predetermined folds, place a list of the test sample idices into a list. And then insert that list into another list. Why so many nested lists? Because you can perform multiple cross-validation runs by setting num_iter appropriately and then append num_iter lists of test samples to the outer most list. Note that the test samples for a given fold are the idicies that you provide and then the train samples for that fold are all of the indices from all other test sets joined together. If not done carefully this may lead to duplicated samples in a train set and also samples that fall in both train and test sets of a fold if a datapoint is in multiple folds' test sets.",
"data = list(range(6))\nlabels = [True] * 3 + [False] * 3\n\nfold1 = [[0, 3], [1, 4], [2, 5]]\nfold2 = [[0, 5], [1, 4], [0, 3]] # notice what happens when the indices are not unique\nfolds = [fold1, fold2]\n\n@optunity.cross_validated(x=data, y=labels, folds=folds, num_folds=3, num_iter=2)\ndef multiple_iters(x_train, y_train, x_test, y_test):\n print(\"\")\n print(\"train data:\\t\" + str(x_train) + \"\\t train labels:\\t\" + str(y_train))\n print(\"test data:\\t\" + str(x_test) + \"\\t\\t test labels:\\t\" + str(y_test))\n return 0.0\n\nmultiple_iters()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Kaggle/learntools | notebooks/game_ai/raw/tut2.ipynb | apache-2.0 | [
"Introduction\nEven if you're new to Connect Four, you've likely developed several game-playing strategies. In this tutorial, you'll learn to use a heuristic to share your knowledge with the agent. \nGame trees\nAs a human player, how do you think about how to play the game? How do you weigh alternative moves?\nYou likely do a bit of forecasting. For each potential move, you predict what your opponent is likely to do in response, along with how you'd then respond, and what the opponent is likely to do then, and so on. Then, you choose the move that you think is most likely to result in a win.\nWe can formalize this idea and represent all possible outcomes in a (complete) game tree. \n<center>\n<img src=\"https://i.imgur.com/EZKHxyy.png\"><br/>\n</center>\nThe game tree represents each possible move (by agent and opponent), starting with an empty board. The first row shows all possible moves the agent (red player) can make. Next, we record each move the opponent (yellow player) can make in response, and so on, until each branch reaches the end of the game. (The game tree for Connect Four is quite large, so we show only a small preview in the image above.)\nOnce we can see every way the game can possibly end, it can help us to pick the move where we are most likely to win.\nHeuristics\nThe complete game tree for Connect Four has over 4 trillion different boards! So in practice, our agent only works with a small subset when planning a move. \nTo make sure the incomplete tree is still useful to the agent, we will use a heuristic (or heuristic function). The heuristic assigns scores to different game boards, where we estimate that boards with higher scores are more likely to result in the agent winning the game. You will design the heuristic based on your knowledge of the game.\nFor instance, one heuristic that might work reasonably well for Connect Four looks at each group of four adjacent locations in a (horizontal, vertical, or diagonal) line and assigns:\n- 1000000 (1e6) points if the agent has four discs in a row (the agent won), \n- 1 point if the agent filled three spots, and the remaining spot is empty (the agent wins if it fills in the empty spot), and\n- -100 points if the opponent filled three spots, and the remaining spot is empty (the opponent wins by filling in the empty spot).\nThis is also represented in the image below.\n<center>\n<img src=\"https://i.imgur.com/vzQa4ML.png\" width=70%><br/>\n</center>\nAnd how exactly will the agent use the heuristic? Consider it's the agent's turn, and it's trying to plan a move for the game board shown at the top of the figure below. There are seven possible moves (one for each column). For each move, we record the resulting game board.\n<center>\n<img src=\"https://i.imgur.com/PtnLOHt.png\" width=100%><br/>\n</center>\nThen we use the heuristic to assign a score to each board. To do this, we search the grid and look for all occurrences of the pattern in the heuristic, similar to a word search puzzle. Each occurrence modifies the score. For instance,\n- The first board (where the agent plays in column 0) gets a score of 2. This is because the board contains two distinct patterns that each add one point to the score (where both are circled in the image above).\n- The second board is assigned a score of 1.\n- The third board (where the agent plays in column 2) gets a score of 0. This is because none of the patterns from the heuristic appear in the board.\nThe first board receives the highest score, and so the agent will select this move. 
It's also the best outcome for the agent, since it has a guaranteed win in just one more move. Check this in figure now, to make sure it makes sense to you! \nThe heuristic works really well for this specific example, since it matches the best move with the highest score. It is just one of many heuristics that works reasonably well for creating a Connect Four agent, and you may find that you can design a heuristic that works much better!\nIn general, if you're not sure how to design your heuristic (i.e., how to score different game states, or which scores to assign to different conditions), often the best thing to do is to simply take an initial guess and then play against your agent. This will let you identify specific cases when your agent makes bad moves, which you can then fix by modifying the heuristic.\nCode\nOur one-step lookahead agent will:\n- use the heuristic to assign a score to each possible valid move, and\n- select the move that gets the highest score. (If multiple moves get the high score, we select one at random.)\n\"One-step lookahead\" refers to the fact that the agent looks only one step (or move) into the future, instead of deeper in the game tree. \nTo define this agent, we will use the functions in the code cell below. These functions will make more sense when we use them to specify the agent.",
"#$HIDE_INPUT$\nimport random\nimport numpy as np\n\n# Calculates score if agent drops piece in selected column\ndef score_move(grid, col, mark, config):\n next_grid = drop_piece(grid, col, mark, config)\n score = get_heuristic(next_grid, mark, config)\n return score\n\n# Helper function for score_move: gets board at next step if agent drops piece in selected column\ndef drop_piece(grid, col, mark, config):\n next_grid = grid.copy()\n for row in range(config.rows-1, -1, -1):\n if next_grid[row][col] == 0:\n break\n next_grid[row][col] = mark\n return next_grid\n\n# Helper function for score_move: calculates value of heuristic for grid\ndef get_heuristic(grid, mark, config):\n num_threes = count_windows(grid, 3, mark, config)\n num_fours = count_windows(grid, 4, mark, config)\n num_threes_opp = count_windows(grid, 3, mark%2+1, config)\n score = num_threes - 1e2*num_threes_opp + 1e6*num_fours\n return score\n\n# Helper function for get_heuristic: checks if window satisfies heuristic conditions\ndef check_window(window, num_discs, piece, config):\n return (window.count(piece) == num_discs and window.count(0) == config.inarow-num_discs)\n \n# Helper function for get_heuristic: counts number of windows satisfying specified heuristic conditions\ndef count_windows(grid, num_discs, piece, config):\n num_windows = 0\n # horizontal\n for row in range(config.rows):\n for col in range(config.columns-(config.inarow-1)):\n window = list(grid[row, col:col+config.inarow])\n if check_window(window, num_discs, piece, config):\n num_windows += 1\n # vertical\n for row in range(config.rows-(config.inarow-1)):\n for col in range(config.columns):\n window = list(grid[row:row+config.inarow, col])\n if check_window(window, num_discs, piece, config):\n num_windows += 1\n # positive diagonal\n for row in range(config.rows-(config.inarow-1)):\n for col in range(config.columns-(config.inarow-1)):\n window = list(grid[range(row, row+config.inarow), range(col, col+config.inarow)])\n if check_window(window, num_discs, piece, config):\n num_windows += 1\n # negative diagonal\n for row in range(config.inarow-1, config.rows):\n for col in range(config.columns-(config.inarow-1)):\n window = list(grid[range(row, row-config.inarow, -1), range(col, col+config.inarow)])\n if check_window(window, num_discs, piece, config):\n num_windows += 1\n return num_windows",
"The one-step lookahead agent is defined in the next code cell.",
"# The agent is always implemented as a Python function that accepts two arguments: obs and config\ndef agent(obs, config):\n # Get list of valid moves\n valid_moves = [c for c in range(config.columns) if obs.board[c] == 0]\n # Convert the board to a 2D grid\n grid = np.asarray(obs.board).reshape(config.rows, config.columns)\n # Use the heuristic to assign a score to each possible board in the next turn\n scores = dict(zip(valid_moves, [score_move(grid, col, obs.mark, config) for col in valid_moves]))\n # Get a list of columns (moves) that maximize the heuristic\n max_cols = [key for key in scores.keys() if scores[key] == max(scores.values())]\n # Select at random from the maximizing columns\n return random.choice(max_cols)",
"In the code for the agent, we begin by getting a list of valid moves. This is the same line of code we used in the previous tutorial!\nNext, we convert the game board to a 2D numpy array. For Connect Four, grid is an array with 6 rows and 7 columns.\nThen, the score_move() function calculates the value of the heuristic for each valid move. It uses a couple of helper functions:\n- drop_piece() returns the grid that results when the player drops its disc in the selected column.\n- get_heuristic() calculates the value of the heuristic for the supplied board (grid), where mark is the mark of the agent. This function uses the count_windows() function, which counts the number of windows (of four adjacent locations in a row, column, or diagonal) that satisfy specific conditions from the heuristic. Specifically, count_windows(grid, num_discs, piece, config) yields the number of windows in the game board (grid) that contain num_discs pieces from the player (agent or opponent) with mark piece, and where the remaining locations in the window are empty. For instance, \n - setting num_discs=4 and piece=obs.mark counts the number of times the agent got four discs in a row.\n - setting num_discs=3 and piece=obs.mark%2+1 counts the number of windows where the opponent has three discs, and the remaining location is empty (the opponent wins by filling in the empty spot).\nFinally, we get the list of columns that maximize the heuristic and select one (uniformly) at random. \n(Note: For this course, we decided to provide relatively slower code that was easier to follow. After you've taken the time to understand the code above, can you see how to re-write it, to make it run much faster? As a hint, note that the count_windows() function is used several times to loop over the locations in the game board.)\nIn the next code cell, we see the outcome of one game round against a random agent.",
"from kaggle_environments import make, evaluate\n\n# Create the game environment\nenv = make(\"connectx\")\n\n# Two random agents play one game round\nenv.run([agent, \"random\"])\n\n# Show the game\nenv.render(mode=\"ipython\")",
"We use the get_win_percentage() function from the previous tutorial to check how we can expect it to perform on average.",
"#$HIDE_INPUT$\ndef get_win_percentages(agent1, agent2, n_rounds=100):\n # Use default Connect Four setup\n config = {'rows': 6, 'columns': 7, 'inarow': 4}\n # Agent 1 goes first (roughly) half the time \n outcomes = evaluate(\"connectx\", [agent1, agent2], config, [], n_rounds//2)\n # Agent 2 goes first (roughly) half the time \n outcomes += [[b,a] for [a,b] in evaluate(\"connectx\", [agent2, agent1], config, [], n_rounds-n_rounds//2)]\n print(\"Agent 1 Win Percentage:\", np.round(outcomes.count([1,-1])/len(outcomes), 2))\n print(\"Agent 2 Win Percentage:\", np.round(outcomes.count([-1,1])/len(outcomes), 2))\n print(\"Number of Invalid Plays by Agent 1:\", outcomes.count([None, 0]))\n print(\"Number of Invalid Plays by Agent 2:\", outcomes.count([0, None]))\n\nget_win_percentages(agent1=agent, agent2=\"random\")",
"This agent performs much better than the random agent!\nYour turn\nContinue to the exercise to improve the heuristic."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
charliememory/AutonomousDriving | CarND-Advanced-Lane-Lines/src/.ipynb_checkpoints/camera_calibration-checkpoint.ipynb | gpl-3.0 | [
"%%HTML\n<style> code {background-color : pink !important;} </style>",
"Camera Calibration with OpenCV\nRun the code in the cell below to extract object points and image points for camera calibration.",
"import numpy as np\nimport cv2\nimport glob\nimport matplotlib.pyplot as plt\n%matplotlib qt\n\n# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)\nobjp = np.zeros((6*8,3), np.float32)\nobjp[:,:2] = np.mgrid[0:8, 0:6].T.reshape(-1,2)\n\n# Arrays to store object points and image points from all the images.\nobjpoints = [] # 3d points in real world space\nimgpoints = [] # 2d points in image plane.\n\n# Make a list of calibration images\nimages = glob.glob('../camera_cal/calibration*.jpg')\n\n# Step through the list and search for chessboard corners\nfor idx, fname in enumerate(images):\n img = cv2.imread(fname)\n gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n\n # Find the chessboard corners\n ret, corners = cv2.findChessboardCorners(gray, (8,6), None)\n\n # If found, add object points, image points\n if ret == True:\n objpoints.append(objp)\n imgpoints.append(corners)\n\n # Draw and display the corners\n cv2.drawChessboardCorners(img, (8,6), corners, ret)\n #write_name = 'corners_found'+str(idx)+'.jpg'\n #cv2.imwrite(write_name, img)\n cv2.imshow('img', img)\n cv2.waitKey(500)\n\ncv2.destroyAllWindows()",
"If the above cell ran sucessfully, you should now have objpoints and imgpoints needed for camera calibration. Run the cell below to calibrate, calculate distortion coefficients, and test undistortion on an image!",
"import pickle\n%matplotlib inline\n\n# Test undistortion on an image\nimg = cv2.imread('calibration_wide/test_image.jpg')\nimg_size = (img.shape[1], img.shape[0])\n\n# Do camera calibration given object points and image points\nret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size,None,None)\n\n\ndst = cv2.undistort(img, mtx, dist, None, mtx)\ncv2.imwrite('calibration_wide/test_undist.jpg',dst)\n\n# Save the camera calibration result for later use (we won't worry about rvecs / tvecs)\ndist_pickle = {}\ndist_pickle[\"mtx\"] = mtx\ndist_pickle[\"dist\"] = dist\npickle.dump( dist_pickle, open( \"calibration_wide/wide_dist_pickle.p\", \"wb\" ) )\n#dst = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB)\n# Visualize undistortion\nf, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))\nax1.imshow(img)\nax1.set_title('Original Image', fontsize=30)\nax2.imshow(dst)\nax2.set_title('Undistorted Image', fontsize=30)"
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] |
johnpfay/environ859 | 06_WebGIS/Notebooks/GeocodingWithOSM.ipynb | gpl-3.0 | [
"Geocoding using the Open Street Map API\nHere we explore an example of using an Application Programming Interface, or API. Briefly, an API is a set of commands we can send over the internet to a remote server, spurring the server to process these commands and return a response. In this example, we'll explore how we can use the Open Street Map's geocoding API to get the coordinates responding to a particular address.\nThis is not an in-depth exploration of this particular API, but rather an introduction on how to use an API within Python, specfically using the handy requests and json libraries. \n\nFirst we import the requests and json modules.<br>Usfeful documentation on these modules are found here: <br>\n* requests: http://docs.python-requests.org\n* json: https://pymotw.com/2/json/",
"#Import modules\nimport requests\nimport json",
"Now we will form the request to invoke the Open Street Map API. Documentation on this API is found here: \nhttp://wiki.openstreetmap.org/wiki/Nominatim\nFirst, we'll generate an example address to geocode. Why not use Environment Hall? But feel free to use your own address!",
"#Get the address\naddress = '9 Circuit Drive, Durham, NC, 27708' ",
"An API request consists two components: the service endpoint and a set of parameters associated with the service. \nWhen using the requests module to create and send our request, we supply the service endpoint is a string containing the server address (as a URL) and the service name (here, it's search). And the parameters are supplied in the form of a Python dictionary. Here, the two paramters we'll pass are the format and address parameters.",
"#Form the request\nosmURL = 'http://nominatim.openstreetmap.org/search'\nparams = {'format':'json','q':address} ",
"Now, we can use requests to send our command off to the OSM server. The server's response is saved as the response variable.",
"#Send the request\nresponse = requests.get(osmURL, params)",
"The response object below contains a lot of information. You are encouraged to explore this object further. Here we'll explore one property which is the full URL created. Copy and paste the result in your favorite browser, and you'll see the result of our request in raw form. When you try this, try changing 'json' to 'html' in the URL...",
"response.url\n\n#Opens the URL as an html response (vs JSON) in a web browser...\nimport webbrowser\nwebbrowser.open_new(response.url.replace('json','html'))",
"What we really want from the response, however, is the data returned by the service. The json function of the response object converts the response to an object in JavaScript Object Notation, or JSON. JSON is esentially a list of dictionaries that we can easily manipulate in Python.",
"#Read in the response as a JSON encoded object\njsonObj = response.json()",
"pprint or \"pretty print\" allows us to display JSON objects in a readable format. Let's make a pretty print of our JSON repsonse.",
"from pprint import pprint\npprint(jsonObj)",
"Our response contains only one item in the JSON list. We'll extract to a dictionary and print it's items.",
"dataDict = jsonObj[0]\nprint dataDict.keys()",
"Now we can easily grab the lat and lon objects from our response",
"lat = float(dataDict['lat'])\nlng = float(dataDict['lon'])\nprint \"The lat,lng\n\nd = jsonObj[0]\nd['lon'],d['lat']",
"Now let's inform the user of the result of the whole process...",
"print \"The address {0} is located at\\n{1}° Lat, {2}° Lon\".format(address,lat,lng)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
statsmodels/statsmodels.github.io | v0.13.0/examples/notebooks/generated/variance_components.ipynb | bsd-3-clause | [
"Variance Component Analysis\nThis notebook illustrates variance components analysis for two-level\nnested and crossed designs.",
"import numpy as np\nimport statsmodels.api as sm\nfrom statsmodels.regression.mixed_linear_model import VCSpec\nimport pandas as pd",
"Make the notebook reproducible",
"np.random.seed(3123)",
"Nested analysis\nIn our discussion below, \"Group 2\" is nested within \"Group 1\". As a\nconcrete example, \"Group 1\" might be school districts, with \"Group\n2\" being individual schools. The function below generates data from\nsuch a population. In a nested analysis, the group 2 labels that\nare nested within different group 1 labels are treated as\nindependent groups, even if they have the same label. For example,\ntwo schools labeled \"school 1\" that are in two different school\ndistricts are treated as independent schools, even though they have\nthe same label.",
"def generate_nested(\n n_group1=200, n_group2=20, n_rep=10, group1_sd=2, group2_sd=3, unexplained_sd=4\n):\n\n # Group 1 indicators\n group1 = np.kron(np.arange(n_group1), np.ones(n_group2 * n_rep))\n\n # Group 1 effects\n u = group1_sd * np.random.normal(size=n_group1)\n effects1 = np.kron(u, np.ones(n_group2 * n_rep))\n\n # Group 2 indicators\n group2 = np.kron(np.ones(n_group1), np.kron(np.arange(n_group2), np.ones(n_rep)))\n\n # Group 2 effects\n u = group2_sd * np.random.normal(size=n_group1 * n_group2)\n effects2 = np.kron(u, np.ones(n_rep))\n\n e = unexplained_sd * np.random.normal(size=n_group1 * n_group2 * n_rep)\n y = effects1 + effects2 + e\n\n df = pd.DataFrame({\"y\": y, \"group1\": group1, \"group2\": group2})\n\n return df",
"Generate a data set to analyze.",
"df = generate_nested()",
"Using all the default arguments for generate_nested, the population\nvalues of \"group 1 Var\" and \"group 2 Var\" are 2^2=4 and 3^2=9,\nrespectively. The unexplained variance, listed as \"scale\" at the\ntop of the summary table, has population value 4^2=16.",
"model1 = sm.MixedLM.from_formula(\n \"y ~ 1\",\n re_formula=\"1\",\n vc_formula={\"group2\": \"0 + C(group2)\"},\n groups=\"group1\",\n data=df,\n)\nresult1 = model1.fit()\nprint(result1.summary())",
"If we wish to avoid the formula interface, we can fit the same model\nby building the design matrices manually.",
"def f(x):\n n = x.shape[0]\n g2 = x.group2\n u = g2.unique()\n u.sort()\n uv = {v: k for k, v in enumerate(u)}\n mat = np.zeros((n, len(u)))\n for i in range(n):\n mat[i, uv[g2.iloc[i]]] = 1\n colnames = [\"%d\" % z for z in u]\n return mat, colnames",
"Then we set up the variance components using the VCSpec class.",
"vcm = df.groupby(\"group1\").apply(f).to_list()\nmats = [x[0] for x in vcm]\ncolnames = [x[1] for x in vcm]\nnames = [\"group2\"]\nvcs = VCSpec(names, [colnames], [mats])",
"Finally we fit the model. It can be seen that the results of the\ntwo fits are identical.",
"oo = np.ones(df.shape[0])\nmodel2 = sm.MixedLM(df.y, oo, exog_re=oo, groups=df.group1, exog_vc=vcs)\nresult2 = model2.fit()\nprint(result2.summary())",
"Crossed analysis\nIn a crossed analysis, the levels of one group can occur in any\ncombination with the levels of the another group. The groups in\nStatsmodels MixedLM are always nested, but it is possible to fit a\ncrossed model by having only one group, and specifying all random\neffects as variance components. Many, but not all crossed models\ncan be fit in this way. The function below generates a crossed data\nset with two levels of random structure.",
"def generate_crossed(\n n_group1=100, n_group2=100, n_rep=4, group1_sd=2, group2_sd=3, unexplained_sd=4\n):\n\n # Group 1 indicators\n group1 = np.kron(\n np.arange(n_group1, dtype=int), np.ones(n_group2 * n_rep, dtype=int)\n )\n group1 = group1[np.random.permutation(len(group1))]\n\n # Group 1 effects\n u = group1_sd * np.random.normal(size=n_group1)\n effects1 = u[group1]\n\n # Group 2 indicators\n group2 = np.kron(\n np.arange(n_group2, dtype=int), np.ones(n_group2 * n_rep, dtype=int)\n )\n group2 = group2[np.random.permutation(len(group2))]\n\n # Group 2 effects\n u = group2_sd * np.random.normal(size=n_group2)\n effects2 = u[group2]\n\n e = unexplained_sd * np.random.normal(size=n_group1 * n_group2 * n_rep)\n y = effects1 + effects2 + e\n\n df = pd.DataFrame({\"y\": y, \"group1\": group1, \"group2\": group2})\n\n return df",
"Generate a data set to analyze.",
"df = generate_crossed()",
"Next we fit the model, note that the groups vector is constant.\nUsing the default parameters for generate_crossed, the level 1\nvariance should be 2^2=4, the level 2 variance should be 3^2=9, and\nthe unexplained variance should be 4^2=16.",
"vc = {\"g1\": \"0 + C(group1)\", \"g2\": \"0 + C(group2)\"}\noo = np.ones(df.shape[0])\nmodel3 = sm.MixedLM.from_formula(\"y ~ 1\", groups=oo, vc_formula=vc, data=df)\nresult3 = model3.fit()\nprint(result3.summary())",
"If we wish to avoid the formula interface, we can fit the same model\nby building the design matrices manually.",
"def f(g):\n n = len(g)\n u = g.unique()\n u.sort()\n uv = {v: k for k, v in enumerate(u)}\n mat = np.zeros((n, len(u)))\n for i in range(n):\n mat[i, uv[g[i]]] = 1\n colnames = [\"%d\" % z for z in u]\n return [mat], [colnames]\n\n\nvcm = [f(df.group1), f(df.group2)]\nmats = [x[0] for x in vcm]\ncolnames = [x[1] for x in vcm]\nnames = [\"group1\", \"group2\"]\nvcs = VCSpec(names, colnames, mats)",
"Here we fit the model without using formulas, it is simple to check\nthat the results for models 3 and 4 are identical.",
"oo = np.ones(df.shape[0])\nmodel4 = sm.MixedLM(df.y, oo[:, None], exog_re=None, groups=oo, exog_vc=vcs)\nresult4 = model4.fit()\nprint(result4.summary())"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
agrc/Presentations | UGIC/2022/SpatiallyEnabledDataFrames/alpha.ipynb | mit | [
"Ditch the Cursor\n\nEditing Feature Classes with Spatialy-Enabled DataFrames\nArcPy Is Great, But...\n\nProblem one: row[0]\n```python\ndef update_year_built(layer, year_fields):\n with arcpy.da.UpdateCursor(layer, year_fields) as cursor:\n for row in cursor:\n if row[0] is None or row[0] < 1 or row[0] == '':\n row[0] = f'{row[1]}{row[2]}' \n cursor.updateRow(row)\n\n```\n```python\nIf new parcels' owner/owner addr changed, or if PID is new, add to appropriate lists\nwith arcpy.da.SearchCursor(tville_parcels_fc, parcel_check_fields) as new_cursor:\n for row in new_cursor:\n if row[0] in old_parcels:\n old_name = old_parcels[row[0]][0]\n old_addr = old_parcels[row[0]][1]\n if row[1] != old_name or row[2] != old_addr:\n own_addr_changed.append(row[0])\n else:\n new_parcels.append(row[0])\n```\nProblem 2: Nested cursors to transfer data between feature classes\npython\nwith arcpy.da.SearchCursor(new_data_fc, fields) as new_data_cursor, \\\n arcpy.da.InsertCursor(current_data_fc, fields) as current_data_cursor:\n for row in new_data_cursor:\n current_data_cursor.insertRow(row)\n copied_records += 1\nProblem 3: Renaming/reordering fields\n```python\nfieldmappings = arcpy.FieldMappings()\nfieldmappings.addTable(energov_parcels_fc)\nfieldmappings.addTable(tville_parcels_fc)\nfields_list = [\n ('PIN', 'parcel_id'),\n ('own_cityst', 'own_citystate'),\n ('own_zip_fo', 'own_zip_four'),\n ('prop_locat', 'prop_location'),\n ('property_t', 'property_type'),\n ('neighborho', 'neighborhood_code'),\n ('adjusted_p', 'adjusted_prcl_total'),\n #: ...\n]\nfor field_map in fields_list:\n field_to_map_index = fieldmappings.findFieldMapIndex(field_map[0])\n field_to_map = fieldmappings.getFieldMap(field_to_map_index)\n field_to_map.addInputField(tville_parcels_fc, field_map[1])\n fieldmappings.replaceFieldMap(field_to_map_index, field_to_map)\n```\nProblem 4: Intermediate feature classes\npython\nssa_summarized_roads = fr'{output_gdb}\\ssa_bike_lanes_roads'\nssa_summarized_paths = fr'{output_gdb}\\ssa_bike_lanes_paths'\nssa_summarized_lengths = fr'{output_gdb}b\\SmallStatisticalAreas_2018_bike_lane_lengths'\ntract_summarized_roads = fr'{output_gdb}\\tract_bike_lanes_roads'\ntract_summarized_paths = fr'{output_gdb}\\tract_bike_lanes_paths'\ntract_summarized_lengths = fr'{output_gdb}\\census_tracts_2020_bike_lane_lengths'\nbuffered_tracts = fr'{output_gdb}\\census_tracts_2020_buffered_30ft'\nbuffered_areas = fr'{output_gdb}\\small_areas_buffered_200ft'\nbike_lanes = fr'{output_gdb}\\bike_lanes_20220111'\nmajor_paths = fr'{output_gdb}\\major_paths'\nEnter the Pandas!\n\npandas gives you the tools to work with tables of data defined by rows and columns, called a DataFrame",
"import pandas as pd\n\nmedians_df = pd.read_csv('assets/median_age.csv')\nmedians_df.head()",
"We can access individual rows and columns using .loc (with index labels) or .iloc (with indices)\npython\nmedians_df.loc[row labels, column labels]\nmedians_df.iloc[row indices, column indices]",
"medians_df.loc[[0, 1, 2, 5], 'County']\n\nmedians_df.iloc[10:15, :4]",
"We can also get just a few columns from all rows",
"medians_df[['Median_age', 'Avg_MonthlyIncome']].head()",
"Extending pandas Spatially\nThe ArcGIS API for Python provides Spatially Enabled DataFrames, which include geometry information.",
"from arcgis.features import GeoAccessor, GeoSeriesAccessor\n\ncounties_fc_path = r'C:\\Users\\jdadams\\AppData\\Roaming\\Esri\\ArcGISPro\\Favorites\\opensgid.agrc.utah.gov.sde\\opensgid.boundaries.county_boundaries'\ncounties_df = pd.DataFrame.spatial.from_featureclass(counties_fc_path)\ncounties_df.head()",
"pandas lets you work on rows that meet a certain condition",
"counties_df.loc[counties_df['stateplane'] == 'Central', ['name', 'stateplane', 'fips_str']]",
"You can easily add new columns",
"counties_df['emperor'] = 'Jake'\ncounties_df.head()",
"pandas provides powerful built in grouping and aggregation tools, along with Spatially Enabled DataFrames' geometry operations",
"counties_df.groupby('stateplane').count()\n\ncounties_df['acres'] = counties_df['SHAPE'].apply(lambda shape: shape.area / 4046.8564)\ncounties_df.groupby('stateplane')['acres'].sum()",
"pandas Solutions to our Arcpy Problems\n\nrow[0] Solution: Field Names\n```python\ndef update_unit_count(parcels_df):\n \"\"\"Update unit counts in-place for single family, duplex, and tri/quad\nArgs:\n parcels_df (pd.DataFrame): The evaluated parcel dataset with UNIT_COUNT, HOUSE_CNT, SUBTYPE, and NOTE columns\n\"\"\"\n\n# fix single family (non-pud)\nzero_or_null_unit_counts = (parcels_df['UNIT_COUNT'] == 0) | (parcels_df['UNIT_COUNT'].isna())\nparcels_df.loc[(zero_or_null_unit_counts) & (parcels_df['SUBTYPE'] == 'single_family'), 'UNIT_COUNT'] = 1\n\n# fix duplex\nparcels_df.loc[(parcels_df['SUBTYPE'] == 'duplex'), 'UNIT_COUNT'] = 2\n\n# fix triplex-quadplex\nparcels_df.loc[(parcels_df['UNIT_COUNT'] < parcels_df['HOUSE_CNT']) & (parcels_df['NOTE'] == 'triplex-quadplex'),\n 'UNIT_COUNT'] = parcels_df['HOUSE_CNT']\n\n```\nLet's make Erik the emperor of the small counties that use State Plane North",
"counties_df.loc[(counties_df['pop_lastcensus'] < 100000) & (counties_df['stateplane'] == 'North'), 'emperor'] = 'Erik'\ncounties_df[['name', 'pop_lastcensus', 'stateplane', 'emperor']].sort_values('name').head()",
"Nested Cursors Solution: Merged DataFrames\n```python\ndef _get_current_attachment_info_by_oid(self, live_data_subset_df):\n#: Join live attachment table to feature layer info\nlive_attachments_df = pd.DataFrame(self.feature_layer.attachments.search())\nlive_attachments_subset_df = live_attachments_df.reindex(columns=['PARENTOBJECTID', 'NAME', 'ID'])\nmerged_df = live_data_subset_df.merge(\n live_attachments_subset_df, left_on='OBJECTID', right_on='PARENTOBJECTID', how='left'\n)\n\nreturn merged_df\n\n```\nLet's add census data to our counties",
"census_fc_path = r'C:\\Users\\jdadams\\AppData\\Roaming\\Esri\\ArcGISPro\\Favorites\\opensgid.agrc.utah.gov.sde\\opensgid.demographic.census_counties_2020'\ncensus_df = pd.DataFrame.spatial.from_featureclass(census_fc_path)\ncounties_with_census_df = counties_df.merge(census_df[['geoid20', 'aland20']], left_on='fips_str', right_on='geoid20')\ncounties_with_census_df.head()",
"Renaming/Reordering Fields Solution: df.rename() and df.reindex()\npython\nfinal_parcels_df.rename(\n columns={\n 'name': 'CITY', #: from cities\n 'NewSA': 'SUBCOUNTY', #: From subcounties/regions\n 'BUILT_YR': 'APX_BLT_YR',\n 'BLDG_SQFT': 'TOT_BD_FT2',\n 'TOTAL_MKT_VALUE': 'TOT_VALUE',\n 'PARCEL_ACRES': 'ACRES',\n },\n inplace=True\n)\n```python\nfinal_fields = [\n 'SHAPE', 'UNIT_ID', 'TYPE', 'SUBTYPE', 'IS_OUG', 'UNIT_COUNT', 'DUA', 'ACRES', 'TOT_BD_FT2', 'TOT_VALUE',\n 'APX_BLT_YR', 'BLT_DECADE', 'CITY', 'COUNTY', 'SUBCOUNTY', 'PARCEL_ID'\n]\nlogging.info('Writing final data out to disk...')\noutput_df = final_parcels_df.reindex(columns=final_fields)\noutput_df.spatial.to_featureclass(output_fc, sanitize_columns=False)\n```\n\"Emperor\" is too bold; let's use \"Benevolent Dictator for Life\" instead.",
"renames = {\n 'name': 'County Name',\n 'pop_lastcensus': 'Last Census Population',\n 'emperor': 'Benevolent Dictator for Life',\n 'acres': 'Acres',\n 'aland20': 'Land Area',\n}\ncounties_with_census_df.rename(columns=renames, inplace=True)\ncounties_with_census_df.head()",
"Now that we've got it all looking good, let's reorder the fields and get rid of the ones we don't want",
"field_order = [\n 'County Name',\n 'Benevolent Dictator for Life',\n 'Acres',\n 'Land Area',\n 'Last Census Population',\n 'SHAPE'\n]\nfinal_counties_df = counties_with_census_df.reindex(columns=field_order)\nfinal_counties_df.head()",
"Intermediate Feature Classes: New DataFrame Variables\nWith everything we've done, we've not written a single feature class to either disk or in_memory\npython\ncounties_df\ncounties_with_census_df\nfinal_counties_df\nFinally, Write It All To Disk",
"final_counties_df.spatial.to_featureclass(r'C:\\gis\\Projects\\HousingInventory\\HousingInventory.gdb\\counties_ugic')",
"Spatial Joins\n```python\ncentroids_df = pd.DataFrame.spatial.from_featureclass(centroids_fc)\nwalksheds_df = pd.DataFrame.spatial.from_featureclass(walksheds_fc)\nwalk_centroids_df = centroids_df.spatial.join(walksheds_df, 'left', 'within')\n```\nUpdate an AGOL Hosted Feature Layer\n```python\nfeature_layer_item = gis.content.get(feature_layer_itemid)\nfeature_layer = arcgis.features.FeatureLayer.fromitem(feature_layer_item)\nlive_dataframe = pd.DataFrame.spatial.from_layer(feature_layer)\n: maniuplate/transform the existing data\ncleaned_dataframe = do_stuff(live_dataframe)\nfeature_layer.edit_features(updates=cleaned_dataframe.spatial.to_featureset())\n```\nResources?\n\nOfficial Docs\n\nPandas docs: https://pandas.pydata.org/docs/user_guide/index.html\nArcGIS API for Python Reference: https://developers.arcgis.com/python/api-reference/arcgis.features.toc.html#geoaccessor\n\nExample Code\n\nErik's 2020 UGIC Intro to Pandas presentation: https://agrc.github.io/Presentations/UGIC/2020/pandas.pdf\nUpdating AGOL with dataframe: https://github.com/agrc/palletjack/blob/main/src/palletjack/updaters.py#L175\nSo much dataframe craziness: https://github.com/agrc/housing-unit-inventory/tree/first-dev/src/housing_unit_inventory\n\n\njdadams@utah.gov\ngis.utah.gov/presentations"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
blink1073/oct2py | example/octavemagic_extension.ipynb | mit | [
"octavemagic: Octave inside IPython\nInstallation\nThe octavemagic extension provides the ability to interact with Octave. It is provided by the oct2py package,\nwhich may be installed using pip or easy_install.\nTo enable the extension, load it as follows:",
"%load_ext oct2py.ipython",
"Overview\nLoading the extension enables three magic functions: %octave, %octave_push, and %octave_pull.\nThe first is for executing one or more lines of Octave, while the latter allow moving variables between the Octave and Python workspace.\nHere you see an example of how to execute a single line of Octave, and how to transfer the generated value back to Python:",
"x = %octave [1 2; 3 4];\nx\n\na = [1, 2, 3]\n\n%octave_push a\n%octave a = a * 2;\n%octave_pull a\na",
"When using the cell magic, %%octave (note the double %), multiple lines of Octave can be executed together. Unlike\nwith the single cell magic, no value is returned, so we use the -i and -o flags to specify input and output variables. Also note the use of the semicolon to suppress the Octave output.",
"%%octave -i x -o U,S,V\n[U, S, V] = svd(x);\n\nprint(U, S, V)",
"Plotting\nPlot output is automatically captured and displayed, and using the -f flag you may choose its format (currently, png and svg are supported).",
"%%octave -f svg\n\np = [12 -2.5 -8 -0.1 8];\nx = 0:0.01:1;\n\npolyout(p, 'x')\nplot(x, polyval(p, x));",
"The width or the height can be specified to constrain the image while maintaining the original aspect ratio.",
"%%octave -f png -w 600\n\n% butterworth filter, order 2, cutoff pi/2 radians\nb = [0.292893218813452 0.585786437626905 0.292893218813452];\na = [1 0 0.171572875253810];\nfreqz(b, a, 32);\n\n%%octave -s 600,200 -f png\n\n% Note: On Windows, this will not show the plots unless Ghostscript is installed.\n\nsubplot(121);\n[x, y] = meshgrid(0:0.1:3);\nr = sin(x - 0.5).^2 + cos(y - 0.5).^2;\nsurf(x, y, r);\n\nsubplot(122);\nsombrero()",
"Multiple figures can be drawn. Note that when using imshow the image will be created as a PNG with the raw\nimage dimensions.",
"%%octave -f svg -h 300\nsombrero\nfigure\nimshow(rand(200,200))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
broundy/udacity | nanodegrees/deep_learning_foundations/unit_2/project_2/dlnd_image_classification.ipynb | unlicense | [
"Image Classification\nIn this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.\nGet the Data\nRun the following cell to download the CIFAR-10 dataset for python.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('cifar-10-python.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n 'cifar-10-python.tar.gz',\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open('cifar-10-python.tar.gz') as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)",
"Explore the Data\nThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:\n* airplane\n* automobile\n* bird\n* cat\n* deer\n* dog\n* frog\n* horse\n* ship\n* truck\nUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.\nAsk yourself \"What are all possible labels?\", \"What is the range of values for the image data?\", \"Are the labels in order or random?\". Answers to questions like these will help you preprocess the data and end up with better predictions.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 2\nsample_id = 3\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)",
"Implement Preprocess Functions\nNormalize\nIn the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.",
"def normalize(x):\n \"\"\"\n Normalize a list of sample image data in the range of 0 to 1\n : x: List of image data. The image shape is (32, 32, 3)\n : return: Numpy array of normalize data\n \"\"\"\n return x / np.max(x)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)",
"One-hot encode\nJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.\nHint: Don't reinvent the wheel.",
"def one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. Return a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n return np.eye(10)[x]\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\ntests.test_one_hot_encode(one_hot_encode)",
"Randomize Data\nAs you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.\nPreprocess all the data and save it\nRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))",
"Build the network\nFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.\n\nNote: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the \"Convolutional and Max Pooling Layer\" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.\nHowever, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. \n\nLet's begin!\nInput\nThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions\n* Implement neural_net_image_input\n * Return a TF Placeholder\n * Set the shape using image_shape with batch size set to None.\n * Name the TensorFlow placeholder \"x\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_label_input\n * Return a TF Placeholder\n * Set the shape using n_classes with batch size set to None.\n * Name the TensorFlow placeholder \"y\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_keep_prob_input\n * Return a TF Placeholder for dropout keep probability.\n * Name the TensorFlow placeholder \"keep_prob\" using the TensorFlow name parameter in the TF Placeholder.\nThese names will be used at the end of the project to load your saved model.\nNote: None for shapes in TensorFlow allow for a dynamic size.",
"import tensorflow as tf\n\ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a bach of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n return tf.placeholder(tf.float32, shape=[None, image_shape[0], image_shape[1], image_shape[2]], name='x')\n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n return tf.placeholder(tf.float32, shape=[None, n_classes], name='y')\n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n return tf.placeholder(tf.float32, name='keep_prob')\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)",
"Convolution and Max Pooling Layer\nConvolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:\n* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.\n* Apply a convolution to x_tensor using weight and conv_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\n* Add bias\n* Add a nonlinear activation to the convolution.\n* Apply Max Pooling using pool_ksize and pool_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\nNote: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.",
"def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):\n \"\"\"\n Apply convolution then max pooling to x_tensor\n :param x_tensor: TensorFlow Tensor\n :param conv_num_outputs: Number of outputs for the convolutional layer\n :param conv_ksize: kernal size 2-D Tuple for the convolutional layer\n :param conv_strides: Stride 2-D Tuple for convolution\n :param pool_ksize: kernal size 2-D Tuple for pool\n :param pool_strides: Stride 2-D Tuple for pool\n : return: A tensor that represents convolution and max pooling of x_tensor\n \"\"\"\n \n F_W = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_tensor.shape.as_list()[3], conv_num_outputs], stddev=0.05))\n F_b = tf.Variable(tf.zeros(conv_num_outputs))\n \n output = tf.nn.conv2d(x_tensor, F_W, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')\n output = tf.nn.bias_add(output, F_b)\n output = tf.nn.relu(output)\n output = tf.nn.max_pool(output, ksize=[1, pool_ksize[0], pool_ksize[1], 1], strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME')\n \n return output \n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)",
"Flatten Layer\nImplement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.",
"def flatten(x_tensor):\n \"\"\"\n Flatten x_tensor to (Batch Size, Flattened Image Size)\n : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n : return: A tensor of size (Batch Size, Flattened Image Size).\n \"\"\"\n return tf.contrib.layers.flatten(x_tensor)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)",
"Fully-Connected Layer\nImplement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.",
"def fully_conn(x_tensor, num_outputs):\n \"\"\"\n Apply a fully connected layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n return tf.contrib.layers.fully_connected(inputs=x_tensor, num_outputs=num_outputs, activation_fn=tf.nn.relu)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)",
"Output Layer\nImplement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.\nNote: Activation, softmax, or cross entropy should not be applied to this.",
"def output(x_tensor, num_outputs):\n \"\"\"\n Apply a output layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n return tf.contrib.layers.fully_connected(inputs=x_tensor, num_outputs=num_outputs, activation_fn=None)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)",
"Create Convolutional Model\nImplement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:\n\nApply 1, 2, or 3 Convolution and Max Pool layers\nApply a Flatten Layer\nApply 1, 2, or 3 Fully Connected Layers\nApply an Output Layer\nReturn the output\nApply TensorFlow's Dropout to one or more layers in the model using keep_prob.",
"def conv_net(x, keep_prob):\n \"\"\"\n Create a convolutional neural network model\n : x: Placeholder tensor that holds image data.\n : keep_prob: Placeholder tensor that hold dropout keep probability.\n : return: Tensor that represents logits\n \"\"\"\n # Function Definition from Above:\n # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) \n c_layer = conv2d_maxpool(x, 32, (8, 8), (1, 1), (4, 4), (2, 2)) \n c_layer = conv2d_maxpool(c_layer, 128, (4,4), (1,1), (4,4), (2,2))\n c_layer = conv2d_maxpool(c_layer, 512, (2,2), (1,1), (4,4), (2,2))\n c_layer = tf.nn.dropout(c_layer, keep_prob)\n\n # Function Definition from Above:\n # flatten(x_tensor)\n flat = flatten(c_layer)\n\n # Function Definition from Above:\n # fully_conn(x_tensor, num_outputs)\n fc_layer = fully_conn(flat, 512)\n fc_layer = tf.nn.dropout(fc_layer, keep_prob)\n fc_layer = fully_conn(flat, 128)\n fc_layer = tf.nn.dropout(fc_layer, keep_prob)\n fc_layer = fully_conn(flat, 32)\n \n # Function Definition from Above:\n # output(x_tensor, num_outputs)\n o_layer = output(fc_layer, 10)\n return o_layer\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = neural_net_image_input((32, 32, 3))\ny = neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# Name logits Tensor, so that is can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)",
"Train the Neural Network\nSingle Optimization\nImplement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:\n* x for image input\n* y for labels\n* keep_prob for keep probability for dropout\nThis function will be called for each batch, so tf.global_variables_initializer() has already been called.\nNote: Nothing needs to be returned. This function is only optimizing the neural network.",
"def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n \"\"\"\n Optimize the session on a batch of images and labels\n : session: Current TensorFlow session\n : optimizer: TensorFlow optimizer function\n : keep_probability: keep probability\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n \"\"\"\n session.run(optimizer, feed_dict={\n x: feature_batch,\n y: label_batch,\n keep_prob: keep_probability})\n pass\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)",
"Show Stats\nImplement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.",
"def print_stats(session, feature_batch, label_batch, cost, accuracy):\n \"\"\"\n Print information about loss and validation accuracy\n : session: Current TensorFlow session\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n : cost: TensorFlow cost function\n : accuracy: TensorFlow accuracy function\n \"\"\"\n loss = session.run(cost, feed_dict={ x: feature_batch, y: label_batch, keep_prob: 1.0})\n \n valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})\n \n print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))\n pass",
"Hyperparameters\nTune the following parameters:\n* Set epochs to the number of iterations until the network stops learning or start overfitting\n* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:\n * 64\n * 128\n * 256\n * ...\n* Set keep_probability to the probability of keeping a node using dropout",
"# TODO: Tune Parameters\nepochs = 15\nbatch_size = 512\nkeep_probability = .7",
"Train on a Single CIFAR-10 Batch\nInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)",
"Fully Train the Model\nNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)",
"Checkpoint\nThe model has been saved to disk.\nTest Model\nTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)\n\n\ntest_model()",
"Why 50-80% Accuracy?\nYou might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_image_classification.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
maciejkula/triplet_recommendations_keras | triplet_keras.ipynb | apache-2.0 | [
"Recommendations in Keras using triplet loss\nAlong the lines of BPR [1]. \n[1] Rendle, Steffen, et al. \"BPR: Bayesian personalized ranking from implicit feedback.\" Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence. AUAI Press, 2009.\nThis is implemented (more efficiently) in LightFM (https://github.com/lyst/lightfm). See the MovieLens example (https://github.com/lyst/lightfm/blob/master/examples/movielens/example.ipynb) for results comparable to this notebook.\nSet up the architecture\nA simple dense layer for both users and items: this is exactly equivalent to latent factor matrix when multiplied by binary user and item indices. There are three inputs: users, positive items, and negative items. In the triplet objective we try to make the positive item rank higher than the negative item for that user.\nBecause we want just one single embedding for the items, we use shared weights for the positive and negative item inputs (a siamese architecture).\nThis is all very simple but could be made arbitrarily complex, with more layers, conv layers and so on. I expect we'll be seeing a lot of papers doing just that.",
"\"\"\"\nTriplet loss network example for recommenders\n\"\"\"\n\nfrom __future__ import print_function\n\nimport numpy as np\n\nfrom keras import backend as K\nfrom keras.models import Model\nfrom keras.layers import Embedding, Flatten, Input, merge\nfrom keras.optimizers import Adam\n\nimport data\nimport metrics\n\n\ndef identity_loss(y_true, y_pred):\n\n return K.mean(y_pred - 0 * y_true)\n\n\ndef bpr_triplet_loss(X):\n\n positive_item_latent, negative_item_latent, user_latent = X\n\n # BPR loss\n loss = 1.0 - K.sigmoid(\n K.sum(user_latent * positive_item_latent, axis=-1, keepdims=True) -\n K.sum(user_latent * negative_item_latent, axis=-1, keepdims=True))\n\n return loss\n\n\ndef build_model(num_users, num_items, latent_dim):\n\n positive_item_input = Input((1, ), name='positive_item_input')\n negative_item_input = Input((1, ), name='negative_item_input')\n\n # Shared embedding layer for positive and negative items\n item_embedding_layer = Embedding(\n num_items, latent_dim, name='item_embedding', input_length=1)\n\n user_input = Input((1, ), name='user_input')\n\n positive_item_embedding = Flatten()(item_embedding_layer(\n positive_item_input))\n negative_item_embedding = Flatten()(item_embedding_layer(\n negative_item_input))\n user_embedding = Flatten()(Embedding(\n num_users, latent_dim, name='user_embedding', input_length=1)(\n user_input))\n\n loss = merge(\n [positive_item_embedding, negative_item_embedding, user_embedding],\n mode=bpr_triplet_loss,\n name='loss',\n output_shape=(1, ))\n\n model = Model(\n input=[positive_item_input, negative_item_input, user_input],\n output=loss)\n model.compile(loss=identity_loss, optimizer=Adam())\n\n return model",
"Load and transform data\nWe're going to load the Movielens 100k dataset and create triplets of (user, known positive item, randomly sampled negative item).\nThe success metric is AUC: in this case, the probability that a randomly chosen known positive item from the test set is ranked higher for a given user than a ranomly chosen negative item.",
"latent_dim = 100\nnum_epochs = 10\n\n# Read data\ntrain, test = data.get_movielens_data()\nnum_users, num_items = train.shape\n\n# Prepare the test triplets\ntest_uid, test_pid, test_nid = data.get_triplets(test)\n\nmodel = build_model(num_users, num_items, latent_dim)\n\n# Print the model structure\nprint(model.summary())\n\n# Sanity check, should be around 0.5\nprint('AUC before training %s' % metrics.full_auc(model, test))",
"Run the model\nRun for a couple of epochs, checking the AUC after every epoch.",
"for epoch in range(num_epochs):\n\n print('Epoch %s' % epoch)\n\n # Sample triplets from the training data\n uid, pid, nid = data.get_triplets(train)\n\n X = {\n 'user_input': uid,\n 'positive_item_input': pid,\n 'negative_item_input': nid\n }\n\n model.fit(X,\n np.ones(len(uid)),\n batch_size=64,\n nb_epoch=1,\n verbose=0,\n shuffle=True)\n\n print('AUC %s' % metrics.full_auc(model, test))",
"The AUC is in the low-90s. At some point we start overfitting, so it would be a good idea to stop early or add some regularization."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
biof-309-python/BIOF309-2016-Fall | Week_06/Week 06 - 02 - Conditionals.ipynb | mit | [
"Conditions\nSource: This material adapted from the Python for Biologists website.\nConditions, True and False\nA condition is simply a bit of code that can produce a true or false answer. The easiest way to understand how conditions work in Python is try out a few examples. The following example prints out the result of testing (or evaluating) a bunch of different conditions – some mathematical examples, some using string methods, and one for testing if a value is included in a list:",
"print(3 == 5)\n\nprint(3 > 5)\n\nprint(3 <= 5)\n\nprint(len(\"ATGC\") > 5)\n\nprint(\"GAATTC\".count(\"T\") > 1)\n\nprint(\"ATGCTT\".startswith(\"ATG\"))\n\nprint(\"ATGCTT\".endswith(\"TTT\"))\n\nprint(\"ATGCTT\".isupper())\n\nprint(\"ATGCTT\".islower())\n\nprint(\"V\" in [\"V\", \"W\", \"L\"])",
"But what’s actually being printed here? At first glance, it looks like we’re printing the strings “True” and “False”, but those strings don’t appear anywhere in our code. What is actually being printed is the special built-in values that Python uses to represent true and false – they are capitalized so that we know they’re these special values.\nWe can show that these values are special by trying to print them. The following code runs without errors (note the absence of quotation marks):",
"print(True)\n\nprint(False)",
"There’s a wide range of things that we can include in conditions, and it would be impossible to give an exhaustive list here. The basic building blocks are:\n\nequals (represented by ==)\ngreater and less than (represented by > and <)\ngreater and less than or equal to (represented by >= and <=)\nnot equal (represented by !=)\nis a value in a list (represented by in)\n\nare two objects the same (represented by is).\nNotice that the test for equality is two equals signs, not one. Forgetting the second equals sign will cause an error.\nNow that we know how to express tests as conditions, let’s see what we can do with them.\nif statements\nThe simplest kind of conditional statement is an if statement. Hopefully the syntax is fairly simple to understand:",
"expression_level = 125\nif expression_level > 100:\n print(\"gene is highly expressed\")",
"We write the word if, followed by a condition, and end the first line with a colon. There follows a block of indented lines of code (the body of the if statement), which will only be executed if the condition is true. This colon-plus-block pattern should be familiar to you from the sections on loops and functions.\nMost of the time, we want to use an if statement to test a property of some variable whose value we don’t know at the time when we are writing the program. The example above is obviously useless, as the value of the expression_level variable is not going to change!\nHere’s a slightly more interesting example: we’ll define a list of gene accession names and print out just the ones that start with “a”:",
"accs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']\nfor accession in accs:\n if accession.startswith('a'):\n print(accession)",
"If you take a close look at the code above, you’ll see something interesting – the lines of code inside the loop are indented (just as we’ve seen before), but the line of code inside the if statement is indented twice – once for the loop, and once for the if. This is the first time we’ve seen multiple levels of indentation, but it’s very common once we start working with larger programs – whenever we have one loop or if statement nested inside another, we’ll have this type of indentation.\nPython is quite happy to have as many levels of indentation as needed, but you’ll need to keep careful track of which lines of code belong at which level. If you find yourself writing a piece of code that requires more than three levels of indentation, it’s generally an indication that that piece of code should be turned into a function.\nelse statements\nClosely related to the if statement is the else statement. The examples above use a yes/no type of decision-making: should we print the gene accession number or not? Often we need an either/or type of decision, where we have two possible actions to take. To do this, we can add on an else clause after the end of the body of an if statement:",
"expression_level = 125\nif expression_level > 100:\n print(\"gene is highly expressed\")\nelse:\n print(\"gene is lowly expressed\")",
"The else statement doesn’t have any condition of its own – rather, the else statement body is execute when the if statement to which it’s attached is not executed.\nHere’s an example which uses if and else to split up a list of accession names into two different files – accessions that start with “a” go into the first file, and all other accessions go into the second file:",
"file1 = open(\"a_accessions.txt\", \"w\")\nfile2 = open(\"other_accessions.txt\", \"w\")\naccs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']\nfor accession in accs:\n if accession.startswith('a'):\n file1.write(accession + \"\\n\")\n else:\n file2.write(accession + \"\\n\")",
"Notice how there are multiple indentation levels as before, but that the if and else statements are at the same level.\nelif statements\nWhat if we have more than two possible branches? For example, say we want three files of accession names: ones that start with “a”, ones that start with “b”, and all others. We could have a second if statement nested inside the else clause of the first if statement:",
"file1 = open(\"a_accessions.txt\", \"w\")\nfile2 = open(\"b_accessions.txt\", \"w\")\nfile3 = open(\"other_accessions.txt\", \"w\")\n\naccs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']\n\nfor accession in accs:\n if accession.startswith('a'):\n file1.write(accession + \"\\n\")\n else:\n if accession.startswith('b'):\n file2.write(accession + \"\\n\")\n else:\n file3.write(accession + \"\\n\")",
"This works, but is difficult to read – we can quickly see that we need an extra level of indentation for every additional choice we want to include. To get round this, Python has an elif statement, which merges together else and if and allows us to rewrite the above example in a much more elegant way:",
"file1 = open(\"a_accessions.txt\", \"w\")\nfile2 = open(\"b_accessions.txt\", \"w\")\nfile3 = open(\"other_accessions.txt\", \"w\")\n\naccs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']\n\nfor accession in accs:\n if accession.startswith('a'):\n file1.write(accession + \"\\n\")\n elif accession.startswith('b'):\n file2.write(accession + \"\\n\")\n else:\n file3.write(accession + \"\\n\")",
"Notice how this version of the code only needs two levels of indention. In fact, using elif we can have any number of branches and still only require a single extra level of indentation:",
"for accession in accs:\n if accession.startswith('a'):\n file1.write(accession + \"\\n\")\n elif accession.startswith('b'):\n file2.write(accession + \"\\n\")\n elif accession.startswith('c'):\n file3.write(accession + \"\\n\")\n elif accession.startswith('d'):\n file4.write(accession + \"\\n\")\n elif accession.startswith('e'):\n file5.write(accession + \"\\n\")\n else:\n file6.write(accession + \"\\n\")",
"Another way of handling complex decision branches like this – especially useful when dealing with validation and errors – is using exceptions, which have their own chapter in Advanced Python for Biologists.\nwhile loops\nHere’s one final thing we can do with conditions: use them to determine when to exit a loop. In section 4 we learned about loops that iterate over a collection of items (like a list, a string or a file). Python has another type of loop called a while loop. Rather than running a set number of times, a while loop runs until some condition is met. For example, here’s a bit of code that increments a count variable by one each time round the loop, stopping when the count variable reaches ten:",
"count = 0\nwhile count<10:\n print(count)\n count = count + 1",
"Because normal loops in Python are so powerful2 , while loops are used much less frequently than in other languages, so we won’t discuss them further.\nBuilding up complex conditions\nWhat if we wanted to express a condition that was made up of several parts? Imagine we want to go through our list of accessions and print out only the ones that start with “a” and end with “3”. We could use two nested if statements:",
"accs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']\n\nfor accession in accs:\n if accession.startswith('a'):\n if accession.endswith('3'):\n print(accession)",
"but this brings in an extra, unneeded level of indention. A better way is to join up the two condition with and to make a complex expression:",
"accs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']\n\nfor accession in accs:\n if accession.startswith('a') and accession.endswith('3'):\n print(accession)",
"This version is nicer in two ways: it doesn’t require the extra level of indentation, and the condition reads in a very natural way. We can also use or to join up two conditions, to produce a complex condition that will be true if either of the two simple conditions are true:",
"accs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']\n\nfor accession in accs:\n if accession.startswith('a') or accession.startswith('b'):\n print(accession)",
"We can even join up complex conditions to make more complex conditions – here’s an example which prints accessions if they start with either “a” or “b” and end with “4”:",
"accs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']\n\nfor acc in accs:\n if (acc.startswith('a') or acc.startswith('b')) and acc.endswith('4'):\n print(acc)",
"Notice how we have to include parentheses in the above example to avoid ambiguity. Finally, we can negate any type of condition by prefixing it with the word not. This example will print out accessions that start with “a” and don’t end with 6:",
"accs = ['ab56', 'bh84', 'hv76', 'ay93', 'ap97', 'bd72']\n\nfor acc in accs:\n if acc.startswith('a') and not acc.endswith('6'):\n print(acc)",
"By using a combination of and, or and not (along with parentheses where necessary) we can build up arbitrarily complex conditions. This kind of use for conditions – identifying elements in a list – can often be done better using either the filter function, or a list comprehension.\nThese three words are collectively known as boolean operators and crop up in a lot of places. For example, if you wanted to search for information on using Python in biology, but didn’t want to see pages that talked about biology of snakes, you might do a search for “biology python -snake“. This is actually a complex condition just like the ones above – Google automatically adds and between words, and uses the hyphen to mean not. So you’re asking for pages that mention python and biology but not snakes.\nWriting true/false functions\nSometimes we want to write a function that can be used in a condition. This is very easy to do – we just make sure that our function always returns either True or False. Remember that True and False are built-in values in Python, so they can be passed around, stored in variables, and returned, just like numbers or strings.\nHere’s a function that determines whether or not a DNA sequence is AT-rich (we’ll say that a sequence is AT-rich if it has an AT content of more than 0.65):",
"def is_at_rich(dna):\n length = len(dna)\n a_count = dna.upper().count('A')\n t_count = dna.upper().count('T')\n at_content = (a_count + t_count) / length\n if at_content > 0.65:\n return True\n else:\n return False",
"We’ll test this function on a few sequences to see if it works:",
"print(is_at_rich(\"ATTATCTACTA\"))\nprint(is_at_rich(\"CGGCAGCGCT\"))",
"The output shows that the function returns True or False just like the other conditions we’ve been looking at:\nTrue\nFalse\n\nTherefore we can use our function in an if statement:",
"if is_at_rich(my_dna):\n # do something with the sequence",
"Because the last four lines of our function are devoted to evaluating a condition and returning True or False, we can write a slightly more compact version. In this example we evaluate the condition, and then return the result right away:",
"def is_at_rich(dna):\n length = len(dna)\n a_count = dna.upper().count('A')\n t_count = dna.upper().count('T')\n at_content = (a_count + t_count) / length\n return at_content > 0.65",
"This is a little more concise, and also easier to read once you’re familiar with the idiom.\nRecap\nIn this short section, we’ve dealt with two things: conditions, and the statements that use them.\nWe’ve seen how simple conditions can be joined together to make more complex ones, and how the concepts of truth and falsehood are built in to Python on a fundamental level. We’ve also seen how we can incorporate True and False in our own functions in a way that allows them to be used as part of conditions.\nWe’ve been introduced to four different tools that use conditions – if, else, elif, and while – in approximate order of usefulness. You’ll probably find, in the programs that you write and in your solutions to the exercises in this book, that you use if and else very frequently, elif occasionally, and while almost never."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
CalPolyPat/phys202-2015-work | assignments/assignment03/NumpyEx04.ipynb | mit | [
"Numpy Exercise 4\nImports",
"import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"Complete graph Laplacian\nIn discrete mathematics a Graph is a set of vertices or nodes that are connected to each other by edges or lines. If those edges don't have directionality, the graph is said to be undirected. Graphs are used to model social and communications networks (Twitter, Facebook, Internet) as well as natural systems such as molecules.\nA Complete Graph, $K_n$ on $n$ nodes has an edge that connects each node to every other node.\nHere is $K_5$:",
"import networkx as nx\nK_5=nx.complete_graph(5)\nnx.draw(K_5)",
"The Laplacian Matrix is a matrix that is extremely important in graph theory and numerical analysis. It is defined as $L=D-A$. Where $D$ is the degree matrix and $A$ is the adjecency matrix. For the purpose of this problem you don't need to understand the details of these matrices, although their definitions are relatively simple.\nThe degree matrix for $K_n$ is an $n \\times n$ diagonal matrix with the value $n-1$ along the diagonal and zeros everywhere else. Write a function to compute the degree matrix for $K_n$ using NumPy.",
"def complete_deg(n):\n return (n-1)*np.identity(n, dtype=int)\n\nD = complete_deg(5)\nassert D.shape==(5, 5)\nassert D.dtype==np.dtype(int)\nassert np.all(D.diagonal()==4*np.ones(5))\nassert np.all(D-np.diag(D.diagonal())==np.zeros((5,5),dtype=int))",
"The adjacency matrix for $K_n$ is an $n \\times n$ matrix with zeros along the diagonal and ones everywhere else. Write a function to compute the adjacency matrix for $K_n$ using NumPy.",
"def complete_adj(n):\n return np.ones((n,n), dtype=int)-np.identity(n, dtype=int)\n \n\nA = complete_adj(5)\nassert A.shape==(5,5)\nassert A.dtype==np.dtype(int)\nassert np.all(A+np.eye(5,dtype=int)==np.ones((5,5),dtype=int))",
"Use NumPy to explore the eigenvalues or spectrum of the Laplacian L of $K_n$. What patterns do you notice as $n$ changes? Create a conjecture about the general Laplace spectrum of $K_n$.",
"def L(n): return complete_deg(n)-complete_adj(n)\nsmalleig = np.empty((100,))\nfor n in np.arange(2,100):\n lap = L(n) \n eig = np.linalg.eigvals(lap)\n np.append(smalleig, np.min(eig))\nplt.plot(np.arange(100), smalleig)",
"The smallest eigenvalues of all the laplacians of $K_n$ are equal to n, the order of the graph."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
survey-methods/samplics | docs/source/tutorial/replicate_weights.ipynb | mit | [
"Replicate weights\nReplicate weights are usually created for the purpose of variance (uncertainty) estimation. One common use case for replication-based methods is the estimation of non-linear parameters fow which Taylor-based approximation may not be accurate enough. Another use case is when the number of primary sampling units selected per stratum is small (low degree of freedom). Replicate weights are usually created for the purpose of variance (uncertainty)estimation. One common use case for replication-based methods is the estimation of non-linear parameters fow which Taylor-based approximation may not be accurate enough. Another use case is when the number of primary sampling units selected per stratum is small (low degree of freedom). \nIn this tutorial, we will explore creating replicate weights using the class ReplicateWeight. Three replication methods have been implemented: balanced repeated replication (BRR) including the Fay-BRR, bootstrap and jackknife. The replicate method of interest is specified when initializing the class by using the parameter method. The parameter method takes the values \"bootstrap\", \"brr\", or \"jackknife\". In this tutorial, we show how the API works for producing replicate weights.",
"import numpy as np\nimport pandas as pd\n\nimport samplics\nfrom samplics.datasets import PSUSample, SSUSample\nfrom samplics.weighting import ReplicateWeight",
"We import the sample data...",
"# Load PSU sample data\npsu_sample_cls = PSUSample()\npsu_sample_cls.load_data()\npsu_sample = psu_sample_cls.data\n\n# Load PSU sample data\nssu_sample_cls = SSUSample()\nssu_sample_cls.load_data()\nssu_sample = ssu_sample_cls.data\n\nfull_sample = pd.merge(\n psu_sample[[\"cluster\", \"region\", \"psu_prob\"]], \n ssu_sample[[\"cluster\", \"household\", \"ssu_prob\"]], \n on=\"cluster\")\n\nfull_sample[\"inclusion_prob\"] = full_sample[\"psu_prob\"] * full_sample[\"ssu_prob\"] \nfull_sample[\"design_weight\"] = 1 / full_sample[\"inclusion_prob\"] \n\nfull_sample.head(15)",
"Balanced Repeated Replication (BRR) <a name=\"section1\"></a>\nThe basic idea of BRR is to slip the sample in independent random groups. The groups are then threated as independent replicates of the the sample design. A special case is when the sample is split into two half samples in each stratum. This design is suitable to many survey designs where only two psus are selected by stratum. In practice, one of the psu is asigned to the first random group and the other psu is assign to the second group. The sample weights are double for one group (say the first one) and the sample weights in the other group are set to zero. To ensure that the replicates are independent, we use hadamard matrices to assign the random groups.",
"import scipy\nscipy.linalg.hadamard(8)",
"In our example, we have 10 psus. If we do not have explicit stratification then replicate() will group the clusters into 5 strata (2 per stratum). In this case, the smallest number of replicates possible using the hadamard matrix is 8. \nThe result below shows that replicate() created 5 strata by grouping clusters 7 and 10 in the first stratum, clusters 16 and 24 in the second stratum, and so on. We can achieve the same result by providing setting stratification=True and providing the stratum variable to replicate().",
"brr = ReplicateWeight(method=\"brr\", stratification=False)\nbrr_wgt = brr.replicate(full_sample[\"design_weight\"], full_sample[\"cluster\"])\n\nbrr_wgt.drop_duplicates().head(10)",
"An extension of BRR is the Fay's method. In the Fay's approach, instead of multiplying one half-sample by zero, we multiple the sampel weights by a factor $\\alpha$ and the other halh-sample by $2-\\alpha$. We refer to $\\alpha$ as the fay coefficient. Note that when $\\alpha=0$ then teh Fay's method reduces to BRR.",
"fay = ReplicateWeight(method=\"brr\", stratification=False, fay_coef=0.3)\nfay_wgt = fay.replicate(\n full_sample[\"design_weight\"], \n full_sample[\"cluster\"], \n rep_prefix=\"fay_weight_\",\n psu_varname=\"cluster\", \n str_varname=\"stratum\"\n)\n\nfay_wgt.drop_duplicates().head(10)",
"Bootstrap <a name=\"section2\"></a>\nFor the bootstrap replicates, we need to provide the number of replicates. When the number of replicates is not provided, ReplicateWeight will default to 500. The bootstrap consists of selecting the same number of psus as in the sample but with replacement. The selection is independently repeated for each replicate.",
"bootstrap = ReplicateWeight(method=\"bootstrap\", stratification=False, number_reps=50)\nboot_wgt = bootstrap.replicate(full_sample[\"design_weight\"], full_sample[\"cluster\"])\n\nboot_wgt.drop_duplicates().head(10)",
"Jackknife <a name=\"section3\"></a>\nBelow, we illustrate the API for creating replicate weights using the jackknife method.",
"jackknife = ReplicateWeight(method=\"jackknife\", stratification=False)\njkn_wgt = jackknife.replicate(full_sample[\"design_weight\"], full_sample[\"cluster\"])\n\njkn_wgt.drop_duplicates().head(10)",
"With stratification...",
"jackknife = ReplicateWeight(method=\"jackknife\", stratification=True)\njkn_wgt = jackknife.replicate(full_sample[\"design_weight\"], full_sample[\"cluster\"], full_sample[\"region\"])\n\njkn_wgt.drop_duplicates().head(10)",
"Important. For any of the three methods, we can request the replicate coefficient instead of the replicate weights by rep_coefs=True.",
"#jackknife = ReplicateWeight(method=\"jackknife\", stratification=True)\njkn_wgt = jackknife.replicate(\n full_sample[\"design_weight\"], full_sample[\"cluster\"], full_sample[\"region\"], rep_coefs=True\n)\n\njkn_wgt.drop_duplicates().sort_values(by=\"_stratum\").head(15)\n\n#fay = ReplicateWeight(method=\"brr\", stratification=False, fay_coef=0.3)\nfay_wgt = fay.replicate(\n full_sample[\"design_weight\"], \n full_sample[\"cluster\"], \n rep_prefix=\"fay_weight_\",\n psu_varname=\"cluster\", \n str_varname=\"stratum\",\n rep_coefs=True\n)\n\nfay_wgt.drop_duplicates().head(10)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
basnijholt/orbitalfield | Phase-diagrams.ipynb | bsd-2-clause | [
"Phase diagram for multiple angles\nStart a ipcluster from the Cluster tab in Jupyter or use the command:\nipcluster start \nin a terminal.",
"from ipyparallel import Client\ncluster = Client()\ndview = cluster[:]\ndview.use_dill()\nlview = cluster.load_balanced_view()\nlen(dview)",
"This next cell is for internal use with our cluster at the department, a local ipcluster will work: use the cell above.",
"# import os\n# from scripts.hpc05 import HPC05Client\n# os.environ['SSH_AUTH_SOCK'] = os.path.join(os.path.expanduser('~'), 'ssh-agent.socket')\n# cluster = HPC05Client()",
"Make sure to add the correct path like:\nsys.path.append(\"/path/where/to/ipynb/runs\")",
"%%px --local\nimport sys\nimport os\n# CHANGE THE LINE BELOW INTO THE CORRECT FOLDER!\nsys.path.append(os.path.join(os.path.expanduser('~'), 'orbitalfield'))\nimport kwant\nimport numpy as np\nfrom fun import *\n\ndef gap_and_decay(lead, p, val, tol=1e-4):\n gap = find_gap(lead, p, val, tol)\n decay_length = find_decay_length(lead, p, val)\n return gap, decay_length\n\nimport holoviews as hv\nimport holoviews_rc\nhv.notebook_extension()",
"Uncomment the lines for the wire that you want to use.",
"%%px --local\n# angle = 0 # WIRE WITH SC ON TOP\n\nangle = 45 # WIRE WITH SC ON SIDE\np = make_params(t_interface=7/8*constants.t, Delta=68.4, r1=50, r2=70, \n orbital=True, angle=angle, A_correction=True, alpha=100) #r2=70\n\np.V = lambda x, y, z: 2 / 50 * z\nlead = make_3d_wire_external_sc(a=constants.a, r1=p.r1, r2=p.r2, angle=p.angle)\n\n# WIRE WITH CONSTANT GAP\n# lead = make_3d_wire()\n# p = make_params(V=lambda x, y, z: 0, orbital=True)",
"You can specify the angles that you want to calculate in thetas and phis.\nAlso specify the range of magnetic field and chemical potential in Bs and mu_mesh.",
"# give an array of angles that you want to use\n\n# thetas = np.array([0, np.tan(1/10), 0.5 * np.pi - np.tan(1/10), 0.5 * np.pi])\n# phis = np.array([0, np.tan(1/10), 0.5 * np.pi - np.tan(1/10), 0.5 * np.pi])\n\nthetas = np.array([0.5 * np.pi])\nphis = np.array([0])\n\n# the range of magnetic field and chemical potential\nBs = np.linspace(0, 2, 400)\nmu_mesh = np.linspace(0, 35, 400)\n\n# creates a 3D array with all values of magnetic field for all specified angles\npos = spherical_coords(Bs.reshape(-1, 1, 1), thetas.reshape(1, -1, 1), phis.reshape(1, 1, -1))\npos_vec = pos.reshape(-1, 3)\n\nmus_output = lview.map_sync(lambda B: find_phase_bounds(lead, p, B, num_bands=40), pos_vec)\nmus, vals, mask = create_mask(Bs, thetas, phis, mu_mesh, mus_output)\n\nN = len(vals)\nstep = N // (len(phis) * len(thetas))\nprint(N, step)",
"Check whether the correct angles were used and see the phase boundaries",
"import holoviews_rc\nfrom itertools import product\nfrom math import pi\n\nkwargs = {'kdims': [dimensions.B, dimensions.mu],\n 'extents': bnds(Bs, mu_mesh),\n 'label': 'Topological boundaries',\n 'group': 'Lines'}\n\nangles = list(product(enumerate(phis), enumerate(thetas)))\n\nboundaries = {(theta / pi, phi / pi): hv.Path((Bs, mus[i, j, :, ::2]), **kwargs)\n for (i, phi), (j, theta) in angles}\n\nBlochSpherePlot.bgcolor = 'white'\n\nsphere = {(theta / pi, phi / pi): BlochSphere([[1, 0, 0], spherical_coords(1, theta, phi)], group='Sphere')\n for (i, phi), (j, theta) in angles}\n\nhv.HoloMap(boundaries, **dimensions.angles) + hv.HoloMap(sphere, **dimensions.angles)",
"Calculate full phase diagram\nMake sure tempdata exists in the current folder. \nSet full_phase_diagram to False if you only want the band gap in the non-trivial region or True if you want it in the whole Bs, mu_mesh range.",
"full_phase_diagram = False",
"The next cell calculates the gaps and decay lengths.\nYou can stop and rerun the code, it will skip over the files that already exist.\nMake sure the folder tempdata/ exists.",
"import os.path\nimport sys\n\nfname_list = []\nfor i, n in enumerate(range(0, N, step)):\n fname = \"tempdata/\" + str(n)+\"-\"+str((i+1)*step)+\".dat\"\n fname_list.append(fname)\n \n if not os.path.isfile(fname): # check if file already exists\n lview.results.clear()\n cluster.results.clear()\n cluster.metadata.clear()\n print(fname)\n sys.stdout.flush()\n if full_phase_diagram:\n gaps_and_decays_output = lview.map_async(lambda val: gap_and_decay(lead, p, val[:-1] + (True,)), vals[n:(i+1) * step])\n else:\n gaps_and_decays_output = lview.map_async(lambda val: gap_and_decay(lead, p, val), vals[n:(i+1) * step])\n gaps_and_decays_output.wait_interactive()\n np.savetxt(fname, gaps_and_decays_output.result())\n print(n, (i+1) * step)\ncluster.shutdown(hub=True)\n\ngaps_and_decay_output = np.vstack([np.loadtxt(fname) for fname in fname_list])\ngaps_output, decay_length_output = np.array(gaps_and_decay_output).T\n\ngaps = np.array(gaps_output).reshape(mask.shape)\ngaps[1:, 0] = gaps[0, 0]\n\ndecay_lengths = np.array(decay_length_output).reshape(mask.shape)\ndecay_lengths[1:, 0] = decay_lengths[0, 0]\n\nif full_phase_diagram:\n gaps = gaps*(mask*2 - 1)\n decay_lengths = decay_lengths*(mask*2 - 1)\n gaps_output = gaps.reshape(-1)\n decay_length_output = decay_lengths.reshape(-1)",
"Save\nRun this function to save the data to hdf5 format, it will include all data and parameters that are used in the simulation.",
"fname = 'data/test.h5'\nsave_data(fname, Bs, thetas, phis, mu_mesh, mus_output, gaps_output, decay_length_output, p, constants)",
"Check how the phase diagram looks\nThis will show all data.",
"%%output size=200\n%%opts Image [colorbar=False] {+axiswise} (clims=(0, 0.1))\nphase_diagram = create_holoviews(fname)\n\n(phase_diagram.Phase_diagram.Band_gap.hist()\n + phase_diagram.Phase_diagram.Inverse_decay_length \n + phase_diagram.Sphere.I).cols(2)\n\n%%opts Image [colorbar=True]\nphase_diagram.Phase_diagram.Band_gap\n\nphase_diagram.cdims"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
wheeler-microfluidics/mr-box-peripheral-board.py | mr_box_peripheral_board/notebooks/Streaming plot demo.ipynb | mit | [
"Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Embedded-in-Jupyter-notebook\" data-toc-modified-id=\"Embedded-in-Jupyter-notebook-1\"><span class=\"toc-item-num\">1 </span>Embedded in Jupyter notebook</a></div><div class=\"lev1 toc-item\"><a href=\"#Using-GTK\" data-toc-modified-id=\"Using-GTK-2\"><span class=\"toc-item-num\">2 </span>Using GTK</a></div><div class=\"lev2 toc-item\"><a href=\"#Example-of-how-to-compress-bytes-(e.g.,-JSON)-to-bzip2\" data-toc-modified-id=\"Example-of-how-to-compress-bytes-(e.g.,-JSON)-to-bzip2-21\"><span class=\"toc-item-num\">2.1 </span>Example of how to compress bytes (e.g., JSON) to bzip2</a></div>\n\n# Embedded in Jupyter notebook",
"%matplotlib notebook\n\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n\nimport time\nimport threading\n\nimport ipywidgets as ipw\nimport numpy as np\nimport pandas as pd\n\n\nfig, axis = plt.subplots()\n\naxis._get_lines()\nstop_event = threading.Event()\nnp.random.seed(0)\ndata = []\n\n\ndef _draw():\n while True:\n if data:\n pd.Series(np.concatenate(data)).plot(ax=axis)\n fig.canvas.show()\n data.append(np.random.rand(10))\n if stop_event.wait(.5):\n break\n stop_event.clear()\n start.disabled = False\n stop.disabled = True\n \n \ndef _start(*args):\n start.disabled = True\n stop.disabled = False\n thread = threading.Thread(target=_draw)\n thread.daemon = True\n thread.start()\n\nstart = ipw.Button(description='Start')\nstart.on_click(_start)\nstop = ipw.Button(description='Stop')\nstop.on_click(lambda *args: stop_event.set())\nclear = ipw.Button(description='Clear')\ndef _clear(*args):\n axis.cla()\n for i in xrange(len(data)):\n data.pop()\nclear.on_click(_clear)\n\nwidget = ipw.HBox([start, stop, clear])\nwidget",
"Using GTK",
"import gtk\nimport gobject\nimport threading\nimport datetime as dt\n\nimport matplotlib as mpl\nimport matplotlib.style\nimport numpy as np\nimport pandas as pd\n\nfrom mr_box_peripheral_board.ui.gtk.streaming_plot import StreamingPlot\n\n\ndef _generate_data(stop_event, data_ready, data):\n delta_t = dt.timedelta(seconds=.1)\n samples_per_plot = 5\n\n while True:\n time_0 = dt.datetime.now()\n values_i = np.random.rand(samples_per_plot)\n absolute_times_i = pd.Series([time_0 + i * delta_t\n for i in xrange(len(values_i))])\n data_i = pd.Series(values_i, index=absolute_times_i)\n data.append(data_i)\n data_ready.set()\n if stop_event.wait(samples_per_plot *\n delta_t.total_seconds()):\n break\n \nwith mpl.style.context('seaborn',\n {'image.cmap': 'gray',\n 'image.interpolation' : 'none'}):\n win = gtk.Window()\n win.set_default_size(800, 600)\n view = StreamingPlot(data_func=_generate_data)\n win.add(view.widget)\n win.connect('check-resize', lambda *args: view.on_resize())\n win.set_position(gtk.WIN_POS_MOUSE)\n win.show_all()\n view.fig.tight_layout()\n win.connect('destroy', gtk.main_quit)\n gobject.idle_add(view.start)\n \n def auto_close(*args):\n if not view.stop_event.is_set():\n # User did not explicitly pause the measurement. Automatically\n # close the measurement and continue.\n win.destroy()\n gobject.timeout_add(5000, auto_close)\n \n measurement_complete = threading.Event()\n \n view.widget.connect('destroy', lambda *args: measurement_complete.set())\n\n gtk.gdk.threads_init()\n gtk.gdk.threads_enter()\n gtk.main()\n gtk.gdk.threads_leave()\n \n print measurement_complete.wait()",
"Example of how to compress bytes (e.g., JSON) to bzip2",
"from IPython.display import display\nimport bz2\n\n\ndata = pd.concat(view.data)\ndata_json = data.to_json()\ndata_json_bz2 = bz2.compress(data_json)\ndata_from_json = pd.read_json(bz2.decompress(data_json_bz2), typ='series')\nlen(data_json), len(data_json_bz2)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
quantumlib/ReCirq | docs/qaoa/binary_paintshop.ipynb | apache-2.0 | [
"Copyright 2021 The Cirq Developers",
"# @title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Binary Paintshop Problem with Quantum Approximate Optimization Algorithm\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://quantumai.google/cirq/experiments/qaoa/binary_paintshop>\"><img src=\"https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png\" />View on QuantumAI</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/quantumlib/ReCirq/blob/master/docs/qaoa/binary_paintshop.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/quantumlib/ReCirq/blob/master/docs/qaoa/binary_paintshop\"><img src=\"https://quantumai.google/site-assets/images/buttons/github_logo_1x.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/ReCirq/docs/qaoa/binary_paintshop\"><img src=\"https://quantumai.google/site-assets/images/buttons/download_icon_1x.png\" />Download notebook</a>\n </td>\n</table>",
"from typing import Sequence, Tuple\nimport numpy as np\n\ntry:\n import cirq\nexcept ImportError:\n print(\"installing cirq...\")\n !pip install --quiet cirq\n print(\"installed cirq.\")\n import cirq\n\nimport cirq_ionq as ionq",
"Binary Paintshop Problem\nAssume an automotive paint shop and a random, but fixed sequence of 2*n cars. Each car has a identical partner that only differs in the color it has to be painted.",
"CAR_PAIR_COUNT = 10\ncar_sequence = np.random.permutation([x for x in range(CAR_PAIR_COUNT)] * 2)\nprint(car_sequence)",
"The task is to paint the cars such that in the end for every pair of cars one is painted in red and the other in blue. The objective of the following minimization procedure is to minimize the number of color changes in the paintshop.",
"def color_changes(paint_bitstring: Sequence[int], car_sequence: Sequence[int]) -> int:\n \"\"\"Count the number of times the color changes if the robots\n paint each car in car_sequence according to paint_bitstring,\n which notes the color for the first car in each pair.\n\n Args:\n paint_bitstring: A sequence that determines the color to\n paint the first car in pair i. For example, 0 for blue\n and nonzero for red.\n car_sequence: A sequence that determines which cars are\n paired together\n\n Returns:\n Count of the number of times the robots change the color\n \"\"\"\n color_sequence = []\n painted_once = set()\n for car in car_sequence:\n if car in painted_once:\n # paint the other color for the second car in the pair\n color_sequence.append(not paint_bitstring[car])\n else:\n # paint the noted color for the first car in the pair\n color_sequence.append(paint_bitstring[car])\n painted_once.add(car)\n paint_change_counter = 0\n # count the number of times two adjacent cars differ in color\n for color0, color1 in zip(color_sequence, color_sequence[1:]):\n if color0 != color1:\n paint_change_counter += 1\n return paint_change_counter",
"If two consecutive cars in the sequence are painted in different colors the robots have to rinse the old color, clean the nozzles and flush in the new color. This color change procedure costs time, paint, water and ultimately costs money, which is why we want to minimize the number of color changes. However, a rearrangement of the car sequence is not at our disposal (because of restrictions that are posed by the remainig manufacturing processes), but we can decide once we reach the first car of each car pair which color to paint the pair first. When we have chosen the color for the first car the other car has to be painted in the other respective color. Obvious generalizations exist, for example more than two colors and groups of cars with more than 2 cars where it is permissible to exchange colors, however for demonstration purposes it suffices to consider the here presented binary version of the paintshop problem. It is NP-hard to solve the binary paintshop problem exactly as well as approximately with an arbitrary performance guarantee. A performance guarantee in this context would be a proof that an approximation algorithm never gives us a solution with a number of color changes that is more than some factor times the optimal number of color changes. This is the situation where substantial quantum speedup can be assumed (c.f. Quantum Computing in the NISQ era and beyond). The quantum algorithm presented here can deliver, on average, better solutions than all polynomial runtime heuristics specifically developed for the paintshop problem in constant time (constant query complexity) (c.f. Beating classical heuristics for the binary paint shop problem with the quantum approximate optimization algorithm).\nSpin Glass\nTo be able to solve the binary paintshop problem with the Quantum Approximate Optimization Algorithm (QAOA) we need to translate the problem to a spin glass problem. Interestingly, that is possible with no spatial overhead, i.e. the spin glass has as many spins as the sequence has car pairs. The state of every spin represents the color we paint the respective first car in the seqence of every car pair. Every second car is painted with the repsective other color. The interactions of the spin glass can be deduced proceeding through the fixed car sequence: If two cars are adjacent to each other and both of them are either the first or the second car in their respective car pairs we can add a ferromagnetic interaction to the spin glass in order to penalize the color change between these two cars. If two cars are next to each other and one of the cars is the first and the other the second in their respective car pairs we have to add a antiferromagnetic interaction to the spin glass in order to penalize the color change because in this case the color for the car that is the second car in its car pair is exactly the opposite. All color changes in the car sequence are equivalent which is why we have equal magnitude ferromagnetic and antiferromagnetic interactions and additionally we choose unit magnitude interactions.",
"def spin_glass(car_sequence: Sequence[int]) -> Sequence[Tuple[int, int, int]]:\n \"\"\"Assign interactions between adjacent cars.\n\n Assign a ferromagnetic(1) interaction if both elements of the pair are\n the first/second in their respective pairs. Otheriwse, assign an antiferromagnetic(-1)\n interaction. Yield a tuple with the two paired cars followed by the\n chosen interaction.\n \"\"\"\n ferromagnetic = -1\n antiferromagnetic = 1\n appeared_already = set()\n for car0, car1 in zip(car_sequence, car_sequence[1:]):\n if car0 == car1:\n continue\n if car0 in appeared_already:\n appeared_already.add(car0)\n if car1 in appeared_already:\n yield car0, car1, ferromagnetic\n else:\n yield car0, car1, antiferromagnetic\n else:\n appeared_already.add(car0)\n if car1 in appeared_already:\n yield car0, car1, antiferromagnetic\n else:\n yield car0, car1, ferromagnetic",
"Quantum Approximate Optimization Algorithm\nWe want to execute a one block version of the QAOA circuit for the binary\npaintshop instance with p = 1 on a trapped-ion\nquantum computer of IonQ. This device is composed of 11 fully connected qubits with average single- and two-qubit fidelities of 99.5% and 97.5% respectively (Benchmarking an 11-qubit quantum computer).\nAs most available quantum hardware, trapped ion\nquantum computers only allow the application of gates\nfrom a restricted native gate set predetermined by the\nphysics of the quantum processor. To execute an arbitrary gate, compilation of the desired gate into available gates is required. For trapped ions, a generic native\ngate set consists of a parameterized two-qubit rotation, the Molmer Sorensen gate,\n$R_\\mathrm{XX}(\\alpha)=\\mathrm{exp}[-\\mathrm{i}\\alpha \\sigma_\\mathrm{x}^{(i)}\\sigma_\\mathrm{x}^{(j)}/2]$ and a parametrized single qubit rotation:\n$R(\\theta,\\phi)=\\begin{pmatrix}\n\\cos{(\\theta/2)} & -\\mathrm{i}\\mathrm{e}^{-\\mathrm{i}\\phi}\\sin{(\\theta/2)} \\-\\mathrm{i}\\mathrm{e}^{\\mathrm{i}\\phi}\\sin{(\\theta/2)} & \\cos{(\\theta/2)}\n\\end{pmatrix}$\nQAOA circuits employ parametrized two body $\\sigma_z$ rotations, $R_\\mathrm{ZZ}(\\gamma)=\\mathrm{exp}[-i\\gamma \\sigma_\\mathrm{z}^{(i)}\\sigma_\\mathrm{z}^{(j)}]$. To circumvent a compilation overhead and optimally leverage the Ion Trap, we inject pairs of Hadamard gates $H H^{\\dagger} = 1$ for every qubit in between the two body $\\sigma_z$ rotations. This means we are able to formulate the phase separator entirely with Molmer Sorensen gates. To support this, the QAOA circuit starts in the state where all qubits are in the groundstate $\\left| 0\\right\\rangle$ instead of the superposition of all computational basis states $\\left| + \\right\\rangle$,",
"def phase_separator(\n gamma: float, qubit_register: Sequence[cirq.Qid], car_sequence: Sequence[int]\n) -> Sequence[cirq.Operation]:\n \"\"\"Yield a sequence of Molmer Sorensen gates to implement a\n phase separator over the ferromagnetic/antiferromagnetic\n interactions between adjacent cars, as defined by spin_glass\n \"\"\"\n for car_pair0, car_pair1, interaction in spin_glass(car_sequence):\n yield cirq.ms(interaction * gamma).on(\n qubit_register[car_pair0], qubit_register[car_pair1]\n )\n\n\nqubit_register = cirq.LineQubit.range(CAR_PAIR_COUNT)\ncircuit = cirq.Circuit([phase_separator(0.1, qubit_register, car_sequence)])",
"Because we replaced the two body $\\sigma_z$ rotations with Molmer Sorensen gates we also have to adjust the mixer slightly to account for the injected Hadamard gates.",
"def mixer(beta: float, qubit_register: Sequence[cirq.Qid]) -> Iterator[cirq.Operation]:\n \"\"\"Yield a QAOA mixer of RX gates, modified by adding RY gates first,\n to account for the additional Hadamard gates.\n \"\"\"\n yield cirq.ry(np.pi / 2).on_each(qubit_register)\n yield cirq.rx(beta - np.pi).on_each(qubit_register)",
"To find the right parameters for the QAOA circuit, we have to assess the quality of the solutions for a given set of parameters. To this end, we execute the QAOA circuit with fixed parameters 100 times and calculate the average number of color changes.",
"def average_color_changes(\n parameters: Tuple[float, float],\n qubit_register: Sequence[cirq.Qid],\n car_sequence: Sequence[int],\n) -> float:\n \"\"\"Calculate the average number of color changes over all measurements of\n the QAOA circuit, aross `repetitions` many runs, for provided parameters\n beta and gamma.\n\n Args:\n parameters: tuple of (`beta`, `gamma`), the two parameters for the QAOA circuit\n qubit_register: A sequence of qubits for the circuit to use.\n car_sequence: A sequence that determines which cars are paired together.\n\n Returns:\n A float average number of color changes over all measurements.\n \"\"\"\n beta, gamma = parameters\n repetitions = 100\n circuit = cirq.Circuit()\n circuit.append(phase_separator(gamma, qubit_register, car_sequence))\n circuit.append(mixer(beta, qubit_register))\n circuit.append(cirq.measure(*qubit_register, key=\"z\"))\n results = service.run(circuit, repetitions=repetitions)\n avg_cc = 0\n for paint_bitstring in results.measurements[\"z\"]:\n avg_cc += color_changes(paint_bitstring, car_sequence) / repetitions\n return avg_cc",
"We optimize the average number of color changes by adjusting the parameters with scipy.optimzes function minimize. The results of these optimsation runs strongly depend on the random starting values we choose for the parameters, which is why we restart the optimization procedure for different starting parameters 10 times and take the best performing optimized parameters.",
"from scipy.optimize import minimize\n\nservice = cirq.Simulator()\nbeta, gamma = np.random.rand(2)\naverage_cc = average_color_changes([beta, gamma], qubit_register, car_sequence)\noptimization_function = lambda x: average_color_changes(x, qubit_register, car_sequence)\nfor _ in range(10):\n initial_guess = np.random.rand(2)\n optimization_result = minimize(\n optimization_function, initial_guess, method=\"SLSQP\", options={\"eps\": 0.1}\n )\n average_cc_temp = average_color_changes(\n optimization_result.x, qubit_register, car_sequence\n )\n if average_cc > average_cc_temp:\n beta, gamma = optimization_result.x\n average_cc = average_cc_temp\naverage_cc",
"Note here that the structure of the problem graphs of the binary paintshop problem allow for an alternative technique to come up with good parameters independent of the specifics of the respective instance of the problem: Training the quantum approximate optimization algorithm without access to a quantum processing unit\nOnce the parameters are optimised, we execute the optimised QAOA circuit 100 times and output the solution with the least color changes.\nPlease replace <your key> with your IonQ API key and <remote host> with the API endpoint.",
"repetitions = 100\ncircuit = cirq.Circuit()\ncircuit.append(phase_separator(gamma, qubit_register, car_sequence))\ncircuit.append(mixer(beta, qubit_register))\ncircuit.append(cirq.measure(*qubit_register, key=\"z\"))\nservice = ionq.Service(\n remote_host=\"<remote host>\", api_key=\"<your key>\", default_target=\"qpu\"\n)\nresults = service.run(circuit, repetitions=repetitions)\nbest_result = CAR_PAIR_COUNT\nfor paint_bitstring in results.measurements[\"z\"]:\n result = color_changes(paint_bitstring, car_sequence)\n if result < best_result:\n best_result = result\n best_paint_bitstring = paint_bitstring\nprint(f\"The minimal number of color changes found by level-1 QAOA is: {best_result}\")\nprint(\n f\"The car pairs have to be painted according to {best_paint_bitstring}, with index i representing the paint of the first car of pair i.\"\n)\nprint(f\" The other car in pair i is painted the second color.\")",
"Note here, that in a future production environment the optimization and execution phase of the QAOA should be merged, i.e. we output in the end the best performing sample gathered during the training phase of the QAOA circuit. For educational purposes, we separated here the training and the evaluation phase of the QAOA."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/docs-l10n | site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb | apache-2.0 | [
"Copyright 2019 The TensorFlow Hub Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");",
"# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, \n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================",
"使用 TF-Hub 对孟加拉语文章进行分类\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://tensorflow.google.cn/hub/tutorials/bangla_article_classifier\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\">在 TensorFlow.org 上查看 </a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\">在 Google Colab 中运行 </a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/bangla_article_classifier.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\">在 GitHub 中查看源代码</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/bangla_article_classifier.ipynb\">{img1下载笔记本</a></td>\n</table>\n\n小心:除了使用 pip 安装 Python 软件包外,此笔记本还使用 sudo apt install 安装系统软件包:unzip。\n此 Colab 演示了如何使用 Tensorflow Hub 对非英语/本地语言进行文本分类。在这里,我们选择孟加拉语作为本地语言并使用预训练的单词嵌入向量解决多类分类任务,在这个任务中我们将孟加拉语的新闻文章分为 5 类。针对孟加拉语进行预训练的嵌入向量来自 FastText,这是一个由 Facebook 创建的库,其中包含 157 种语言的预训练单词向量。\n我们将使用 TF-Hub 的预训练嵌入向量导出程序先将单词嵌入向量转换为文本嵌入向量模块,然后使用该模块通过 tf.keras(Tensorflow 的高级用户友好 API)训练分类器来构建深度学习模型。即使我们在这里使用 fastText 嵌入向量,您也可以导出任何通过其他任务预训练的其他嵌入向量,并使用 Tensorflow Hub 快速获得结果。 \n设置",
"%%bash\n# https://github.com/pypa/setuptools/issues/1694#issuecomment-466010982\npip install gdown --no-use-pep517\n\n%%bash\nsudo apt-get install -y unzip\n\nimport os\n\nimport tensorflow as tf\nimport tensorflow_hub as hub\n\nimport gdown\nimport numpy as np\nfrom sklearn.metrics import classification_report\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"数据集\n我们将使用 BARD(孟加拉语文章数据集),内含从不同孟加拉语新闻门户收集的约 3,76,226 篇文章,并标记为 5 个类别:经济、国内、国际、体育和娱乐。我们从 Google 云端硬盘下载这个文件,此 (bit.ly/BARD_DATASET) 链接指向此 GitHub 仓库。",
"gdown.download(\n url='https://drive.google.com/uc?id=1Ag0jd21oRwJhVFIBohmX_ogeojVtapLy',\n output='bard.zip',\n quiet=True\n)\n\n%%bash\nunzip -qo bard.zip",
"将预训练的单词向量导出到 TF-Hub 模块\nTF-Hub 提供了一些方便的脚本将单词嵌入向量转换为 TF-Hub 文本嵌入向量模块,详见这里。要使模块适用于孟加拉语或其他语言,我们只需将单词嵌入向量 .txt 或 .vec 文件下载到与 export_v2.py 相同的目录中,然后运行脚本。\n导出程序会读取嵌入向量,并将其导出到 Tensorflow SavedModel。SavedModel 包含完整的 TensorFlow 程序,其中包括权重和计算图。TF-Hub 可以将 SavedModel 作为模块进行加载,我们将用它来构建文本分类模型。由于我们使用 tf.keras 来构建模型,因此我们将使用 hub.KerasLayer,它为 Hub 模块提供用作 Keras 层的封装容器。\n首先,我们从 fastText 获得单词嵌入向量,并从 TF-Hub 仓库获得嵌入向量导出程序。",
"%%bash\ncurl -O https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.bn.300.vec.gz\ncurl -O https://raw.githubusercontent.com/tensorflow/hub/master/examples/text_embeddings_v2/export_v2.py\ngunzip -qf cc.bn.300.vec.gz --k",
"然后,我们在嵌入向量文件上运行导出程序脚本。由于 fastText 嵌入向量具有标题行并且相当大(转换为模块后,孟加拉语大约为 3.3 GB),因此我们忽略第一行,仅将前 100, 000 个词例导入文本嵌入向量模块。",
"%%bash\npython export_v2.py --embedding_file=cc.bn.300.vec --export_path=text_module --num_lines_to_ignore=1 --num_lines_to_use=100000\n\nmodule_path = \"text_module\"\nembedding_layer = hub.KerasLayer(module_path, trainable=False)",
"文本嵌入向量模块以一维字符串张量中的句子批次作为输入,并输出与句子相对应的形状 (batch_size, embedding_dim) 的嵌入向量。它通过按空格拆分来对输入进行预处理。我们使用 sqrtn 组合程序(请参阅此处)将单词嵌入向量组合到句子嵌入向量。为了演示,我们传递一个孟加拉语单词的列表作为输入,并获得相应的嵌入向量。",
"embedding_layer(['বাস', 'বসবাস', 'ট্রেন', 'যাত্রী', 'ট্রাক']) ",
"转换为 TensorFlow 数据集\n由于数据集确实很大,因此我们使用生成器通过 Tensorflow 数据集的功能在运行时批量生成样本,而不是将整个数据集加载到内存中。同时,数据集还非常不平衡,因此在使用生成器之前,我们将打乱数据集的顺序。",
"dir_names = ['economy', 'sports', 'entertainment', 'state', 'international']\n\nfile_paths = []\nlabels = []\nfor i, dir in enumerate(dir_names):\n file_names = [\"/\".join([dir, name]) for name in os.listdir(dir)]\n file_paths += file_names\n labels += [i] * len(os.listdir(dir))\n \nnp.random.seed(42)\npermutation = np.random.permutation(len(file_paths))\n\nfile_paths = np.array(file_paths)[permutation]\nlabels = np.array(labels)[permutation]",
"打乱顺序后,我们可以查看标签在训练和验证样本中的分布。",
"train_frac = 0.8\ntrain_size = int(len(file_paths) * train_frac)\n\n# plot training vs validation distribution\nplt.subplot(1, 2, 1)\nplt.hist(labels[0:train_size])\nplt.title(\"Train labels\")\nplt.subplot(1, 2, 2)\nplt.hist(labels[train_size:])\nplt.title(\"Validation labels\")\nplt.tight_layout()",
"要使用生成器创建数据集,我们首先编写一个生成器函数,该函数从 file_paths 读取文章,从标签数组中读取标签,并在每个步骤生成一个训练样本。我们将此生成器函数传递到 tf.data.Dataset.from_generator 方法,并指定输出类型。每个训练样本都是一个元组,其中包含 tf.string 数据类型的文章和独热编码标签。我们使用 skip 和 take 方法以 80-20 的比例将数据集拆分为训练集和验证集。",
"def load_file(path, label):\n return tf.io.read_file(path), label\n\ndef make_datasets(train_size):\n batch_size = 256\n\n train_files = file_paths[:train_size]\n train_labels = labels[:train_size]\n train_ds = tf.data.Dataset.from_tensor_slices((train_files, train_labels))\n train_ds = train_ds.map(load_file).shuffle(5000)\n train_ds = train_ds.batch(batch_size).prefetch(tf.data.experimental.AUTOTUNE)\n\n test_files = file_paths[train_size:]\n test_labels = labels[train_size:]\n test_ds = tf.data.Dataset.from_tensor_slices((test_files, test_labels))\n test_ds = test_ds.map(load_file)\n test_ds = test_ds.batch(batch_size).prefetch(tf.data.experimental.AUTOTUNE)\n\n\n return train_ds, test_ds\n\ntrain_data, validation_data = make_datasets(train_size)",
"模型训练和评估\n由于我们已经在模块周围添加了封装容器,使其可以像 Keras 中的任何其他层一样使用,因此我们可以创建一个小的序贯模型,此模型是层的线性堆叠。我们可以像使用任何其他层一样,使用 model.add 添加文本嵌入向量模块。我们通过指定损失和优化器来编译模型,并对其进行 10 个周期的训练。tf.keras API 可以将 TensorFlow 数据集作为输入进行处理,因此我们可以将数据实例传递给用于模型训练的拟合方法。由于我们使用的是生成器函数,tf.data 将负责生成样本、对其进行批处理,并将其馈送给模型。\n模型",
"def create_model():\n model = tf.keras.Sequential([\n tf.keras.layers.Input(shape=[], dtype=tf.string),\n embedding_layer,\n tf.keras.layers.Dense(64, activation=\"relu\"),\n tf.keras.layers.Dense(16, activation=\"relu\"),\n tf.keras.layers.Dense(5),\n ])\n model.compile(loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),\n optimizer=\"adam\", metrics=['accuracy'])\n return model\n\nmodel = create_model()\n# Create earlystopping callback\nearly_stopping_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=3)",
"训练",
"history = model.fit(train_data, \n validation_data=validation_data, \n epochs=5, \n callbacks=[early_stopping_callback])",
"评估\n我们可以使用由 fit 方法返回的 history 对象(包含每个周期的损失和准确率值)来可视化训练和验证数据的准确率和损失曲线。",
"# Plot training & validation accuracy values\nplt.plot(history.history['accuracy'])\nplt.plot(history.history['val_accuracy'])\nplt.title('Model accuracy')\nplt.ylabel('Accuracy')\nplt.xlabel('Epoch')\nplt.legend(['Train', 'Test'], loc='upper left')\nplt.show()\n\n# Plot training & validation loss values\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('Model loss')\nplt.ylabel('Loss')\nplt.xlabel('Epoch')\nplt.legend(['Train', 'Test'], loc='upper left')\nplt.show()",
"预测\n我们可以获得验证数据的预测并检查混淆矩阵,以查看模型在 5 个类中的性能。predict 方法返回每个类的概率的 N 维数组后,我们使用 np.argmax 将其转换为类标签。",
"y_pred = model.predict(validation_data)\n\ny_pred = np.argmax(y_pred, axis=1)\n\nsamples = file_paths[0:3]\nfor i, sample in enumerate(samples):\n f = open(sample)\n text = f.read()\n print(text[0:100])\n print(\"True Class: \", sample.split(\"/\")[0])\n print(\"Predicted Class: \", dir_names[y_pred[i]])\n f.close() ",
"比较性能\n现在,我们可以从 labels 获得验证数据的正确标签,并与我们的预测进行比较,以获得 classification_report。",
"y_true = np.array(labels[train_size:])\n\nprint(classification_report(y_true, y_pred, target_names=dir_names))",
"我们还可以将模型的性能与原始论文中报告的精度为 0.96 的发布结果进行比较。原作者描述了在数据集上完成的许多预处理步骤,例如删除标点和数字、去除前 25 个最常见的停用词等。正如我们在 classification_report 中所见,在仅训练了 5 个周期而没有进行任何预处理的情况下,我们也获得了 0.96 的精度和准确率!\n在此示例中,当我们从嵌入向量模块创建 Keras 层时,我们设置了 trainable=False,这意味着训练期间不会更新嵌入向量权重。请尝试将此设置为 True,使用此数据集仅用 2 个周期即可达到 97% 的准确率。"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
usantamaria/iwi131 | ipynb/23-ProcesamientoDeTexto/Texto.ipynb | cc0-1.0 | [
"\"\"\"\nIPython Notebook v4.0 para python 2.7\nLibrerías adicionales: Ninguna.\nContenido bajo licencia CC-BY 4.0. Código bajo licencia MIT. (c) Sebastian Flores.\n\"\"\"\n\n# Configuracion para recargar módulos y librerías \n%reload_ext autoreload\n%autoreload 2\n\nfrom IPython.core.display import HTML\n\nHTML(open(\"style/iwi131.css\", \"r\").read())",
"<header class=\"w3-container w3-teal\">\n<img src=\"images/utfsm.png\" alt=\"\" align=\"left\"/>\n<img src=\"images/inf.png\" alt=\"\" align=\"right\"/>\n</header>\n<br/><br/><br/><br/><br/>\nIWI131\nProgramación de Computadores\nSebastián Flores\nhttp://progra.usm.cl/ \nhttps://www.github.com/usantamaria/iwi131\nFechas\n\nActividad 05: Miércoles 6 Enero 2016 (8:00).\nCertamen 3: Viernes 8 Enero 2016 (15:30).\nCertamen Recuperativo: Lunes 18 Enero 2016 (8:00).\n\nClases\n\nMie 23 Dic 2016: Procesamiento de Texto.\nLun 28 Dic 2016: Escribir y leer archivos.\nMie 30 Dic 2016: Ejercicios tipo certamen.\nLun 04 Ene 2016: Ejercicios tipo certamen.\nMie 06 Ene 2016: Actividad 5.\n\nConsejo: Baje el libro del curso, lea, aprenda y practique.\n¿Qué contenido aprenderemos?\n\nProcesamiento de texto\n\n¿Porqué aprenderemos ese contenido?\n\nProcesamiento de texto\n\nHabilidad crucial para resolver una gran variedad de problemas.\nMotivación\nQueremos conocer cuales son las palabras más comunes en un idioma. Para eso, necesitamos saber cuantas veces aparece cada palabra en una frase. Desarrolle una función contar_palabras que al ser aplicada sobre un string, entregue un diccionario con las palabras y la cantidad de veces que aparece en la frase. Omita espacios y signos de puntuación y exclamación.\nt = 'El sobre, en el aula, esta sobre el pupitre.'\n\ncontar_palabras(t)\n{'el': 3, 'en': 1, 'esta': 1, 'aula': 1, \n'sobre': 2, 'pupitre': 1}\n\n¿Cómo realizaría usted esta difícil tarea?\nConsejos\nEl procesamiento de texto utiliza:\n\nReconocimiento de patrones: usted debe reconocer que patrones se repiten y puede explotar para procesar el texto.\nUtilización de funciones específicas: el tipo de dato string posee una rica colección de métodos que debe manejar para simplificar la tarea de procesamiento de texto.\nRecuerde que todo string es inmutable, por lo que al aplicar diversas funciones se obtiene siempre un nuevo string.\n\nProcesamiento de texto\nSalto de línea\nEl string \\n corresponde a un único carácter, que representa el salto de línea.",
"print len(\"\\n\")\n\na1 = 'casa\\narbol\\npatio'\nprint a1\nprint len(a1)\n\na2 = '''casa\narbol\npatio'''\nprint a2\nprint len(a2)\n\nprint a1==a2\n\nb = 'a\\nb\\nc'\nprint b\nprint len(b)",
"Procesamiento de texto\nTabulación\nEl string \\t corresponde a un único carácter, que representa una tabulación.",
"print len(\"\\t\")\n\na = 'casa\\n\\tarbol\\n\\tpatio'\nprint a\n\nb = 'a\\tb\\tc'\nprint b\nprint len(b)",
"Procesamiento de texto\nImportante: \\n y \\t aparecen frecuentemente cuando analicemos archivos leídos del disco duro.\nProcesamiento de texto\nReemplazar secciones de un string\n\nLa función mi_string.replace(s1, s2) busca cada ocurrencia del substring s1 en mi_string, y lo reemplaza por s2. \nLa función mi_string.replace(s1, s2,n) busca las primeras n ocurrencias del substring s1 en mi_string, y lo reemplaza por s2. \nLa función mi_string.replace(s1, s2) regresa un nuevo string, el string original no es modificado.",
"palabra = 'cara'\npalabra2 = palabra.replace('r', 's')\nprint palabra\nprint palabra2\nprint palabra2.replace('ca', 'pa')\nprint palabra2.replace('a', 'e', 1)\nprint palabra2.replace('c', '').replace('a', 'o') # Encadenamiento de metodos\nprint palabra",
"Procesamiento de texto\nSeparar un string\nPara separar un string tenemos 2 opciones:\n* Separar en caracteres, utilizando list(mi_string), que genera una lista con los carácteres de mi_string en orden.\n* Separar en palabras, utilizando mi_string.split(s), que generar una lista de \"palabras\" que han sido separadas por el string s. El string s no estará en ninguno de los substrings de la lista. Por defecto, s es el caracter espacio \" \".",
"oracion = 'taca taca'\nprint list(oracion)\nprint set(oracion)\nprint oracion.split()\nprint oracion.split(\"a\")\nprint oracion.split(\"t\")\nprint oracion.split(\"ac\")",
"Procesamiento de texto\nUnir una lista de strings\nPara unir una lista de strings es necesario utilizar el método join:\nPython\n s.join(lista_de_strings)\nRegresa un único string donde los elementos del string han sido \"pegados\" utilizando el string s.",
"mi_lista = ['Ex', 'umbra', 'in', 'solem']\nprint ' '.join(mi_lista)\nprint ''.join(mi_lista)\nprint ' -> '.join(mi_lista)\n\nmi_conjunto = {'Ex', 'umbra', 'in', 'solem'}\nprint mi_conjunto\nprint ' '.join(mi_conjunto)\nprint ''.join(mi_conjunto)\nprint ' -> '.join(mi_conjunto)",
"Unir una lista de strings\nObservación: join funciona sólo sobre una lista de strings. Si quiere pegar números, debe convertirlos a strings antes.",
"lista_de_strings = [\"1\", \"2\", \"3\"]\nprint \", \".join(lista_de_strings)\n\nlista_de_ints = [1, 2, 3]\nprint \", \".join(lista_de_ints)\n\nlista_de_ints = range(10)\nlista_de_strings = []\nfor x in lista_de_ints:\n lista_de_strings.append(str(x))\nprint \", \".join(lista_de_strings)",
"Procesamiento de texto\nUnir una secuencia de valores (no strings) v2\nTambién es posible utilizar map que aplica genera una nueva lista aplicando a cada elemento de la lista original la función pasada como argumento.",
"numeros = range(10)\nprint numeros\ndef f(x):\n return 2.*x + 1./(x+1)\n\nprint map(str, numeros)\nprint map(float, numeros)\nprint map(f, numeros)\n\nprint ', '.join(map(str, numeros))\n\n# \nprint \"-\"join(\"1,2,3,4\".split(\",\"))",
"Procesamiento de texto\nInterpolación de valores por posición",
"s = 'Soy {0} y vivo en {1} {2}'\nprint s.format('Perico', 'Valparaiso')\nprint s.format('Erika', 'Berlin')\nprint s.format('Wang Dawei', 'Beijing')",
"Procesamiento de texto\nInterpolación de valores por nombre",
"s = '{nombre} estudia en la {u}'\n# Datos pueden pasarse ordenados\nprint s.format(nombre='Perico', u='UTFSM')\nprint s.format(nombre='Fulana', u='PUCV')\n# También es posible cambiar el orden\nprint s.format(u='UPLA', nombre='Yayita')\n# O con magia (conocimiento avanzado)\nd = {\"nombre\":\"Mago Merlin\", \"u\":\"Camelot University\"}\nprint s.format(**d)",
"Procesamiento de texto\nMayusculas y Minúsculas\nPara cambiar la capitalización de un string, es posible utilizar los siguientes métodos:\n\n.upper(): TODO EN MAYUSCULA.\n.lower(): todo en minuscula\n.swapcase(): cambia el order que tenia la capitalización.\n.capitalize(): Coloca únicamente mayuscula en la primera letra del string.",
"palabra = '1. raMo de ProGra'\nprint palabra.upper()\nprint palabra.lower()\nprint palabra.swapcase()\nprint palabra.capitalize()",
"Procesamiento de texto\nEjemplo de Motivación\nQueremos conocer cuales son las palabras más comunes en un idioma. Para eso, necesitamos saber cuantas veces aparece cada palabra en una frase. Desarrolle una función contar_palabras que al ser aplicada sobre un string, entregue un diccionario con las palabras y la cantidad de veces que aparece en la frase. Omita espacios y signos de puntuación y exclamación.\nt = 'El sobre, en el aula, esta sobre el pupitre.'\n\ncontar_palabras(t)\n{'el': 3, 'en': 1, 'esta': 1, 'aula': 1, 'sobre': 2, 'pupitre': 1}\n\n¿Cómo realizaría ahora usted esta difícil tarea?\nProcesamiento de texto\nConsejos\nSubdividir en tareas menores:\n* ¿Cómo sacar los simbolos indeseados?\n* ¿Cómo separar las palabras?\n* ¿Cómo contar las palabras?",
"def contar_palabras(s):\n return s\n \nt = 'El sobre, en el aula, esta sobre el pupitre.'\ncontar_palabras(t)",
"Procesamiento de texto\nMotivación: Solución\nINPUT:\nt = 'El sobre, en el aula, esta sobre el pupitre.'\ncontar_palabras(t)\n\nOUTPUT: \n{'el': 3, 'en': 1, 'esta': 1, 'aula': 1, \n'sobre': 2, 'pupitre': 1}",
"def contar_palabras(s):\n s = s.lower()\n for signo in [\",\",\".\",\";\",\"!\",\"?\",\"'\",'\"']:\n s = s.replace(signo,\"\")\n palabras = s.split()\n contador = {}\n for palabra_sucia in palabras:\n palabra = palabra_sucia\n if palabra in contador:\n contador[palabra] += 1 # Aumentamos\n else:\n contador[palabra] = 1 # Inicializamos\n return contador\n \nt = 'El sobre, en el aula, !! Esta sobre el pupitre.'\ncontar_palabras(t)",
"Procesamiento de texto\nEjercicio 2\nEscriba un programa que tenga el siguiente comportamiento:\nINPUT:\nNumero de alumnos: 3\nNombre alumno 1: Isaac Newton\nIngrese las notas de Isaac: 98 94 77\nNombre alumno 2: Nikola Tesla\nIngrese las notas de Nikola: 100 68 94 88\nNombre alumno 3: Albert Einstein\nIngrese las notas de Albert: 83 85\n\nOUTPUT:\nEl promedio de Isaac es 89.67\nEl promedio de Nikola es 87.50\nEl promedio de Albert es 84.00\n\nProcesamiento de texto\nEjercicio 2: Análisis\n¿Cuáles son las tareas necesarias?\nProcesamiento de texto\nEjercicio 1: Solución\nLas tareas a realizar son:\n* Leer número de alumnos\n* Para cada alumno, leer nombre y notas.\n* Procesar notas para obtener el promedio.\n* Almacenar nombre y notas.\n* Separar nombre de apellido.\n* Imprimir resultados apropiadamente.",
"# Solución Alumnos\n\n\n# Solución\n# Guardar datos\nN = int(raw_input(\"Numero de alumnos: \"))\nnotas_alumnos = []\nfor i in range(N):\n nombre = raw_input(\"Nombre alumno {0}:\".format(i+1))\n nombre_pila = nombre.split(\" \")[0]\n notas_str = raw_input(\"Ingrese las notas de {0}: \".format(nombre_pila))\n notas_int = []\n for nota in notas_str.split(\" \"):\n notas_int.append(int(nota))\n promedio = sum(notas_int)/float(len(notas_int))\n notas_alumnos.append( (nombre_pila, promedio) )\n\n# Imprimir promedios\nfor nombre, promedio in notas_alumnos:\n print \"El promedio de {0} es {1:.2f}\".format(nombre, promedio)",
"Procesamiento de texto\nProcesamiento de ADN\nUna cadena de ADN es una secuencia de bases nitrogenadas llamadas adenina, citosina, timina y guanina.\nEn un programa, una cadena se representa como un string de caracteres 'a', 'c', 't' y 'g'.\nA cada cadena, le corresponde una cadena complementaria, que se obtiene intercambiando las adeninas con las timinas, y las citosinas con las guaninas:\ncadena = 'cagcccatgaggcagggtg'\ncomplemento = 'gtcgggtactccgtcccac'\n\nProcesamiento de ADN\n1.1 Procesamiento de ADN: Secuencia aleatoria\nEscriba la función cadena_al_azar(n) que genere una cadena aleatoria de ADN de largo n:\nEjemplo de uso:\ncadena_al_azar(10) \npuede regresar 'acgtccgcct', 'tgttcgcatt', etc.\n\nPista:\nfrom random import choice\n\nchoice('atcg') regresa al azar una de las letras de \"atcg\"\nProcesamiento de ADN\n1.1 Secuencia aleatoria: Análisis\n¿Que tareas son necesarias?",
"# Definicion de funcion\nfrom random import choice\ndef cadena_al_azar(n):\n bases_n=''\n for i in range(n):\n base=choice('atgc')\n bases_n+=base\n return bases_n\n\n# Casos de uso\nprint cadena_al_azar(1)\nprint cadena_al_azar(1)\nprint cadena_al_azar(1)\nprint cadena_al_azar(1)\nprint cadena_al_azar(10)\nprint cadena_al_azar(10)\nprint cadena_al_azar(10)\nprint cadena_al_azar(10)",
"Procesamiento de ADN\n1.1 Solución Secuencia aleatoria v1",
"from random import choice\n\n# Definicion de funcion\ndef cadena_al_azar(n):\n adn = \"\"\n for i in range(n):\n adn += choice(\"acgt\")\n return adn\n\n# Casos de uso\nprint cadena_al_azar(1)\nprint cadena_al_azar(1)\nprint cadena_al_azar(1)\nprint cadena_al_azar(1)\nprint cadena_al_azar(10)\nprint cadena_al_azar(10)\nprint cadena_al_azar(10)\nprint cadena_al_azar(10)",
"Procesamiento de ADN\n1.1 Solución Secuencia aleatoria v2",
"from random import choice\n\n# Definicion de funcion\ndef cadena_al_azar(n):\n bases = []\n for i in range(n):\n bases.append(choice(\"acgt\"))\n adn = \"\".join(bases)\n return adn\n\n# Casos de uso\nprint cadena_al_azar(1)\nprint cadena_al_azar(1)\nprint cadena_al_azar(1)\nprint cadena_al_azar(1)\nprint cadena_al_azar(10)\nprint cadena_al_azar(10)\nprint cadena_al_azar(10)\nprint cadena_al_azar(10)",
"Procesamiento de texto\nProcesamiento de ADN: Secuencia complementaria\nEscriba la función complementaria(s) que regrese la cadena complementaria de c: el complementario de \"a\" es \"t\" (y viceversa), y el complementario de \"c\" es \"g\" (y viceversa).\nPython\n cadena = 'cagcccatgaggcagggtg'\n print complementaria(cadena)\n 'gtcgggtactccgtcccac' \nProcesamiento de texto\nProcesamiento de ADN: Secuencia complementaria\n¿Tareas?",
"# Solucion estudiantes\ndef cadena_(n):\n adn = \"\"\n for i in range(n):\n adn += choice(\"acgt\")\n return adn",
"Procesamiento de texto\nSolución Secuencia complementaria v1",
"def complementaria(adn):\n rna = \"\"\n for base in adn:\n if base==\"a\":\n rna += \"t\"\n elif base==\"t\":\n rna += \"a\"\n elif base==\"c\":\n rna += \"g\"\n else:\n rna += \"c\"\n return rna\n\nadn = cadena_al_azar(20)\nprint adn\nprint complementaria(adn)",
"Procesamiento de texto\nSolución Secuencia complementaria v2",
"def complementaria(adn):\n pares = {\"a\":\"t\", \"t\":\"a\", \"c\":\"g\", \"g\":\"c\"}\n rna = \"\"\n for base in adn:\n rna += pares[base]\n return rna\n\nadn = cadena_al_azar(20)\nprint adn\nprint complementaria(adn)",
"Procesamiento de texto\nSolución Secuencia complementaria v3",
"def complementaria(adn):\n rna = adn.replace(\"a\",\"T\").replace(\"t\",\"A\").replace(\"c\",\"G\").replace(\"g\",\"C\")\n return rna.lower()\n\nadn = cadena_al_azar(20)\nprint adn\nprint complementaria(adn)",
"Procesamiento de texto\nDigitos no presentes\nDado un string con dígitos, indique que digitos no estan presentes (en orden).\nINPUT:\n13579\n3210\n\nOUTPUT:\n02468\n456789"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kdheepak/psst | docs/notebooks/interactive_visuals/Demo.ipynb | mit | [
"How NetworkModel Works",
"import pandas as pd\nimport numpy as np\n\nfrom psst.network.graph import (\n NetworkModel, NetworkViewBase, NetworkView\n)\n\nfrom psst.case import read_matpower\ncase = read_matpower('../cases/case118.m')",
"I. Creating a NetworkModel",
"# Create the model from a PSSTCase, optionally passing a sel_bus\nm = NetworkModel(case, sel_bus='Bus1')",
"In the __init__, the NetworkModel...",
"display(m.case) # saves the case\ndisplay(m.network) # creates a PSSTNetwork\ndisplay(m.G) # stores the networkX graph (an attribute of the PSSTNetwork)\ndisplay(m.model) # builds/solves the model\n\n# Creates df of x,y positions for each node (bus, load, gen), based off self.network.positions\nm.all_pos.head(n=10)\n\n# Creates a df of start and end x,y positions for each edge, based off self.G.edges()\nm.all_edges.head(n=10)",
"The sel_bus and view_buses attributes",
"# `sel_bus` is a single bus, upon which the visualization is initially centered.\n# It can be changed programatically, or via the dropdown menu.\n\nm.sel_bus\n\n# At first, it is the only bus in view_buses.\n# More buses get added to view_buses as they are clicked.\n\nm.view_buses",
"II. Creating a NetworkView from the model",
"# Create the view from the model\n# (It can, alternatively, be created from a case.)\n\nv = NetworkView(model=m)\n\nv",
"III. Generating the x,y data for the view\n\nWhenever the view_buses list get changed, it triggers the callback _callback_view_change\nThis function first calls subset_positions and subset_edges\nThen, the subsetted DataFrames get segregated into seperate ones for bus, gen, and load\nFinally, the x,y coordinates are extracted into a format the NetworkView can use.",
"# The subsetting that occurs is all based on `view_buses`\nm.view_buses",
"The subset_positions() call",
"# Subset positions creates self.pos\nm.pos",
"The function looks like this:\npython\ndef subset_positions(self):\n \"\"\"Subset self.all_pos to include only nodes adjacent to those in view_buses list.\"\"\"\n nodes = [list(self.G.adj[item].keys()) for item in self.view_buses] # get list of nodes adj to selected buses\n nodes = set(itertools.chain.from_iterable(nodes)) # chain lists together, eliminate duplicates w/ set\n nodes.update(self.view_buses) # Add the view_buses themselves to the set\n return self.all_pos.loc[nodes] # Subset df of all positions to include only desired nodes.\nThe subset_edges() call",
"# Subset edges creates self.edges\nm.edges",
"The function looks like this:\npython\ndef subset_edges(self):\n \"\"\"Subset all_edges, with G.edges() info, based on view_buses list.\"\"\"\n edge_list = self.G.edges(nbunch=self.view_buses) # get edges of view_buses as list of tuples\n edges_fwd = self.all_edges.loc[edge_list] # query all_pos with edge_list\n edge_list_rev = [tuple(reversed(tup)) for tup in edge_list] # reverse order of each tuple\n edges_rev = self.all_edges.loc[edge_list_rev] # query all_pos again, with reversed edge_list\n edges = edges_fwd.append(edges_rev).dropna(subset=['start_x']) # combine results, dropping false hits\n return edges\n If you want a closer look...",
"m.view_buses = ['Bus2','Bus3']\n\nedge_list = m.G.edges(nbunch=m.view_buses) # get edges of view_buses as list of tuples\nedge_list\n\nedges_fwd = m.all_edges.loc[edge_list] # query all_pos with edge_list\nedges_fwd\n\nedge_list_rev = [tuple(reversed(tup)) for tup in edge_list] # reverse order of each tuple\nedge_list_rev\n\nedges_rev = m.all_edges.loc[edge_list_rev] # query all_pos again, with reversed edge_list\nedges_rev\n\nedges = edges_fwd.append(edges_rev).dropna(subset=['start_x']) # combine results, dropping false hits\nedges",
"Segregating DataFrames and extracting data\n\nThe DataFrames are segregated into bus, case, and load, using the names in case.bus, case.gen, and case.load\nx,y data is extracted, ready to be plotted by NetworkView\n\n\nExtracting bus data looks like this:\npython\nbus_pos = self.pos[self.pos.index.isin(self.case.bus_name)]\nself.bus_x_vals = bus_pos['x']\nself.bus_y_vals = bus_pos['y']\nself.bus_names = list(bus_pos.index)\n(Similar for the other nodes)",
"print(\"x_vals: \", m.bus_x_vals)\nprint(\"y_vals: \", m.bus_y_vals)\nprint(\"names: \", m.bus_names)",
"Extracting branch data looks like this:\n```python\nedges = self.edges.reset_index()\n_df = edges.loc[edges.start.isin(self.case.bus_name) & edges.end.isin(self.case.bus_name)]\nself.bus_x_edges = [tuple(edge) for edge in _df[['start_x', 'end_x']].values]\nself.bus_y_edges = [tuple(edge) for edge in _df[['start_y', 'end_y']].values]\n```\n(Similar for the other edges)",
"print(\"bus_x_edges:\")\nprint(m.bus_x_edges)\n\nprint(\"\\nbus_y_edges:\")\nprint(m.bus_y_edges)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
JAmarel/LiquidCrystals | ElectroOptics/MinimizeAttempt.ipynb | mit | [
"import numpy as np\nfrom scipy.integrate import quad, dblquad\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import minimize\n\nthetamin = 25.6*np.pi/180\nthetamax = 33.7*np.pi/180\nt = 1*10**-6 #Cell Thickness",
"Data",
"tempsC = np.array([26, 27, 29, 31, 33, 35, 37])\nvoltages = np.array([2,3,6,7,9,11,12.5,14,16,18,20,22,23.5,26,27.5,29,31,32.5,34,36])\n\nvoltages = np.array([1.826,3.5652,5.3995,7.2368,9.0761,10.8711,12.7109,14.5508,16.3461,18.1414,19.9816,21.822,23.6174,25.4577,27.253,29.0935,30.889,32.7924,34.5699,35.8716])\nmeasured_psi1 = np.array([[11.4056,20.4615,25.4056,27.9021,29.028,29.6154,30.2517,30.8392,31.1329,31.5245,31.8671,32.014,32.3077,32.5034,32.7972,32.9929,33.1399,33.3357,33.4336,33.6783]])\n\n#This Block just converts units\n\nfields = np.array([entry/t for entry in voltages])\n\nKC = 273.15\ntempsK = np.array([entry+KC for entry in tempsC]) #Celsius to Kelvin\n\n# measured_psi1 = np.array([[11,20.5,25.5,27.5,29,30,30.5,31,31.25,31.5,31.75,32,32.25,32.5,32.75,33,33.25,33.5,33.75,34]])\n# measured_psi2 = np.array([[7.6, 11.5, 22.3, 24.7, 27.8, 29.4, 30.1, 30.7, 31.2, 31.6, 31.9, 32.2, 32.4, 32.6, 32.7, 32.8, 32.9, 32.9, 33.0, 33.1]])\n# measured_psi3 = np.array([[4.7, 7.3, 15.5, 18.1, 22.7, 25.9, 27.5, 28.6, 29.6, 30.3, 30.8, 31.2, 31.5, 31.8, 32.0, 32.1, 32.3, 32.4, 32.5, 32.6]])\n# measured_psi4 = np.array([[3.5, 5.4, 11.5, 13.8, 18.1, 21.9, 24.1, 25.9, 27.5, 28.7, 29.5, 30.1,30.5, 31.0, 31.3, 31.5, 31.7, 31.9, 32.0, 32.2]])\n# measured_psi5 = np.array([[2.5, 3.7, 8.0, 9.6, 12.9, 16.3, 18.7, 20.9, 23.4, 25.3, 26.8, 27.9, 28.5, 29.4, 29.8, 30.2, 30.6, 30.8, 31.1, 31.3]])\n# measured_psi6 = np.array([[1.9, 2.9, 6.1, 7.3, 9.8, 12.6, 14.7, 16.8, 19.4, 21.7, 23.6, 25.2, 26.1, 27.4, 28.0, 28.6, 29.2, 29.5, 29.9, 30.3]])\n# measured_psi7 = np.array([[1.5, 2.3, 4.7, 5.6, 7.5, 9.6, 11.2, 12.9, 15.2, 17.5, 19.6, 21.4, 22.7, 24.4, 25.37, 26.1, 27.02, 27.5, 28.0, 28.6]])\n\n# AllPsi = np.concatenate((measured_psi1,measured_psi2,measured_psi3,measured_psi4,measured_psi5,measured_psi6,measured_psi7),axis=0)",
"Calculate the Boltzmann Factor and the Partition Function\n$$ {Boltz() \\:returns:}\\:\\: e^{\\frac{-U}{k_bT}}\\:sin\\:{\\theta}\\ $$",
"def Boltz(theta,phi,T,p0k,alpha,E):\n \"\"\"Compute the integrand for the Boltzmann factor.\n Returns\n -------\n A function of theta,phi,T,p0k,alpha,E to be used within dblquad\n \"\"\"\n return np.exp((1/T)*p0k*E*np.sin(theta)*np.cos(phi)*(1+alpha*E*np.cos(phi)))*np.sin(theta)",
"Calculate the Tilt Angle $\\psi$\n$$ numerator() \\:returns: {sin\\:{2\\theta}\\:cos\\:{\\phi}}\\:e^{\\frac{-U}{k_bT}}\\:sin\\:{\\theta} $$",
"def numerator(theta,phi,T,p0k,alpha,E):\n boltz = Boltz(theta,phi,T,p0k,alpha,E)\n return np.sin(2*theta)*np.cos(phi)*boltz",
"$$ denominator()\\: returns: {({cos}^2{\\theta} - {sin}^2{\\theta}\\:{cos}^2{\\phi}})\\:e^{\\frac{-U}{k_bT}}\\:sin\\:{\\theta} $$",
"def denominator(theta,phi,T,p0k,alpha,E):\n boltz = Boltz(theta,phi,T,p0k,alpha,E)\n return ((np.cos(theta)**2) - ((np.sin(theta)**2) * (np.cos(phi)**2)))*boltz",
"$$ tan(2\\psi) = \\frac{\\int_{\\theta_{min}}^{\\theta_{max}} \\int_0^{2\\pi} {sin\\:{2\\theta}\\:cos\\:{\\phi}}\\:e^{\\frac{-U}{k_bT}}\\:sin\\:{\\theta}\\: d\\theta d\\phi}{\\int_{\\theta_{min}}^{\\theta_{max}} \\int_0^{2\\pi} ({{cos}^2{\\theta} - {sin}^2{\\theta}\\:{cos}^2{\\phi}})\\:e^{\\frac{-U}{k_bT}}\\:sin\\:{\\theta}\\: d\\theta d\\phi} $$",
"def compute_psi(T,p0k,alpha,E,thetamin,thetamax):\n \"\"\"Computes the tilt angle(psi) by use of our tan(2psi) equation\n Returns\n -------\n Float:\n The statistical tilt angle with conditions T,p0k,alpha,E\n \"\"\"\n \n avg_numerator, avg_numerator_error = dblquad(numerator, 0, 2*np.pi, lambda theta: thetamin, lambda theta: thetamax,args=(T,p0k,alpha,E))\n \n avg_denominator, avg_denominator_error = dblquad(denominator, 0, 2*np.pi, lambda theta: thetamin, lambda theta: thetamax,args=(T,p0k,alpha,E))\n \n psi = (1/2)*np.arctan(avg_numerator / (avg_denominator)) * (180 /(np.pi)) #Converting to degrees from radians and divide by two\n \n return psi",
"Least Square Fitting $\\alpha$ and $\\rho_0$",
"def compute_error(xo,fields,T,thetamin,thetamax,measured_psi):\n \"\"\"Computes the squared error for a pair of parameters by comparing it to all measured tilt angles\n at one temperature.\n This will be used with the minimization function, xo is a point that the minimization checks.\n \n Parameters/Conditions\n ----------\n x0: \n An array of the form [alpha^13,p0^33].\n \n Returns\n -------\n Float: Error\n \"\"\"\n \n alpha = xo[0]/(1e10)\n p0 = xo[1]/(1e30)\n \n p0k = p0/1.3806488e-23\n \n computed_psi = np.array([compute_psi(T,p0k,alpha,E,thetamin,thetamax) for E in fields])\n \n Err = computed_psi - measured_psi\n ErrSqr = np.array([i**2 for i in Err]) \n return np.sum(ErrSqr)*1e8 #Scaling the Squared Error up here seems to help with minimization precision.",
"It might be better to use the minimization function individually for each temperature range. The minimization function returns a minimization object, which gives extra information about the results. The two important entries are fun and x. \nfun is the scalar value of the function that is being minimized. In our case fun is the squared error. \nx is the solution array of the form [alpha^10,p0^30]\nThe reason it might be better to just minimze the squared error function, instead of using the minimize_func that I wrote below is because the minimize function is very picky about the initial guess. Also the minimization function tends to stop when the result of the function is one the order of 10^-3.\nFinal Result for $\\alpha$ and $\\rho_0$\nRight now everything below this might not work as well as manually guessing and checking. The idea for this section was to automate that process and just return our entire solution arrays at the end of the notebook.",
"def minimize_func(guess,fields,T,thetamin,thetamax,measured_psi,bnds):\n \"\"\"A utility function that is will help me construct alpha and p0 arrays later.\n Uses the imported minimize function and compute_error to best fit our parameters\n at a temperature.\n \n Parameters/Conditions\n ----------\n guess: \n The initial guess for minimize().\n \n Returns\n -------\n Array: [alpha,p0]\n \"\"\"\n \n results = minimize(compute_error,guess,args=(fields,T,thetamin,thetamax,measured_psi),method = 'SLSQP',bounds = bnds)\n xres = np.array(dict(results.items())['x']) \n \n \"\"\"Minimize returns a special minimization object. That is similar to a dictionary but not quite.\n xres is grabbing just the x result of the minimization object, which is the [alpha,p0] array that\n we care about\"\"\"\n \n alpha_results = xres[0]\n p0_results = xres[1]\n \n return np.array([alpha_results,p0_results])\n \n\nguess = (2575,2168)\nbnds = ((1000,2600),(200,2400))\n\nresults = minimize(compute_error,guess,args=(fields,tempsK[0],thetamin,thetamax,measured_psi1),method = 'TNC',bounds = bnds)\nresults\n\nres = np.array(dict(results.items())['x'])\nalpha = res[0]\np0 = res[1]\nalpha = alpha*1e-4\np0 = p0/3.33564\nprint(\"alpha micro: \" + str(alpha))\nprint('p0 debye: ' + str(p0))\n\n#Minimization claims that it did not succeed. But the results were pretty good. I think it believes that it did not succeed because I have the squared error scaled up very high.\n\ndef solution(initial_guess,fields,tempsK,thetamin,thetamax,AllPsi,initial_bnds):\n \n \"\"\"Constructs Alpha and p0 arrays where each entry is the value of alpha,p0 at the corresponding temperature in\n tempsK. Initial guess and initial bounds are changed each iteration of the loop to the previous values of alpha and p0.\n Alpha and p0 decrease so this helps to cut down on the range.\n \n Parameters/Conditions\n ----------\n initial_guess: \n The initial guess for minimize().\n initial_bnds:\n The initial bounds for minimize().\n \n \n Returns\n -------\n Array,Array: Alpha Array in micro meters, p0 Array in debye\n \"\"\"\n \n alpha = np.array([])\n p0 = np.array([])\n \n guess = initial_guess\n bnds = initial_bnds\n \n for i in range(len(tempsK)):\n res = minimize_func(guess,fields,tempsK[i],thetamin,thetamax,AllPsi[i],bnds)\n \n alpha = np.append(alpha,res[0])\n p0 = np.append(p0,res[1])\n \n guess = (res[0]-10,res[1]-10)\n bnds = ((initial_bnds[0][0],res[0]),(initial_bnds[1][0],res[1]))\n \n alpha = alpha*1e-4\n \n p0 = p0/(3.33564)\n \n return alpha,p0\n\ninitial_guess = (2575,2168)\ninitial_bnds = ((1000,2600),(200,2300))\n\nalpha_micro,p0Debye = solution(initial_guess,fields,tempsK,thetamin,thetamax,AllPsi,initial_bnds)"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
bgruening/EDeN | examples/Sequence_example.ipynb | gpl-3.0 | [
"Example\nConsider sequences that are increasingly different. EDeN allows to turn them into vectors, whose similarity is decreasing.",
"%matplotlib inline",
"Build an artificial dataset: starting from the string 'abcdefghijklmnopqrstuvwxyz', generate iteratively strings by swapping two characters at random. In this way instances are progressively more dissimilar",
"import random\n\ndef make_data(size):\n text = ''.join([str(unichr(97+i)) for i in range(26)])\n seqs = []\n\n def swap_two_characters(seq):\n '''define a function that swaps two characters at random positions in a string '''\n line = list(seq)\n id_i = random.randint(0,len(line)-1)\n id_j = random.randint(0,len(line)-1)\n line[id_i], line[id_j] = line[id_j], line[id_i]\n return ''.join(line)\n\n for i in range(size):\n text = swap_two_characters( text )\n seqs.append( text )\n print text\n \n return seqs\n\nseqs = make_data(25)",
"define a function that builds a graph from a string, i.e. the path graph with the characters as node labels",
"import networkx as nx\n\ndef sequence_to_graph(seq):\n '''convert a sequence into a EDeN 'compatible' graph\n i.e. a graph with the attribute 'label' for every node and edge'''\n G = nx.Graph()\n for id,character in enumerate(seq):\n G.add_node(id, label = character )\n if id > 0:\n G.add_edge(id-1, id, label = '-')\n return G",
"make a generator that yields graphs: generators are 'good' as they allow functional composition",
"def pre_process(iterable):\n for seq in iterable:\n yield sequence_to_graph(seq)",
"initialize the vectorizer object with the desired 'resolution'",
"%%time\nfrom eden.graph import Vectorizer\nvectorizer = Vectorizer( complexity = 4 )",
"obtain an iterator over the sequences processed into graphs",
"%%time\ngraphs = pre_process( seqs )",
"compute the vector encoding of each instance in a sparse data matrix",
"%%time\nX = vectorizer.transform( graphs )\nprint 'Instances: %d ; Features: %d with an avg of %d features per instance' % (X.shape[0], X.shape[1], X.getnnz()/X.shape[0])",
"compute the pairwise similarity as the dot product between the vector representations of each sequence",
"from sklearn import metrics\n\nK=metrics.pairwise.pairwise_kernels(X, metric='linear')\nprint K",
"visualize it as a picture is worth thousand words...",
"import pylab as plt\nplt.figure( figsize=(8,8) )\nimg = plt.imshow( K, interpolation='none', cmap=plt.get_cmap( 'YlOrRd' ) )\nplt.show()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
barjacks/pythonrecherche | Kursteilnehmer/Sven Millischer/06 /01 Rückblick For-Loop-Übungen.ipynb | mit | [
"10 For-Loop-Rückblick-Übungen\nIn den Teilen der folgenden Übungen habe ich den Code mit \"XXX\" ausgewechselt. Es gilt in allen Übungen, den korrekten Code auszuführen und die Zelle dann auszuführen. \n1.Drucke alle diese Prim-Zahlen aus:",
"primes = [2, 3, 5, 7]\nfor prime in primes:\n print(prime)",
"2.Drucke alle die Zahlen von 0 bis 4 aus:",
"for x in range(5):\n print(x)",
"3.Drucke die Zahlen 3,4,5 aus:",
"for x in range(3, 6):\n print(x)",
"4.Baue einen For-Loop, indem Du alle geraden Zahlen ausdruckst, die tiefer sind als 237.",
"numbers = [\n 951, 402, 984, 651, 360, 69, 408, 319, 601, 485, 980, 507, 725, 547, 544,\n 615, 83, 165, 141, 501, 263, 617, 865, 575, 219, 390, 984, 592, 236, 105, 942, 941,\n 386, 462, 47, 418, 907, 344, 236, 375, 823, 566, 597, 978, 328, 615, 953, 345,\n 399, 162, 758, 219, 918, 237, 412, 566, 826, 248, 866, 950, 626, 949, 687, 217,\n 815, 67, 104, 58, 512, 24, 892, 894, 767, 553, 81, 379, 843, 831, 445, 742, 717,\n 958, 609, 842, 451, 688, 753, 854, 685, 93, 857, 440, 380, 126, 721, 328, 753, 470,\n 743, 527\n]\n\nfor x in numbers:\n if x in range(237):\n if (x % 2 == 0):\n print(x)\n\n#Lösung:",
"5.Addiere alle Zahlen in der Liste",
"sum(numbers)\n\n#Lösung:",
"6.Addiere nur die Zahlen, die gerade sind",
"numbers = [\n 951, 402, 984, 651, 360, 69, 408, 319, 601, 485, 980, 507, 725, 547, 544,\n 615, 83, 165, 141, 501, 263, 617, 865, 575, 219, 390, 984, 592, 236, 105, 942, 941,\n 386, 462, 47, 418, 907, 344, 236, 375, 823, 566, 597, 978, 328, 615, 953, 345,\n 399, 162, 758, 219, 918, 237, 412, 566, 826, 248, 866, 950, 626, 949, 687, 217,\n 815, 67, 104, 58, 512, 24, 892, 894, 767, 553, 81, 379, 843, 831, 445, 742, 717,\n 958, 609, 842, 451, 688, 753, 854, 685, 93, 857, 440, 380, 126, 721, 328, 753, 470,\n 743, 527\n]\n\nnew_list=[]\nfor elem in numbers:\n if elem % 2 == 0:\n new_list.append(elem)\nsum(new_list)",
"7.Drucke mit einem For Loop 5 Mal hintereinander Hello World aus",
"for x in range(5): \n print (\"Hello World\")\n\n#Lösung",
"8.Entwickle ein Programm, das alle Nummern zwischen 2000 und 3200 findet, die durch 7, aber nicht durch 5 teilbar sind. Das Ergebnis sollte auf einer Zeile ausgedruckt werden. Tipp: Schaue Dir hier die Vergleichsoperanden von Python an.",
"l=[]\nfor i in range(2000, 3200):\n if (i%7==0) and (i%5>=0):\n l.append(str(i))\n\nprint(','.join(l))",
"9.Schreibe einen For Loop, der die Nummern in der folgenden Liste von int in str verwandelt.",
"lst = range(45,99)\n\nnew_list=[]\nfor elem in lst:\n str(elem)\n new_list.append(str(elem))\nprint(new_list)",
"10.Schreibe nun ein Programm, das alle Ziffern 4 mit dem Buchstaben A ersetzte, alle Ziffern 5 mit dem Buchtaben B.",
"newnewlist = []\nfor elem in new_list:\n if '4' in elem:\n elem = elem.replace('4', 'A')\n if '5' in elem:\n elem = elem.replace('5', 'B')\n newnewlist.append(elem)\n\nnewnewlist"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
NelisW/ComputationalRadiometry | 03-Introduction-to-Radiometry.ipynb | mpl-2.0 | [
"3 Brief Introduction to Radiometry\nThis notebook forms part of a series on computational optical radiometry \nThe date of this document and module versions used in this document are given at the end of the file.\nFeedback is appreciated: neliswillers at gmail dot com.\nOverview",
"from IPython.display import display\nfrom IPython.display import Image\nfrom IPython.display import HTML\n",
"The pyradi toolkit is a Python toolkit to perform optical and infrared computational radiometry (flux flow) calculations. \nRadiometry is the measurement and calculation of electromagnetic flux transfer for systems operating in the spectral region ranging from ultraviolet to microwaves. Indeed, these principles can be applied to electromagnetic radiation of any wavelength. This book only considers ray-based radiometry for incoherent radiation fields.\nThe briefly summarised information in this notebook is taken from my book, see the book for more details.",
"display(Image(filename='images/PM236.jpg')) ",
"Electromagnetic radiation can be modeled as a number of different phenomena: rays, electromagnetic waves, wavefronts, or particles. All of these models are mathematically related. The appropriate model to use depends on the task at hand. Either the electromagnetic wave model(developed by Maxwell) or the particle model (developed by Einstein) are used when most appropriate. The part of the electromagnetic spectrum normally considered in optical radiometry is as follows:",
"display(Image(filename='images/radiometry03.png')) ",
"The photon is a massless elementary particle and acts as the energy carrier for the electromagnetic wave.\nPhoton particles have discrete energy quanta proportional to the frequency of the electromagnetic energy, $Q = h\\nu = hc/\\lambda$, where $h$ is Planck's constant.\nDefinitions\nThe following figure (expanded from Pinson) and table defines the key radiometry units. The difference operator '$d$' is used to denote 'a small quantity of ...'. This 'small quantity' of one variable is almost always related to a 'small quantity' of another variable in some physical dependency. For example, irradiance is defined as $E=d\\Phi/dA$, which means that a small amount of flux $d\\Phi$ impinges on a small area $dA$, resulting in an irradiance of $E$. 'Small' is defined as the extent or domain over which the quantity, or any of its dependent quantities, does not vary significantly. Because any finite-sized quantity varies over a finite-sized domain, the $d$ operation is only valid over an infinitely small domain $dA=\\lim_{\\Delta A \\to 0}\\Delta A$. The difference operator, written in the form of a differential such as $E=d\\Phi/dA$, is not primarily meant to mean differentiation in the mathematical sense. Rather, it is used to indicate something that can be integrated (or summed).\nIn practice, it is impossible to consider infinitely many, infinitely small domains. Following the reductionist approach, any real system can, however, be assembled as the sum of a set of these small domains, by integration over the physical domain as in $A=\\int dA$. Hence, the 'small-quantity' approach proves very useful to describe and understand the problem, whereas the real-world solution can be obtained as the sum of a set of such small quantities. In almost all of the cases in this notebook, it is implied that such 'small-quantity' domains will be integrated (or summed) over the (larger) problem domain.\nPhoton rates are measured in quanta per second.\nThe 'second' is an SI unit, whereas quanta is a unitless count: the number of photons. Photon rate therefore has units of [1/s] or [s$^{-1}$]. This form tends to lose track of the fact that the number of quanta per second is described. The notebook may occasionally contain units of the form [q/s] to emphasize the photon count. In this case, the 'q' is not a formal unit, it is merely a reminder of 'counts.' In dimensional analysis the 'q' is handled the same as any other unit. \nRadiometric quantities can be defined in terms of three different but related units: radiant power (watts), photon rates (quanta per second), or photometric luminosity (lumen). Photometry is radiometry applied to human visual perception.\nThe conversion from radiometric to photometric quantities is\ncovered in more detail in my book. It is important to realize\nthat the underlying concepts are the same, irrespective of the nature of\nthe quantity. All of the derivations and examples presented in this book are equally valid for radiant, photon, or photometric quantities.\nFlux is the amount of optical power, a photon rate, or photometric luminous flux, flowing between two surfaces. There is always a source area and a receiving area, with the flux flowing between them. All quantities of flux are denoted by the symbol $\\Phi$. The units are [W], [q/s], or [lm], depending on the nature of the quantity.\nIrradiance (areance) is the areal density of flux on the receiving surface area. The flux flows inward onto the surface with no regard to incoming angular density. 
All quantities of irradiance are denoted by the symbol $E$. The units are [W/m$^2$], [q/(s$\\cdot$m$^2$)], or [lm/m$^2$], depending on the nature of the quantity.\nExitance (areance)\nis the areal density of flux on the source surface\narea. The flux flows outward from the surface with no regard to angular density. The exitance leaving a surface\ncan be due to reflected light, transmitted light, emitted light, or any combination thereof. All quantities of exitance are denoted by the\nsymbol $M$. The units are [W/m$^2$], [q/(s$\\cdot$m$^2$)], or [lm/m$^2$], depending on the\nnature of the quantity.\nIntensity (pointance) is the density of flux over solid angle. The flux flows outward from the source with no regard for surface area. Intensity is denoted by the symbol $I$. The human perception of a point source (e.g., a star at long range) 'brightness' is an intensity measurement. The units are [W/sr], [q/(s$\\cdot$sr)], or [lm/sr], depending on the nature of the quantity.\nRadiance (sterance) is the density of flux per unit source surface area and unit solid angle. \nRadiance is a property of the electromagnetic field irrespective of spatial location (in a lossless medium). For a radiating surface, the radiance may comprise transmitted light, reflected light, emitted light, or any combination thereof. The radiance in a field created by a Lambertian source is conserved: the radiance is constant anywhere in space, also on the receiving surface. All radiance quantities are denoted by the symbol $L$. The human perception of 'brightness' of a large surface can be likened to a radiance experience (beware of the nonlinear response in the eye, however). The units are\n[W/(m$^2$ $\\cdot$sr)], [q/(s$\\cdot$m$^2$ $\\cdot$sr)], or [lm/(m$^2$ $\\cdot$sr)], depending on the nature of the\nquantity.",
"display(Image(filename='images/radiometry01.png')) \n\ndisplay(Image(filename='images/radiometry02.png')) ",
"Spectral quantities\nSee notebook 4 in this series, Introduction to computational radiometry with pyradi, for a detailed description of spectral quantities.\nThree spectral domains are commonly used: wavelength $\\lambda$ in [m], frequency $\\nu$ in [Hz], and wavenumber $\\tilde{\\nu}$ in [cm$^{-1}$] (the number of waves that will fit into a 1-cm length). \nSpectral quantities indicate an amount of the quantity within a small spectral width $d\\lambda$ around the value of $\\lambda$: it is a spectral density. Spectral density quantity symbols are subscripted with a $\\lambda$ or $\\nu$, i.e., $L_\\lambda$ or $L_\\nu$. The dimensional units of a spectral density quantity are indicated as [$\\mu$m$^{-1}$] or [(cm$^{-1})^{-1}$], i.e., [W/(m$^2$ $\\cdot$sr$\\cdot$ $\\mu$m)].\nThe relationship between the wavelength and wavenumber spectral domains is $\\tilde{\\nu}=10^4/\\lambda$ , where $\\lambda$ is in units of $\\mu$m. The conversion of a spectral density quantity such as [W/(m$^2$ $\\cdot$sr$\\cdot$cm$^{-1}$)] requires the derivative, %$d{\\tilde{\\nu}}=-\\frac{10^4}{\\lambda^2}d\\lambda=-\\frac{{\\tilde{\\nu}}^2}{10^4}d\\lambda$.\n$d{\\tilde{\\nu}}=-10^4d\\lambda /\\lambda^2=-\\tilde{\\nu}^2d\\lambda/10^4$.\nThe derivative relationship converts between the spectral widths, and hence the spectral densities, in the two respective domains.\nThe conversion from a wavelength spectral density quantity to a wavenumber spectral density quantity is \n$d{}L_{\\tilde{\\nu}}=d{}L_\\lambda \\lambda^2/10^4=d{} L_\\lambda 10^4/\\tilde{\\nu}^2$.\nSpectral quantities denote the amount in a small spectral width $d\\lambda$ around a wavelength $\\lambda$. It follows that the total quantity over a spectral range can be determined by integration (summation) over the spectral range of interest:\n$$\nL=\\int_{\\lambda_1}^{\\lambda_2}L_\\lambda d\\lambda.\n$$\nThe above integral satisfies the requirements of dimensional analysis (see my book) because the units of $L_\\lambda$ are [W/(m$^2$ $\\cdot$sr$\\cdot$ $\\mu$m)], whereas $d\\lambda$ has the units of [$\\mu$m], and $L$ has units of [W/(m$^2$ $\\cdot$sr)].\nSolid Angle\nThe geometric solid angle $\\omega$ of any arbitrary surface $P$ from the reference point is given by\n$$\n\\omega=\\int!!!!\\int^{P} \\frac{d^2 P \\cos\\theta_1}{R^2},\n$$\nwhere $d^2 P \\cos\\theta_1$ is the projected surface area of the surface $P$ in the direction of the reference point, and $R$ is the distance from $d^2 P$ to the reference point. The integral is independent of the viewing direction $(\\theta_0, \\alpha_0)$ from the reference point. Hence, a given area at a given distance will always have the same geometric solid angle irrespective of the direction of the area.\nThe geometric solid angle of a cone is $\\omega=4\\pi\\sin^2\\left(\\frac{\\Theta}{2}\\right)$, where $\\Theta$ is the cone half-apex angle.\nThe projected solid angle $\\Omega$ of any arbitrary surface $P$ from the reference area $dA_0$ is given by\n$$\n\\Omega=\\int!!!!\\int^{P} \\frac{d^2 P \\cos\\theta_0 \\cos\\theta_1}{R^2},\n$$\nwhere $d^2 P \\cos\\theta_1$ is the projected surface area of the surface $P$ in the direction of the reference area, and $R$ is the distance from $d^2 P$ to the reference area. The integral depends on the viewing direction $(\\theta_0, \\alpha_0)$ from the reference area, by the projected area ($dA_0\\cos\\theta_0$) of $dA_0$ in the direction of $d^2 P$.\nHence, a given area at a given distance will always have a different projected solid angle in different directions. 
\nThe projected solid angle of a cone is $\\omega=\\pi\\sin^2\\left(\\Theta\\right)$, where $\\Theta$ is the cone half-apex angle.",
"display(Image(filename='images/radiometry04.png')) ",
"Lambertian radiators\nA Lambertian source is, by definition, one whose radiance is completely independent of viewing angle. Many (but not all) rough and natural surfaces produce radiation whose radiance is approximately independent of the angle of observation. These surfaces generally have a rough texture at microscopic scales. Planck-law blackbody radiators are also Lambertian sources (see my book). Any Lambertian radiator is completely described by its scalar radiance magnitude only, with no angular dependence in radiance.\nThe relationship between the exitance and radiance for such a Lambertian surface can be easily derived. If the flux radiated from a Lambertian surface $\\Phi$ [W] is known, it is a simple matter to calculate the exitance $M=\\Phi/A$ [W/m$^2$], where $A$ is the radiating surface area. The exitance of a Lambertian radiator is related to radiance by the projected solid angle of $\\pi$ sr, not the geometric solid angle of $2\\pi$ sr as one might expect. The details are given in my book.\nConservation of radiance\nRadiance is conserved for flux from a Lambertian surface propagation through a lossless optical\nmedium. Consider the construction below: two elemental areas $dA_0$ and $dA_1$ are separated by a distance $R_{01}$, with the angles between the normal vector of each surface and the line of sight given by $\\theta_0$ and $\\theta_1$. A total flux of $d^2\\Phi$ is flowing through both the surfaces. It can be shown (see my book) that for a Lambertian radiator the radiance in an arbitrary $dA_n$ is the same as the radiance in $dA_1$.\nAs light propagates through mediums with different refractive indices $n$ such as air, water, glass, etc., the entity called basic radiance, defined by $L/n^2$, is invariant. It can be shown that for light propagating from a medium with refractive index $n_1$ to a medium with refractive index $n_2$, the basic radiance is conserved: \n$$\n\\frac{L_1}{n_1^2}=\\frac{L_2}{n_2^2}.\n$$",
"display(Image(filename='images/radiometry05.png')) ",
"Flux transfer through lossless and lossy mediums\nA lossless medium is defined as a medium with no losses between the source and the receiver, such as a complete vacuum. This implies that no absorption, scattering, or any other attenuating mechanism is present in the medium. For a lossless medium the flux that flow between both $dA_0$ and $dA_1$ is given by \n$$\nd^2 \\Phi= \\frac{L_{01}\\,d A_0\\,\\cos\\theta_0\\, d A_1\\,\\cos\\theta_1}{R_{01}^2}.\n$$\nIf the medium has loss, the loss effect is accounted for by including a 'transmittance' factor $\\tau_{01}=\\Phi_1/\\Phi_0=L_{10}/L_{01}$, i.e., the fraction of the flux from $A_0$ that arrives at $A_1$, then \n$$\nd^2 \\Phi= \\frac{L_{01}\\,d A_0\\,\\cos\\theta_0\\, d A_1\\,\\cos\\theta_1 \\tau_{01}}{R_{01}^2}.\n$$\nSources and receivers of arbitrary shape\nThe above equation calculates the flux flowing between two infinitely small areas. The flux flowing between two arbitrary shapes can be calculated by integrating the equation over the source surface and the receiving surface. In the general case, the radiance $L$ cannot be assumed constant over $A_0$, introducing the spatial radiance distribution $L(dA_{0})$ as a factor into the spatial integral.\nLikewise, the medium transmittance between any two areas $dA_{0}$ and $dA_{1}$ varies with the spatial locations of $dA_{0}$ and $dA_{1}$ --- hence $\\tau_{01}(dA_{0},dA_{1})$ should also be included in the spatial integral.\nThe integral can be performed over any arbitrary shape, as shown in the following figure, supporting the solution with complex geometries. Clearly matters such as obscuration and occlusion should be considered when performing this integral:\n$$\n \\Phi=\\int_{A_0}\\int_{A_1}\n\\frac{L(dA_{0})\\,dA_0\\,\\cos\\theta_0\\, dA_1\\,\\cos\\theta_1\\,\\tau_{01}(dA_{0},dA_{1})}{R_{01}^2}.\n$$",
"display(Image(filename='images/radiometry06.png')) ",
"Multi-spectral flux transfer\nThe optical power leaving a source undergoes a succession of scaling or 'spectral filtering' processes as the flux propagates through the system, as shown below. This filtering varies with wavelength.\nExamples of such filters are source emissivity, atmospheric transmittance, optical filter transmittance, and detector responsivity. The multi-spectral filter approach described here is conceptually simple but fundamental to the calculation of radiometric flux.",
"display(Image(filename='images/radiometry07.png')) ",
"Extend the above flux-transfer equation for multi-spectral calculations by noting that over a spectral width $d\\lambda$ the radiance is given by $L = L_\\lambda d\\lambda$:\n$$\nd^3 \\Phi_\\lambda=\n\\frac{L_{01\\lambda}\\,dA_0\\;\\cos\\theta_0\\,dA_1\\;\\cos\\theta_1\n\\;\\tau_{01}\\,d\\lambda}{R_{01}^2},\n$$\nwhere $d^3\\Phi_\\lambda$ is the total flux in [W] or [q/s] flowing in a spectral width $d\\lambda$ at wavelength $\\lambda$, from a radiator with radiance $L_{0\\lambda}$ with units [W/(m$^2$ $\\cdot$sr$\\cdot$ $\\mu$m)] and projected surface area $dA_0\\cos\\theta_0$, through a receiver with projected surface area $dA_1\\cos\\theta_1$ at a distance $R_{01}$, with a transmittance of $\\tau_{01}$ between the two surfaces. The transmittance $\\tau_{01}$ now includes all of the spectral variables in the path between the source and the receiver.\nTo determine the total flux flowing from elemental area $dA_0$ through $dA_1$ over a wide spectral width, divide the wide spectral band into a large number $N$ of narrow widths $\\Delta\\lambda$ at wavelengths $\\lambda_n$ and add the flux for all of these narrow bandwidths together as follows:\n$$\nd^2 \\Phi=\n\\sum_{n=0}^{N}\n\\left(\n\\frac{L_{01\\lambda_n}\n\\,dA_{0}\\,\\cos\\theta_0\\,\n\\,dA_{1}\\,\\cos\\theta_1\\,\n\\tau_{01\\lambda_n}\n\\Delta\\lambda}{R_{01}^2}\n\\right).\n$$\nBy the Riemann--Stieltjes theorem in reverse, if now $\\Delta\\lambda\\rightarrow 0$ and $N\\rightarrow\\infty$, the summation becomes the integral\n$$\nd^2 \\Phi=\n\\int_{\\lambda_1}^{\\lambda_2}\n\\frac{L_{01\\lambda}\n\\,dA_{0}\\,\\cos\\theta_0\\,\n\\,dA_{1}\\,\\cos\\theta_1 \\,\\tau_{01\\lambda}d\\lambda}{R_{01}^2}\\ .\n$$\nThis equation describes the total flux at all wavelengths in the spectral range $\\lambda_1$ to $\\lambda_2$ passing\nthrough the system. This equation is developed further in my book.\nConclusion\nThe flux transfer between any two arbitrary surfaces, over any spectral band can be calculated by\n$$\n\\Phi=\n\\int_{A_0}\n\\int_{A_1}\n\\int_{\\lambda_1}^{\\lambda_2}\n\\frac{L_{01\\lambda}\n\\,dA_{0}\\,\\cos\\theta_0\\,\n\\,dA_{1}\\,\\cos\\theta_1 \\,\\tau_{01\\lambda}d\\lambda}{R_{01}^2}\\ .\n$$\nIn practice these integrals are performed by finite sums of small elemental areas and spectral widths. \nAny arbitrary problem can be solved using this approach. For a simple example see the flame sensor and the other pages of this notebook series.\nPython and module versions, and dates",
"try:\n import pyradi.ryutils as ryutils\n print(ryutils.VersionInformation('matplotlib,numpy,pyradi,scipy,pandas'))\nexcept:\n print(\"pyradi.ryutils not found\")"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
getsmarter/bda | module_2/M2_NB1_SourcesOfData.ipynb | mit | [
"<div align=\"right\">Python 3.6 Jupyter Notebook</div>\n\nSources of data\nYour completion of the notebook exercises will be graded based on your ability to do the following:\n\nApply: Are you able to execute code (using the supplied examples) that performs the required functionality on supplied or generated data sets? \nEvaluate: Are you able to interpret the results and justify your interpretation based on the observed data?\nCreate: Are you able to produce notebooks that serve as computational records of a session and can be used to share your insights with others? \n\nNotebook objectives\nBy the end of this notebook you will be expected to:\n\n\nUse \"trusted\" and \"untrusted\" data sources to enrich your analysis;\n and\nUnderstand the implications of the five Rs on data quality from external sources.\n\n\nList of exercises\n\n\nExercise 1: Enriching analysis with data from \"trusted\" sources.\nExercise 2: Pros and cons of using data from \"untrusted\" sources.\n\n\nNotebook introduction\nData collection is expensive and time consuming, as Arek Stopczynski alluded to in this module's video content.\nIn some cases, you will be lucky enough to have existing datasets available to support your analysis. You may have datasets from previous analyses, access to data providers, or curated datasets from your organization. In many cases, however, you will not have access to the data that you require to support your analysis, and you will have to find alternate mechanisms. \nThe data quality requirements will differ based on the problem you are trying to solve. Taking the hypothetical case of geocoding a location, which was introduced in Module 1, the accuracy of the geocoded location does not need to be exact when you are simply trying to plot the locations of students on a map. Geocoding a location for an automated vehicle to turn off the highway, on the other hand, has an entirely different accuracy requirement.\n\nNote:\nThose of you who work in large organizations may be privileged enough to have company data governance and data quality initiatives. These efforts and teams can often add significant value both in terms of supplying company-standard curated data, and making you aware of the internal policies that need to be adhered to.\n\nAs a data analyst or data scientist, it is important to be aware of the implications of your decisions. You need to choose the appropriate set of tools and methods to deal with sourcing and supplying data.\nTechnology has matured in recent years, and allowed access to a host of sources of data that can be used in your analyses. In many cases you can access free resources, or obtain (at a cost) data that has been curated, is at a lower latency, or comes with a service-level agreement. Some governments have even made datasets publicly available.\nYou have been introduced to OpenPDS, in the video content, where the focus shifts from supplying raw data -- where the provider needs to apply security principles before sharing datasets -- to supplying answers rather than data. OpenPDS allows users to collect, store, and control access to their data, while also allowing them to protect their privacy. In this way, users still have ownership of their data, as defined by the new deal on data. \nThis notebook demonstrates another example of how to source external data to enrich your analyses. 
The Python ecosystem contains a rich set of tools and libraries that can help you to exploit the available resources.\nThis course will not go into detail regarding the various options to source and interact with social data from sources such as Twitter, LinkedIn, Facebook, and Google Plus. However, you should be able to find libraries that will assist you in sourcing and manipulating these sources of data.\nTwitter data is a good example because, depending on the options selected by the Twitter user, every tweet contains not just the message or content that most users are aware of. It also contains a view of the network of the person, home location, location from which the message was sent, and a number of other features that can be very useful when studying networks around a topic of interest. Professor Alex Pentland pointed out the difference in what you share with the world (how you want to be seen) compared to what you actually do and believe (what you commit to). Be sure to keep these concepts in mind when you start exploring the additional sources of data. Those who are interested in the topic can start to explore the options by visiting the Twitter library on PyPI. \nStart with the five Rs introduced in Module 1, and consider the following questions:\n- How accurate does my dataset need to be?\n- How often should the dataset be updated?\n- What happens if the data provider is no longer available?\n- Do I need to adhere to any organizational standards to ensure consistent reporting or integration with other applications?\n- Are there any implications to getting the values wrong?\nYou may need to start with “untrusted” data sources as a means of validating that your analysis can be executed. Once this is done, you can replace the untrusted components with trusted and curated datasets, as your analysis matures.\n<div class=\"alert alert-warning\">\n<b>Note</b>:<br>\nIt is strongly recommended that you save and checkpoint after applying significant changes or completing exercises. This allows you to return the notebook to a previous state should you wish to do so. On the Jupyter menu, select \"File\", then \"Save and Checkpoint\" from the dropdown menu that appears.\n</div>\n\nLoad libraries and set options",
"import pandas as pd\nfrom pandas_datareader import data, wb\nimport numpy as np\nimport matplotlib.pylab as plt\nimport matplotlib\nimport folium\nimport geocoder\nimport wikipedia\n\n#set plot options\n%matplotlib inline\nmatplotlib.rcParams['figure.figsize'] = (10, 8)",
"1. Source additional data from public sources\nThis section will provide short examples to demonstrate the use of public data sources in your notebooks.\n1.1 World Bank\nThis example demonstrates how to source data from an external source to enrich your existing analyses. You will need to combine the data sources and add additional features to the example of student locations plotted on the world map in Module 1's Notebook 3.\nThe specific indicator chosen has little relevance other than to demonstrate the process that you will typically follow in completing your projects. Population counts, from an untrusted source, will be added to your map, and you will use scaling factors combined with the number of students, and population size of the country to demonstrate adding external data with minimal effort.\nThis example makes use of the pandas-datareader module, which supports remote data access. This library has support for extracting data from various internet sources into a Pandas DataFrame. Currently, the supported sources are:\n\nGoogle Finance\nEnigma\nQuandl\nSt.Louis FED (FRED)\nKenneth French’s data library\nWorld Bank\nOECD\nEurostat\nThrift Savings Plan\nNasdaq Trader symbol definitions.\n\nThis example focuses on enriching your student dataset from Module 1, using the World Bank's Development Indicators. In the following sections, you will use the data you saved in a previous exercise, add corresponding indicators for each country in the data, and find the mean location for all observed coordinates per country.\nPrepare the student data\nIn the next code cell, you will load the data from disk, apply the groupby method to group the data by country and, for each group, find the total student count and the average of their GPS coordinates. The final dataset containing the country, student count, and averaged GPS coordinates is saved as a separate DataFrame variable.",
"# Load the grouped_geocoded dataset from Module 1.\ndf1 = pd.read_csv('data/grouped_geocoded.csv',index_col=[0])\n\n# Prepare the student location dataset for use in this example.\n# We use the geometrical center by obtaining the mean location for all observed coordinates per country.\ndf2 = df1.groupby('country').agg({'student_count': [np.sum], 'lat': [np.mean], \n 'long': [np.mean]}).reset_index()\n# Reset the index.\ndf3 = df2.reset_index(level=1, drop=True)\n\n# Review the data\ndf3.head()",
"The column label index has multiple levels. Although this is useful metadata, it would be better to drop multilevel labeling and, instead, rename the columns to capture this information.",
"df3.columns = df3.columns.droplevel(1)\ndf3.rename(columns={'lat': \"lat_mean\", \n 'long': \"long_mean\"}, inplace=True)\ndf3.head()",
"Get and prepare the external dataset from the World Bank\n\nRemember you can use \"wb.download?\" (without the quotation marks) in a separate code cell to get help on the pandas-datareader method for remote data access of the World Bank Indicators. Refer to the pandas-datareader remote data access documentation for more detailed help.",
"# After running this cell you can close the help by clicking on close (`X`) button in the upper right corner\nwb.download?\n\n# The selected indicator is the world population, \"SP.POP.TOTL\", for the years from 2008 to 2016 \nwb_indicator = 'SP.POP.TOTL'\nstart_year = 2008\nend_year = 2016\n\ndf4 = wb.download(indicator = wb_indicator,\n country = ['all'],\n start = start_year,\n end = end_year)\n\n# Review the data\ndf4.head()",
"The data set contains entries for multiple years. The focus of this example is the entry corresponding to the latest year of data available for each country.",
"df5 = df4.reset_index()\nidx = df5.groupby(['country'])['year'].transform(max) == df5['year']",
"You can now extract only the values that correspond to the most recent year available for each country.",
"# Create a new dataframe where entries corresponds to maximum year indexes in previous list.\ndf6 = df5.loc[idx,:]\n\n# Review the data\ndf6.head()",
"Now merge your dataset with the World Bank data.",
"# Combine the student and population datasets.\ndf7 = pd.merge(df3, df6, on='country', how='left')\n\n# Rename the columns of our merged dataset and assign to a new variable.\ndf8 = df7.rename(index=str, columns={('SP.POP.TOTL'): \"PopulationTotal_Latest_WB\"})\n\n# Drop NAN values.\ndf8 = df8[~df8.PopulationTotal_Latest_WB.isnull()]\n\n# Reset index.\ndf8.reset_index(inplace = True)\n\ndf8.head()",
"Let's plot the data.\n\nNote:\nThe visualization below does not have any meaning. The scaling factors selected are used to demonstrate the difference in population sizes, and number of students on this course, per country.",
"# Plot the combined dataset\n\n# Set map center and zoom level\nmapc = [0, 30]\nzoom = 2\n\n# Create map object.\nmap_osm = folium.Map(location=mapc,\n tiles='Stamen Toner',\n zoom_start=zoom)\n\n# Plot each of the locations that we geocoded.\nfor j in range(len(df8)):\n # Plot a blue circle marker for country population.\n folium.CircleMarker([df8.lat_mean[j], df8.long_mean[j]],\n radius=df8.PopulationTotal_Latest_WB[j]/20000000,\n popup='Population',\n color='#3186cc',\n fill_color='#3186cc',\n ).add_to(map_osm)\n # Plot a red circle marker for students per country.\n folium.CircleMarker([df8.lat_mean[j], df8.long_mean[j]],\n radius=df8.student_count[j]/50,\n popup='Students',\n color='red',\n fill_color='red',\n ).add_to(map_osm)\n# Show the map.\nmap_osm",
"<br>\n<div class=\"alert alert-info\">\n<b>Exercise 1 Start.</b>\n</div>\n\nInstructions\n\n\nReview the available indicators in the World Bank dataset, and select an indicator of your choice (other than the population indicator). \nUsing a copy of the code (from above) in the cells below, replace the population indicator with your selected indicator. Instead of returning the most recent value for your selected indicator, compute the mean and standard deviation for the years from 2006 to 2016. You will need to use the Pandas groupby().agg() chained methods, together with the following functions from NumPy: \nnp.mean\nnp.std.\n\nYou can review the data preparation section for the student data above for an example. \nAdd comments (lines starting with a \"#\") giving a brief description of your view on the observed results. Make sure to include, in one or two sentences in each case, the following:\n 1. A clear description why you selected the indicator.\n - What your expectation was before including the data.\n - What you think the results may indicate.\nImportant:\n- Only the external data needs to be prepared. You do not need to prepare the student dataset again. Just use the student data that you prepared above and join this to the new dataset you sourced. \n- Only plot the mean values for your selected indicator (not the standard deviation values).",
"# Your solution here\n# Note: Break your logic using separate cells to break code into units that can be executed \n# should you need to review individual steps.\n",
"<br>\n<div class=\"alert alert-info\">\n<b>Exercise 1 End.</b>\n</div>\n\n\nExercise complete:\nThis is a good time to \"Save and Checkpoint\".\n\n1.2 Using Wikipedia as a data source\nTo demonstrate how quickly data can be sourced from public, \"untrusted\" data sources, you have been supplied with a number of sample scripts below. While these sources contain distinctly rich datasets, which you can acquire with minimal effort, they can be amended by anyone, and may not be 100% accurate. In some cases, you will have to manually transform the datasets, while in others, you might be able to use pre-built libraries.\nExecute the code cells below before completing Exercise 2.",
"# Display MIT page summary from Wikipedia \nprint(wikipedia.summary(\"MIT\"))\n\n# Display a single sentence summary.\nwikipedia.summary(\"MIT\", sentences=1)\n\n# Create variable page that contains the wikipedia information.\npage = wikipedia.page(\"List of countries and dependencies by population\")\n\n# Display the page title.\npage.title\n\n# Display the page URL. This can be utilised to create links back to descriptions.\npage.url",
"<br>\n<div class=\"alert alert-info\">\n<b>Exercise 2 Start.</b>\n</div>\n\nInstructions\n\nAfter executing the cells for the Wikipedia example in Section 1.2, think about the potential implications of using this \"public\" and, in many cases, \"untrusted\" data source when doing analysis or creating data products.\nPlease compile and submit for evaluation a list of three pros and three cons.\n\nNote: Your submission can be a simple markdown list using the syntax provided below.\n\n\nAdd your answer in this markdown cell. The contents of this cell should be replaced with your answer.\nSubmit as a list:\n- Pros: \n - Description 1\n - Description 2\n - Description 3\n- Cons:\n - Description 1\n - Description 2\n - Description 3\n<br>\n<div class=\"alert alert-info\">\n<b>Exercise 2 End.</b>\n</div>\n\n\nExercise complete:\nThis is a good time to \"Save and Checkpoint\".\n\n2. Submit your notebook\nPlease make sure that you:\n- Perform a final \"Save and Checkpoint\";\n- Download a copy of the notebook in \".ipynb\" format to your local machine using \"File\", \"Download as\", and \"IPython Notebook (.ipynb)\"; and\n- Submit a copy of this file to the Online Campus."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
sysid/nbs | cnn/tw_vgg16.ipynb | mit | [
"Using Convolutional Neural Networks\nThis is running on theano!\nWelcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.\nIntroduction to this week's task: 'Dogs vs Cats'\nWe're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): \"State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task\". So if we can beat 80%, then we will be at the cutting edge as at 2013!\nBasic setup\nThere isn't too much to do to get started - just a few simple configuration steps.\nThis shows plots in the web page itself - we always wants to use this when using jupyter notebook:",
"%matplotlib inline",
"Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)",
"#path = \"data/dogscats/\"\npath = \"data/dogscats/sample/\"",
"A few basic libraries that we'll need for the initial exercises:",
"from __future__ import division,print_function\n\nimport os, json\nfrom glob import glob\nimport numpy as np\nnp.set_printoptions(precision=4, linewidth=100)\nfrom matplotlib import pyplot as plt",
"We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.",
"import utils\nimport importlib\nimportlib.reload(utils)\nfrom utils import plots",
"Use a pretrained VGG model with our Vgg16 class\nOur first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (VGG 19) and a smaller, faster model (VGG 16). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.\nWe have created a python class, Vgg16, which makes using the VGG 16 model very straightforward. \nThe punchline: state of the art custom model in 7 lines of code\nHere's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.",
"# As large as you can, but no larger than 64 is recommended. \n# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.\n# batch_size=64\nbatch_size=2\n\n# Import our class, and instantiate\nimport vgg16\nfrom vgg16 import Vgg16\n\nvgg.classes\n\n# %%capture x # ping bug: disconnect -> reconnect kernel workaround\nvgg = Vgg16()\n# Grab a few images at a time for training and validation.\n# NB: They must be in subdirectories named based on their category\nbatches = vgg.get_batches(path+ 'train', batch_size=batch_size)\nbatches.nb_class\n\nval_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)\nvgg.finetune(batches)\nvgg.fit(batches, val_batches, nb_epoch=1, verbose=1)\n\n#x.show()",
"The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.\nLet's take a look at how this works, step by step...\nUse Vgg16 for basic image recognition\nLet's start off by using the Vgg16 class to recognise the main imagenet category for each image.\nWe won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.\nFirst, create a Vgg16 object:",
"vgg = Vgg16()",
"Vgg16 is built on top of Keras (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in batches, using a fixed directory structure, where images from each category for training must be placed in a separate folder.\nLet's grab batches of data from our training folder:",
"batches = vgg.get_batches(path+'train', batch_size=4)",
"(BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)\nBatches is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.",
"imgs,labels = next(batches)\n\nimgs[0].shape\nlabels",
"As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and in the second position if it's a dog. This approach to encoding categorical variables, where an array containing just a single 1 in the position corresponding to the category, is very common in deep learning. It is called one hot encoding. \nThe arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.",
"plots(imgs, titles=labels)",
"We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.",
"vgg.predict(imgs, True)",
"The category indexes are based on the ordering of categories used in the VGG model - e.g here are the first four:",
"vgg.classes[:4]",
"(Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)\nUse our Vgg16 class to finetune a Dogs vs Cats model\nTo change our model so that it outputs \"cat\" vs \"dog\", instead of one of 1,000 very specific categories, we need to use a process called \"finetuning\". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.\nHowever, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call fit() after calling finetune().\nWe create our batches just like before, and making the validation set available as well. A 'batch' (or mini-batch as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.",
"batch_size=64\n\nbatches = vgg.get_batches(path+'train', batch_size=batch_size)\nval_batches = vgg.get_batches(path+'valid', batch_size=batch_size)",
"Calling finetune() modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.",
"vgg.finetune(batches)",
"Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)",
"vgg.fit(batches, val_batches, nb_epoch=1)",
"That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.\nNext up, we'll dig one level deeper to see what's going on in the Vgg16 class."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
deepmind/dm_control | tutorial.ipynb | apache-2.0 | [
"dm_control tutorial\n\n\n<p><small><small>Copyright 2020 The dm_control Authors.</small></p>\n<p><small><small>Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at <a href=\"http://www.apache.org/licenses/LICENSE-2.0\">http://www.apache.org/licenses/LICENSE-2.0</a>.</small></small></p>\n<p><small><small>Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.</small></small></p>\n\nThis notebook provides an overview tutorial of DeepMind's dm_control package, hosted at the deepmind/dm_control repository on GitHub.\nIt is adjunct to this tech report.\nA Colab runtime with GPU acceleration is required. If you're using a CPU-only runtime, you can switch using the menu \"Runtime > Change runtime type\".\n<!-- Internal installation instructions. -->\n\nInstalling dm_control on Colab",
"#@title Run to install MuJoCo and `dm_control`\nimport distutils.util\nimport subprocess\nif subprocess.run('nvidia-smi').returncode:\n raise RuntimeError(\n 'Cannot communicate with GPU. '\n 'Make sure you are using a GPU Colab runtime. '\n 'Go to the Runtime menu and select Choose runtime type.')\n\nprint('Installing dm_control...')\n!pip install -q dm_control>=1.0.3.post1\n\n# Configure dm_control to use the EGL rendering backend (requires GPU)\n%env MUJOCO_GL=egl\n\nprint('Checking that the dm_control installation succeeded...')\ntry:\n from dm_control import suite\n env = suite.load('cartpole', 'swingup')\n pixels = env.physics.render()\nexcept Exception as e:\n raise e from RuntimeError(\n 'Something went wrong during installation. Check the shell output above '\n 'for more information.\\n'\n 'If using a hosted Colab runtime, make sure you enable GPU acceleration '\n 'by going to the Runtime menu and selecting \"Choose runtime type\".')\nelse:\n del pixels, suite\n\n!echo Installed dm_control $(pip show dm_control | grep -Po \"(?<=Version: ).+\")",
"Imports\nRun both of these cells:",
"#@title All `dm_control` imports required for this tutorial\n\n# The basic mujoco wrapper.\nfrom dm_control import mujoco\n\n# Access to enums and MuJoCo library functions.\nfrom dm_control.mujoco.wrapper.mjbindings import enums\nfrom dm_control.mujoco.wrapper.mjbindings import mjlib\n\n# PyMJCF\nfrom dm_control import mjcf\n\n# Composer high level imports\nfrom dm_control import composer\nfrom dm_control.composer.observation import observable\nfrom dm_control.composer import variation\n\n# Imports for Composer tutorial example\nfrom dm_control.composer.variation import distributions\nfrom dm_control.composer.variation import noises\nfrom dm_control.locomotion.arenas import floors\n\n# Control Suite\nfrom dm_control import suite\n\n# Run through corridor example\nfrom dm_control.locomotion.walkers import cmu_humanoid\nfrom dm_control.locomotion.arenas import corridors as corridor_arenas\nfrom dm_control.locomotion.tasks import corridors as corridor_tasks\n\n# Soccer\nfrom dm_control.locomotion import soccer\n\n# Manipulation\nfrom dm_control import manipulation\n\n#@title Other imports and helper functions\n\n# General\nimport copy\nimport os\nimport itertools\nfrom IPython.display import clear_output\nimport numpy as np\n\n# Graphics-related\nimport matplotlib\nimport matplotlib.animation as animation\nimport matplotlib.pyplot as plt\nfrom IPython.display import HTML\nimport PIL.Image\n# Internal loading of video libraries.\n\n# Use svg backend for figure rendering\n%config InlineBackend.figure_format = 'svg'\n\n# Font sizes\nSMALL_SIZE = 8\nMEDIUM_SIZE = 10\nBIGGER_SIZE = 12\nplt.rc('font', size=SMALL_SIZE) # controls default text sizes\nplt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title\nplt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels\nplt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels\nplt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels\nplt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize\nplt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title\n\n# Inline video helper function\nif os.environ.get('COLAB_NOTEBOOK_TEST', False):\n # We skip video generation during tests, as it is quite expensive.\n display_video = lambda *args, **kwargs: None\nelse:\n def display_video(frames, framerate=30):\n height, width, _ = frames[0].shape\n dpi = 70\n orig_backend = matplotlib.get_backend()\n matplotlib.use('Agg') # Switch to headless 'Agg' to inhibit figure rendering.\n fig, ax = plt.subplots(1, 1, figsize=(width / dpi, height / dpi), dpi=dpi)\n matplotlib.use(orig_backend) # Switch back to the original backend.\n ax.set_axis_off()\n ax.set_aspect('equal')\n ax.set_position([0, 0, 1, 1])\n im = ax.imshow(frames[0])\n def update(frame):\n im.set_data(frame)\n return [im]\n interval = 1000/framerate\n anim = animation.FuncAnimation(fig=fig, func=update, frames=frames,\n interval=interval, blit=True, repeat=False)\n return HTML(anim.to_html5_video())\n\n# Seed numpy's global RNG so that cell outputs are deterministic. We also try to\n# use RandomState instances that are local to a single cell wherever possible.\nnp.random.seed(42)",
"Model definition, compilation and rendering\nWe begin by describing some basic concepts of the MuJoCo physics simulation library, but recommend the official documentation for details.\nLet's define a simple model with two geoms and a light.",
"#@title A static model {vertical-output: true}\n\nstatic_model = \"\"\"\n<mujoco>\n <worldbody>\n <light name=\"top\" pos=\"0 0 1\"/>\n <geom name=\"red_box\" type=\"box\" size=\".2 .2 .2\" rgba=\"1 0 0 1\"/>\n <geom name=\"green_sphere\" pos=\".2 .2 .2\" size=\".1\" rgba=\"0 1 0 1\"/>\n </worldbody>\n</mujoco>\n\"\"\"\nphysics = mujoco.Physics.from_xml_string(static_model)\npixels = physics.render()\nPIL.Image.fromarray(pixels)",
"static_model is written in MuJoCo's XML-based MJCF modeling language. The from_xml_string() method invokes the model compiler, which instantiates the library's internal data structures. These can be accessed via the physics object, see below.\nAdding DOFs and simulating, advanced rendering\nThis is a perfectly legitimate model, but if we simulate it, nothing will happen except for time advancing. This is because this model has no degrees of freedom (DOFs). We add DOFs by adding joints to bodies, specifying how they can move with respect to their parents. Let us add a hinge joint and re-render, visualizing the joint axis.",
"#@title A child body with a joint { vertical-output: true }\n\nswinging_body = \"\"\"\n<mujoco>\n <worldbody>\n <light name=\"top\" pos=\"0 0 1\"/>\n <body name=\"box_and_sphere\" euler=\"0 0 -30\"> \n <joint name=\"swing\" type=\"hinge\" axis=\"1 -1 0\" pos=\"-.2 -.2 -.2\"/>\n <geom name=\"red_box\" type=\"box\" size=\".2 .2 .2\" rgba=\"1 0 0 1\"/>\n <geom name=\"green_sphere\" pos=\".2 .2 .2\" size=\".1\" rgba=\"0 1 0 1\"/>\n </body>\n </worldbody>\n</mujoco>\n\"\"\"\nphysics = mujoco.Physics.from_xml_string(swinging_body)\n# Visualize the joint axis.\nscene_option = mujoco.wrapper.core.MjvOption()\nscene_option.flags[enums.mjtVisFlag.mjVIS_JOINT] = True\npixels = physics.render(scene_option=scene_option)\nPIL.Image.fromarray(pixels)",
"The things that move (and which have inertia) are called bodies. The body's child joint specifies how that body can move with respect to its parent, in this case box_and_sphere w.r.t the worldbody. \nNote that the body's frame is rotated with an euler directive, and its children, the geoms and the joint, rotate with it. This is to emphasize the local-to-parent-frame nature of position and orientation directives in MJCF.\nLet's make a video, to get a sense of the dynamics and to see the body swinging under gravity.",
"#@title Making a video {vertical-output: true}\n\nduration = 2 # (seconds)\nframerate = 30 # (Hz)\n\n# Visualize the joint axis\nscene_option = mujoco.wrapper.core.MjvOption()\nscene_option.flags[enums.mjtVisFlag.mjVIS_JOINT] = True\n\n# Simulate and display video.\nframes = []\nphysics.reset() # Reset state and time\nwhile physics.data.time < duration:\n physics.step()\n if len(frames) < physics.data.time * framerate:\n pixels = physics.render(scene_option=scene_option)\n frames.append(pixels)\ndisplay_video(frames, framerate)",
"Note how we collect the video frames. Because physics simulation timesteps are generally much smaller than framerates (the default timestep is 2ms), we don't render after each step.\nRendering options\nLike joint visualisation, additional rendering options are exposed as parameters to the render method.",
"#@title Enable transparency and frame visualization {vertical-output: true}\n\nscene_option = mujoco.wrapper.core.MjvOption()\nscene_option.frame = enums.mjtFrame.mjFRAME_GEOM\nscene_option.flags[enums.mjtVisFlag.mjVIS_TRANSPARENT] = True\npixels = physics.render(scene_option=scene_option)\nPIL.Image.fromarray(pixels)\n\n#@title Depth rendering {vertical-output: true}\n\n# depth is a float array, in meters.\ndepth = physics.render(depth=True)\n# Shift nearest values to the origin.\ndepth -= depth.min()\n# Scale by 2 mean distances of near rays.\ndepth /= 2*depth[depth <= 1].mean()\n# Scale to [0, 255]\npixels = 255*np.clip(depth, 0, 1)\nPIL.Image.fromarray(pixels.astype(np.uint8))\n\n#@title Segmentation rendering {vertical-output: true}\n\nseg = physics.render(segmentation=True)\n# Display the contents of the first channel, which contains object\n# IDs. The second channel, seg[:, :, 1], contains object types.\ngeom_ids = seg[:, :, 0]\n# Infinity is mapped to -1\ngeom_ids = geom_ids.astype(np.float64) + 1\n# Scale to [0, 1]\ngeom_ids = geom_ids / geom_ids.max()\npixels = 255*geom_ids\nPIL.Image.fromarray(pixels.astype(np.uint8))\n\n\n#@title Projecting from world to camera coordinates {vertical-output: true}\n\n# Get the world coordinates of the box corners\nbox_pos = physics.named.data.geom_xpos['red_box']\nbox_mat = physics.named.data.geom_xmat['red_box'].reshape(3, 3)\nbox_size = physics.named.model.geom_size['red_box']\noffsets = np.array([-1, 1]) * box_size[:, None]\nxyz_local = np.stack(itertools.product(*offsets)).T\nxyz_global = box_pos[:, None] + box_mat @ xyz_local\n\n# Camera matrices multiply homogenous [x, y, z, 1] vectors.\ncorners_homogeneous = np.ones((4, xyz_global.shape[1]), dtype=float)\ncorners_homogeneous[:3, :] = xyz_global\n\n# Get the camera matrix.\ncamera = mujoco.Camera(physics)\ncamera_matrix = camera.matrix\n\n# Project world coordinates into pixel space. See:\n# https://en.wikipedia.org/wiki/3D_projection#Mathematical_formula\nxs, ys, s = camera_matrix @ corners_homogeneous\n# x and y are in the pixel coordinate system.\nx = xs / s\ny = ys / s\n\n# Render the camera view and overlay the projected corner coordinates.\npixels = camera.render()\nfig, ax = plt.subplots(1, 1)\nax.imshow(pixels)\nax.plot(x, y, '+', c='w')\nax.set_axis_off()",
"MuJoCo basics and named indexing\nmjModel\nMuJoCo's mjModel, encapsulated in physics.model, contains the model description, including the default initial state and other fixed quantities which are not a function of the state, e.g. the positions of geoms in the frame of their parent body. The (x, y, z) offsets of the box and sphere geoms, relative their parent body box_and_sphere are given by model.geom_pos:",
"physics.model.geom_pos",
"The model.opt structure contains global quantities like",
"print('timestep', physics.model.opt.timestep)\nprint('gravity', physics.model.opt.gravity)",
"mjData\nmjData, encapsulated in physics.data, contains the state and quantities that depend on it. The state is made up of time, generalized positions and generalised velocities. These are respectively data.time, data.qpos and data.qvel. \nLet's print the state of the swinging body where we left it:",
"print(physics.data.time, physics.data.qpos, physics.data.qvel)",
"physics.data also contains functions of the state, for example the cartesian positions of objects in the world frame. The (x, y, z) positions of our two geoms are in data.geom_xpos:",
"print(physics.data.geom_xpos)",
"Named indexing\nThe semantics of the above arrays are made clearer using the named wrapper, which assigns names to rows and type names to columns.",
"print(physics.named.data.geom_xpos)",
"Note how model.geom_pos and data.geom_xpos have similar semantics but very different meanings.",
"print(physics.named.model.geom_pos)",
"Name strings can be used to index into the relevant quantities, making code much more readable and robust.",
"physics.named.data.geom_xpos['green_sphere', 'z']",
"Joint names can be used to index into quantities in configuration space (beginning with the letter q):",
"physics.named.data.qpos['swing']",
"We can mix NumPy slicing operations with named indexing. As an example, we can set the color of the box using its name (\"red_box\") as an index into the rows of the geom_rgba array.",
"#@title Changing colors using named indexing{vertical-output: true}\n\nrandom_rgb = np.random.rand(3)\nphysics.named.model.geom_rgba['red_box', :3] = random_rgb\npixels = physics.render()\nPIL.Image.fromarray(pixels)",
"Note that while physics.model quantities will not be changed by the engine, we can change them ourselves between steps. This however is generally not recommended, the preferred approach being to modify the model at the XML level using the PyMJCF library, see below.\nSetting the state with reset_context()\nIn order for data quantities that are functions of the state to be in sync with the state, MuJoCo's mj_step1() needs to be called. This is facilitated by the reset_context() context, please see in-depth discussion in Section 2.1 of the tech report.",
"physics.named.data.qpos['swing'] = np.pi\nprint('Without reset_context, spatial positions are not updated:',\n physics.named.data.geom_xpos['green_sphere', ['z']])\nwith physics.reset_context():\n physics.named.data.qpos['swing'] = np.pi\nprint('After reset_context, positions are up-to-date:',\n physics.named.data.geom_xpos['green_sphere', ['z']])",
"Free bodies: the self-inverting \"tippe-top\"\nA free body is a body with a free joint, with 6 movement DOFs: 3 translations and 3 rotations. We could give our box_and_sphere body a free joint and watch it fall, but let's look at something more interesting. A \"tippe top\" is a spinning toy which flips itself on its head (Wikipedia). We model it as follows:",
"#@title The \"tippe-top\" model{vertical-output: true}\n\ntippe_top = \"\"\"\n<mujoco model=\"tippe top\">\n <option integrator=\"RK4\"/>\n <asset>\n <texture name=\"grid\" type=\"2d\" builtin=\"checker\" rgb1=\".1 .2 .3\" \n rgb2=\".2 .3 .4\" width=\"300\" height=\"300\"/>\n <material name=\"grid\" texture=\"grid\" texrepeat=\"8 8\" reflectance=\".2\"/>\n </asset>\n <worldbody>\n <geom size=\".2 .2 .01\" type=\"plane\" material=\"grid\"/>\n <light pos=\"0 0 .6\"/>\n <camera name=\"closeup\" pos=\"0 -.1 .07\" xyaxes=\"1 0 0 0 1 2\"/>\n <body name=\"top\" pos=\"0 0 .02\">\n <freejoint/>\n <geom name=\"ball\" type=\"sphere\" size=\".02\" />\n <geom name=\"stem\" type=\"cylinder\" pos=\"0 0 .02\" size=\"0.004 .008\"/>\n <geom name=\"ballast\" type=\"box\" size=\".023 .023 0.005\" pos=\"0 0 -.015\" \n contype=\"0\" conaffinity=\"0\" group=\"3\"/>\n </body>\n </worldbody>\n <keyframe>\n <key name=\"spinning\" qpos=\"0 0 0.02 1 0 0 0\" qvel=\"0 0 0 0 1 200\" />\n </keyframe>\n</mujoco>\n\"\"\"\nphysics = mujoco.Physics.from_xml_string(tippe_top)\nPIL.Image.fromarray(physics.render(camera_id='closeup'))",
"Note several new features of this model definition:\n0. The free joint is added with the <freejoint/> clause, which is similar to <joint type=\"free\"/>, but prohibits unphysical attributes like friction or stiffness.\n1. We use the <option/> clause to set the integrator to the more accurate Runge Kutta 4th order.\n2. We define the floor's grid material inside the <asset/> clause and reference it in the floor geom. \n3. We use an invisible and non-colliding box geom called ballast to move the top's center-of-mass lower. Having a low center of mass is (counter-intuitively) required for the flipping behaviour to occur.\n4. We save our initial spinning state as a keyframe. It has a high rotational velocity around the z-axis, but is not perfectly oriented with the world.\n5. We define a <camera> in our model, and then render from it using the camera_id argument to render().\nLet us examine the state:",
"print('positions', physics.data.qpos)\nprint('velocities', physics.data.qvel)",
"The velocities are easy to interpret, 6 zeros, one for each DOF. What about the length-7 positions? We can see the initial 2cm height of the body; the subsequent four numbers are the 3D orientation, defined by a unit quaternion. These normalized four-vectors, which preserve the topology of the orientation group, are the reason that data.qpos can be bigger than data.qvel: 3D orientations are represented with 4 numbers while angular velocities are 3 numbers.",
"#@title Video of the tippe-top {vertical-output: true}\n\nduration = 7 # (seconds)\nframerate = 60 # (Hz)\n\n# Simulate and display video.\nframes = []\nphysics.reset(0) # Reset to keyframe 0 (load a saved state).\nwhile physics.data.time < duration:\n physics.step()\n if len(frames) < (physics.data.time) * framerate:\n pixels = physics.render(camera_id='closeup')\n frames.append(pixels)\n\ndisplay_video(frames, framerate)",
"Measuring values from physics.data\nThe physics.data structure contains all of the dynamic variables and intermediate results produced by the simulation. These are expected to change on each timestep. \nBelow we simulate for 2000 timesteps and plot the state and height of the sphere as a function of time.",
"#@title Measuring values {vertical-output: true}\n\ntimevals = []\nangular_velocity = []\nstem_height = []\n\n# Simulate and save data\nphysics.reset(0)\nwhile physics.data.time < duration:\n physics.step()\n timevals.append(physics.data.time)\n angular_velocity.append(physics.data.qvel[3:6].copy())\n stem_height.append(physics.named.data.geom_xpos['stem', 'z'])\n\ndpi = 100\nwidth = 480\nheight = 640\nfigsize = (width / dpi, height / dpi)\n_, ax = plt.subplots(2, 1, figsize=figsize, dpi=dpi, sharex=True)\n\nax[0].plot(timevals, angular_velocity)\nax[0].set_title('angular velocity')\nax[0].set_ylabel('radians / second')\n\nax[1].plot(timevals, stem_height)\nax[1].set_xlabel('time (seconds)')\nax[1].set_ylabel('meters')\n_ = ax[1].set_title('stem height')",
"PyMJCF tutorial\nThis library provides a Python object model for MuJoCo's XML-based\nMJCF physics modeling language. The\ngoal of the library is to allow users to easily interact with and modify MJCF\nmodels in Python, similarly to what the JavaScript DOM does for HTML.\nA key feature of this library is the ability to easily compose multiple separate\nMJCF models into a larger one. Disambiguation of duplicated names from different\nmodels, or multiple instances of the same model, is handled automatically.\nOne typical use case is when we want robots with a variable number of joints. This is a fundamental change to the kinematics, requiring a new XML descriptor and new binary model to be compiled. \nThe following snippets realise this scenario and provide a quick example of this library's use case.",
"class Leg(object):\n \"\"\"A 2-DoF leg with position actuators.\"\"\"\n def __init__(self, length, rgba):\n self.model = mjcf.RootElement()\n\n # Defaults:\n self.model.default.joint.damping = 2\n self.model.default.joint.type = 'hinge'\n self.model.default.geom.type = 'capsule'\n self.model.default.geom.rgba = rgba # Continued below...\n\n # Thigh:\n self.thigh = self.model.worldbody.add('body')\n self.hip = self.thigh.add('joint', axis=[0, 0, 1])\n self.thigh.add('geom', fromto=[0, 0, 0, length, 0, 0], size=[length/4])\n\n # Hip:\n self.shin = self.thigh.add('body', pos=[length, 0, 0])\n self.knee = self.shin.add('joint', axis=[0, 1, 0])\n self.shin.add('geom', fromto=[0, 0, 0, 0, 0, -length], size=[length/5])\n\n # Position actuators:\n self.model.actuator.add('position', joint=self.hip, kp=10)\n self.model.actuator.add('position', joint=self.knee, kp=10)",
"The Leg class describes an abstract articulated leg, with two joints and corresponding proportional-derivative actuators. \nNote that:\n\nMJCF attributes correspond directly to arguments of the add() method.\nWhen referencing elements, e.g when specifying the joint to which an actuator is attached, the MJCF element itself is used, rather than the name string.",
"BODY_RADIUS = 0.1\nBODY_SIZE = (BODY_RADIUS, BODY_RADIUS, BODY_RADIUS / 2)\nrandom_state = np.random.RandomState(42)\n\ndef make_creature(num_legs):\n \"\"\"Constructs a creature with `num_legs` legs.\"\"\"\n rgba = random_state.uniform([0, 0, 0, 1], [1, 1, 1, 1])\n model = mjcf.RootElement()\n model.compiler.angle = 'radian' # Use radians.\n\n # Make the torso geom.\n model.worldbody.add(\n 'geom', name='torso', type='ellipsoid', size=BODY_SIZE, rgba=rgba)\n\n # Attach legs to equidistant sites on the circumference.\n for i in range(num_legs):\n theta = 2 * i * np.pi / num_legs\n hip_pos = BODY_RADIUS * np.array([np.cos(theta), np.sin(theta), 0])\n hip_site = model.worldbody.add('site', pos=hip_pos, euler=[0, 0, theta])\n leg = Leg(length=BODY_RADIUS, rgba=rgba)\n hip_site.attach(leg.model)\n\n return model",
"The make_creature function uses PyMJCF's attach() method to procedurally attach legs to the torso. Note that at this stage both the torso and hip attachment sites are children of the worldbody, since their parent body has yet to be instantiated. We'll now make an arena with a chequered floor and two lights, and place our creatures in a grid.",
"#@title Six Creatures on a floor.{vertical-output: true}\n\narena = mjcf.RootElement()\nchequered = arena.asset.add('texture', type='2d', builtin='checker', width=300,\n height=300, rgb1=[.2, .3, .4], rgb2=[.3, .4, .5])\ngrid = arena.asset.add('material', name='grid', texture=chequered,\n texrepeat=[5, 5], reflectance=.2)\narena.worldbody.add('geom', type='plane', size=[2, 2, .1], material=grid)\nfor x in [-2, 2]:\n arena.worldbody.add('light', pos=[x, -1, 3], dir=[-x, 1, -2])\n\n# Instantiate 6 creatures with 3 to 8 legs.\ncreatures = [make_creature(num_legs=num_legs) for num_legs in range(3, 9)]\n\n# Place them on a grid in the arena.\nheight = .15\ngrid = 5 * BODY_RADIUS\nxpos, ypos, zpos = np.meshgrid([-grid, 0, grid], [0, grid], [height])\nfor i, model in enumerate(creatures):\n # Place spawn sites on a grid.\n spawn_pos = (xpos.flat[i], ypos.flat[i], zpos.flat[i])\n spawn_site = arena.worldbody.add('site', pos=spawn_pos, group=3)\n # Attach to the arena at the spawn sites, with a free joint.\n spawn_site.attach(model).add('freejoint')\n\n# Instantiate the physics and render.\nphysics = mjcf.Physics.from_mjcf_model(arena)\nPIL.Image.fromarray(physics.render())",
"Multi-legged creatures, ready to roam! Let's inject some controls and watch them move. We'll generate a sinusoidal open-loop control signal of fixed frequency and random phase, recording both video frames and the horizontal positions of the torso geoms, in order to plot the movement trajectories.",
"#@title Video of the movement{vertical-output: true}\n#@test {\"timeout\": 600}\n\nduration = 10 # (Seconds)\nframerate = 30 # (Hz)\nvideo = []\npos_x = []\npos_y = []\ntorsos = [] # List of torso geom elements.\nactuators = [] # List of actuator elements.\nfor creature in creatures:\n torsos.append(creature.find('geom', 'torso'))\n actuators.extend(creature.find_all('actuator'))\n\n# Control signal frequency, phase, amplitude.\nfreq = 5\nphase = 2 * np.pi * random_state.rand(len(actuators))\namp = 0.9\n\n# Simulate, saving video frames and torso locations.\nphysics.reset()\nwhile physics.data.time < duration:\n # Inject controls and step the physics.\n physics.bind(actuators).ctrl = amp * np.sin(freq * physics.data.time + phase)\n physics.step()\n\n # Save torso horizontal positions using bind().\n pos_x.append(physics.bind(torsos).xpos[:, 0].copy())\n pos_y.append(physics.bind(torsos).xpos[:, 1].copy())\n\n # Save video frames.\n if len(video) < physics.data.time * framerate:\n pixels = physics.render()\n video.append(pixels.copy())\n\ndisplay_video(video, framerate)\n\n#@title Movement trajectories{vertical-output: true}\n\ncreature_colors = physics.bind(torsos).rgba[:, :3]\nfig, ax = plt.subplots(figsize=(4, 4))\nax.set_prop_cycle(color=creature_colors)\n_ = ax.plot(pos_x, pos_y, linewidth=4)",
"The plot above shows the corresponding movement trajectories of creature positions. Note how physics.bind(torsos) was used to access both xpos and rgba values. Once the Physics had been instantiated by from_mjcf_model(), the bind() method will expose both the associated mjData and mjModel fields of an mjcf element, providing unified access to all quantities in the simulation. \nComposer tutorial\nIn this tutorial we will create a task requiring our \"creature\" above to press a colour-changing button on the floor with a prescribed force. We begin by implementing our creature as a composer.Entity:",
"#@title The `Creature` class\n\n\nclass Creature(composer.Entity):\n \"\"\"A multi-legged creature derived from `composer.Entity`.\"\"\"\n def _build(self, num_legs):\n self._model = make_creature(num_legs)\n\n def _build_observables(self):\n return CreatureObservables(self)\n\n @property\n def mjcf_model(self):\n return self._model\n\n @property\n def actuators(self):\n return tuple(self._model.find_all('actuator'))\n\n\n# Add simple observable features for joint angles and velocities.\nclass CreatureObservables(composer.Observables):\n\n @composer.observable\n def joint_positions(self):\n all_joints = self._entity.mjcf_model.find_all('joint')\n return observable.MJCFFeature('qpos', all_joints)\n\n @composer.observable\n def joint_velocities(self):\n all_joints = self._entity.mjcf_model.find_all('joint')\n return observable.MJCFFeature('qvel', all_joints)",
"The Creature Entity includes generic Observables for joint angles and velocities. Because find_all() is called on the Creature's MJCF model, it will only return the creature's leg joints, and not the \"free\" joint with which it will be attached to the world. Note that Composer Entities should override the _build and _build_observables methods rather than __init__. The implementation of __init__ in the base class calls _build and _build_observables, in that order, to ensure that the entity's MJCF model is created before its observables. This was a design choice which allows the user to refer to an observable as an attribute (entity.observables.foo) while still making it clear which attributes are observables. The stateful Button class derives from composer.Entity and implements the initialize_episode and after_substep callbacks.",
"#@title The `Button` class\n\nNUM_SUBSTEPS = 25 # The number of physics substeps per control timestep.\n\n\nclass Button(composer.Entity):\n \"\"\"A button Entity which changes colour when pressed with certain force.\"\"\"\n def _build(self, target_force_range=(5, 10)):\n self._min_force, self._max_force = target_force_range\n self._mjcf_model = mjcf.RootElement()\n self._geom = self._mjcf_model.worldbody.add(\n 'geom', type='cylinder', size=[0.25, 0.02], rgba=[1, 0, 0, 1])\n self._site = self._mjcf_model.worldbody.add(\n 'site', type='cylinder', size=self._geom.size*1.01, rgba=[1, 0, 0, 0])\n self._sensor = self._mjcf_model.sensor.add('touch', site=self._site)\n self._num_activated_steps = 0\n\n def _build_observables(self):\n return ButtonObservables(self)\n\n @property\n def mjcf_model(self):\n return self._mjcf_model\n # Update the activation (and colour) if the desired force is applied.\n def _update_activation(self, physics):\n current_force = physics.bind(self.touch_sensor).sensordata[0]\n self._is_activated = (current_force >= self._min_force and\n current_force <= self._max_force)\n physics.bind(self._geom).rgba = (\n [0, 1, 0, 1] if self._is_activated else [1, 0, 0, 1])\n self._num_activated_steps += int(self._is_activated)\n\n def initialize_episode(self, physics, random_state):\n self._reward = 0.0\n self._num_activated_steps = 0\n self._update_activation(physics)\n\n def after_substep(self, physics, random_state):\n self._update_activation(physics)\n\n @property\n def touch_sensor(self):\n return self._sensor\n\n @property\n def num_activated_steps(self):\n return self._num_activated_steps\n\n\nclass ButtonObservables(composer.Observables):\n \"\"\"A touch sensor which averages contact force over physics substeps.\"\"\"\n @composer.observable\n def touch_force(self):\n return observable.MJCFFeature('sensordata', self._entity.touch_sensor,\n buffer_size=NUM_SUBSTEPS, aggregator='mean')",
"Note how the Button counts the number of sub-steps during which it is pressed with the desired force. It also exposes an Observable of the force being applied to the button, whose value is an average of the readings over the physics time-steps.\nWe import some variation modules and an arena factory:",
"#@title Random initialiser using `composer.variation`\n\n\nclass UniformCircle(variation.Variation):\n \"\"\"A uniformly sampled horizontal point on a circle of radius `distance`.\"\"\"\n def __init__(self, distance):\n self._distance = distance\n self._heading = distributions.Uniform(0, 2*np.pi)\n\n def __call__(self, initial_value=None, current_value=None, random_state=None):\n distance, heading = variation.evaluate(\n (self._distance, self._heading), random_state=random_state)\n return (distance*np.cos(heading), distance*np.sin(heading), 0)\n\n#@title The `PressWithSpecificForce` task\n\n\nclass PressWithSpecificForce(composer.Task):\n\n def __init__(self, creature):\n self._creature = creature\n self._arena = floors.Floor()\n self._arena.add_free_entity(self._creature)\n self._arena.mjcf_model.worldbody.add('light', pos=(0, 0, 4))\n self._button = Button()\n self._arena.attach(self._button)\n\n # Configure initial poses\n self._creature_initial_pose = (0, 0, 0.15)\n button_distance = distributions.Uniform(0.5, .75)\n self._button_initial_pose = UniformCircle(button_distance)\n\n # Configure variators\n self._mjcf_variator = variation.MJCFVariator()\n self._physics_variator = variation.PhysicsVariator()\n\n # Configure and enable observables\n pos_corrptor = noises.Additive(distributions.Normal(scale=0.01))\n self._creature.observables.joint_positions.corruptor = pos_corrptor\n self._creature.observables.joint_positions.enabled = True\n vel_corruptor = noises.Multiplicative(distributions.LogNormal(sigma=0.01))\n self._creature.observables.joint_velocities.corruptor = vel_corruptor\n self._creature.observables.joint_velocities.enabled = True\n self._button.observables.touch_force.enabled = True\n\n def to_button(physics):\n button_pos, _ = self._button.get_pose(physics)\n return self._creature.global_vector_to_local_frame(physics, button_pos)\n\n self._task_observables = {}\n self._task_observables['button_position'] = observable.Generic(to_button)\n\n for obs in self._task_observables.values():\n obs.enabled = True\n\n self.control_timestep = NUM_SUBSTEPS * self.physics_timestep\n\n @property\n def root_entity(self):\n return self._arena\n\n @property\n def task_observables(self):\n return self._task_observables\n\n def initialize_episode_mjcf(self, random_state):\n self._mjcf_variator.apply_variations(random_state)\n\n def initialize_episode(self, physics, random_state):\n self._physics_variator.apply_variations(physics, random_state)\n creature_pose, button_pose = variation.evaluate(\n (self._creature_initial_pose, self._button_initial_pose),\n random_state=random_state)\n self._creature.set_pose(physics, position=creature_pose)\n self._button.set_pose(physics, position=button_pose)\n\n def get_reward(self, physics):\n return self._button.num_activated_steps / NUM_SUBSTEPS \n\n#@title Instantiating an environment{vertical-output: true}\n\ncreature = Creature(num_legs=4)\ntask = PressWithSpecificForce(creature)\nenv = composer.Environment(task, random_state=np.random.RandomState(42))\n\nenv.reset()\nPIL.Image.fromarray(env.physics.render())",
"The Control Suite\nThe Control Suite is a set of stable, well-tested tasks designed to serve as a benchmark for continuous control learning agents. Tasks are written using the basic MuJoCo wrapper interface. Standardised action, observation and reward structures make suite-wide benchmarking simple and learning curves easy to interpret. Control Suite domains are not meant to be modified, in order to facilitate benchmarking. For full details regarding benchmarking, please refer to our original publication.\nA video of solved benchmark tasks is available here.\nThe suite come with convenient module level tuples for iterating over tasks:",
"#@title Iterating over tasks{vertical-output: true}\n\nmax_len = max(len(d) for d, _ in suite.BENCHMARKING)\nfor domain, task in suite.BENCHMARKING:\n print(f'{domain:<{max_len}} {task}')\n\n#@title Loading and simulating a `suite` task{vertical-output: true}\n\n# Load the environment\nrandom_state = np.random.RandomState(42)\nenv = suite.load('hopper', 'stand', task_kwargs={'random': random_state})\n\n# Simulate episode with random actions\nduration = 4 # Seconds\nframes = []\nticks = []\nrewards = []\nobservations = []\n\nspec = env.action_spec()\ntime_step = env.reset()\n\nwhile env.physics.data.time < duration:\n\n action = random_state.uniform(spec.minimum, spec.maximum, spec.shape)\n time_step = env.step(action)\n\n camera0 = env.physics.render(camera_id=0, height=200, width=200)\n camera1 = env.physics.render(camera_id=1, height=200, width=200)\n frames.append(np.hstack((camera0, camera1)))\n rewards.append(time_step.reward)\n observations.append(copy.deepcopy(time_step.observation))\n ticks.append(env.physics.data.time)\n\nhtml_video = display_video(frames, framerate=1./env.control_timestep())\n\n# Show video and plot reward and observations\nnum_sensors = len(time_step.observation)\n\n_, ax = plt.subplots(1 + num_sensors, 1, sharex=True, figsize=(4, 8))\nax[0].plot(ticks, rewards)\nax[0].set_ylabel('reward')\nax[-1].set_xlabel('time')\n\nfor i, key in enumerate(time_step.observation):\n data = np.asarray([observations[j][key] for j in range(len(observations))])\n ax[i+1].plot(ticks, data, label=key)\n ax[i+1].set_ylabel(key)\n\nhtml_video\n\n#@title Visualizing an initial state of one task per domain in the Control Suite\ndomains_tasks = {domain: task for domain, task in suite.ALL_TASKS}\nrandom_state = np.random.RandomState(42)\nnum_domains = len(domains_tasks)\nn_col = num_domains // int(np.sqrt(num_domains))\nn_row = num_domains // n_col + int(0 < num_domains % n_col)\n_, ax = plt.subplots(n_row, n_col, figsize=(12, 12))\nfor a in ax.flat:\n a.axis('off')\n a.grid(False)\n\nprint(f'Iterating over all {num_domains} domains in the Suite:')\nfor j, [domain, task] in enumerate(domains_tasks.items()):\n print(domain, task)\n\n env = suite.load(domain, task, task_kwargs={'random': random_state})\n timestep = env.reset()\n pixels = env.physics.render(height=200, width=200, camera_id=0)\n\n ax.flat[j].imshow(pixels)\n ax.flat[j].set_title(domain + ': ' + task)\n\nclear_output()",
"Locomotion\nHumanoid running along corridor with obstacles\nAs an illustrative example of using the Locomotion infrastructure to build an RL environment, consider placing a humanoid in a corridor with walls, and a task specifying that the humanoid will be rewarded for running along this corridor, navigating around the wall obstacles using vision. We instantiate the environment as a composition of the Walker, Arena, and Task as follows. First, we build a position-controlled CMU humanoid walker.",
"#@title A position controlled `cmu_humanoid`\n\nwalker = cmu_humanoid.CMUHumanoidPositionControlledV2020(\n observable_options={'egocentric_camera': dict(enabled=True)})",
"Next, we construct a corridor-shaped arena that is obstructed by walls.",
"#@title A corridor arena with wall obstacles\n\narena = corridor_arenas.WallsCorridor(\n wall_gap=3.,\n wall_width=distributions.Uniform(2., 3.),\n wall_height=distributions.Uniform(2.5, 3.5),\n corridor_width=4.,\n corridor_length=30.,\n)",
"The task constructor places the walker in the arena.",
"#@title A task to navigate the arena\n\ntask = corridor_tasks.RunThroughCorridor(\n walker=walker,\n arena=arena,\n walker_spawn_position=(0.5, 0, 0),\n target_velocity=3.0,\n physics_timestep=0.005,\n control_timestep=0.03,\n)",
"Finally, a task that rewards the agent for running down the corridor at a specific velocity is instantiated as a composer.Environment.",
"#@title The `RunThroughCorridor` environment\n\nenv = composer.Environment(\n task=task,\n time_limit=10,\n random_state=np.random.RandomState(42),\n strip_singleton_obs_buffer_dim=True,\n)\nenv.reset()\npixels = []\nfor camera_id in range(3):\n pixels.append(env.physics.render(camera_id=camera_id, width=240))\nPIL.Image.fromarray(np.hstack(pixels))",
"Multi-Agent Soccer\nBuilding on Composer and Locomotion libraries, the Multi-agent soccer environments, introduced in this paper, follow a consistent task structure of Walkers, Arena, and Task where instead of a single walker, we inject multiple walkers that can interact with each other physically in the same scene. The code snippet below shows how to instantiate a 2-vs-2 Multi-agent Soccer environment with the simple, 5 degree-of-freedom BoxHead walker type.",
"#@title 2-v-2 `Boxhead` soccer\n\nrandom_state = np.random.RandomState(42)\nenv = soccer.load(\n team_size=2,\n time_limit=45.,\n random_state=random_state,\n disable_walker_contacts=False,\n walker_type=soccer.WalkerType.BOXHEAD,\n)\nenv.reset()\npixels = []\n# Select a random subset of 6 cameras (soccer envs have lots of cameras)\ncameras = random_state.choice(env.physics.model.ncam, 6, replace=False)\nfor camera_id in cameras:\n pixels.append(env.physics.render(camera_id=camera_id, width=240))\nimage = np.vstack((np.hstack(pixels[:3]), np.hstack(pixels[3:])))\nPIL.Image.fromarray(image)",
"It can trivially be replaced by e.g. the WalkerType.ANT walker:",
"#@title 3-v-3 `Ant` soccer\n\nrandom_state = np.random.RandomState(42)\nenv = soccer.load(\n team_size=3,\n time_limit=45.,\n random_state=random_state,\n disable_walker_contacts=False,\n walker_type=soccer.WalkerType.ANT,\n)\nenv.reset()\n\npixels = []\ncameras = random_state.choice(env.physics.model.ncam, 6, replace=False)\nfor camera_id in cameras:\n pixels.append(env.physics.render(camera_id=camera_id, width=240))\nimage = np.vstack((np.hstack(pixels[:3]), np.hstack(pixels[3:])))\nPIL.Image.fromarray(image)",
"Manipulation\nThe manipulation module provides a robotic arm, a set of simple objects, and tools for building reward functions for manipulation tasks.",
"#@title Listing all `manipulation` tasks{vertical-output: true}\n\n# `ALL` is a tuple containing the names of all of the environments in the suite.\nprint('\\n'.join(manipulation.ALL))\n\n#@title Listing `manipulation` tasks that use vision{vertical-output: true}\nprint('\\n'.join(manipulation.get_environments_by_tag('vision')))\n\n#@title Loading and simulating a `manipulation` task{vertical-output: true}\n\nenv = manipulation.load('stack_2_of_3_bricks_random_order_vision', seed=42)\naction_spec = env.action_spec()\n\ndef sample_random_action():\n return env.random_state.uniform(\n low=action_spec.minimum,\n high=action_spec.maximum,\n ).astype(action_spec.dtype, copy=False)\n\n# Step the environment through a full episode using random actions and record\n# the camera observations.\nframes = []\ntimestep = env.reset()\nframes.append(timestep.observation['front_close'])\nwhile not timestep.last():\n timestep = env.step(sample_random_action())\n frames.append(timestep.observation['front_close'])\nall_frames = np.concatenate(frames, axis=0)\ndisplay_video(all_frames, 30)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
huongttlan/statsmodels | examples/notebooks/statespace_sarimax_internet.ipynb | bsd-3-clause | [
"SARIMAX: Model selection, missing data\nThe example mirrors Durbin and Koopman (2012), Chapter 8.4 in application of Box-Jenkins methodology to fit ARMA models. The novel feature is the ability of the model to work on datasets with missing values.",
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nfrom scipy.stats import norm\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\n\nimport requests\nfrom io import BytesIO\nfrom zipfile import ZipFile\n\n# Download the dataset\ndk = requests.get('http://www.ssfpack.com/files/DK-data.zip').content\nf = BytesIO(dk)\nzipped = ZipFile(f)\ndf = pd.read_table(\n BytesIO(zipped.read('internet.dat')),\n skiprows=1, header=None, sep='\\s+', engine='python',\n names=['internet','dinternet']\n)",
"Model Selection\nAs in Durbin and Koopman, we force a number of the values to be missing.",
"# Get the basic series\ndta_full = df.dinternet[1:].values\ndta_miss = dta_full.copy()\n\n# Remove datapoints\nmissing = np.r_[6,16,26,36,46,56,66,72,73,74,75,76,86,96]-1\ndta_miss[missing] = np.nan",
"Then we can consider model selection using the Akaike information criteria (AIC), but running the model for each variant and selecting the model with the lowest AIC value.\nThere are a couple of things to note here:\n\nWhen running such a large batch of models, particularly when the autoregressive and moving average orders become large, there is the possibility of poor maximum likelihood convergence. Below we ignore the warnings since this example is illustrative.\nWe use the option enforce_invertibility=False, which allows the moving average polynomial to be non-invertible, so that more of the models are estimable.\nSeveral of the models do not produce good results, and their AIC value is set to NaN. This is not surprising, as Durbin and Koopman note numerical problems with the high order models.",
"import warnings\n\naic_full = pd.DataFrame(np.zeros((6,6), dtype=float))\naic_miss = pd.DataFrame(np.zeros((6,6), dtype=float))\n\nwarnings.simplefilter('ignore')\n\n# Iterate over all ARMA(p,q) models with p,q in [0,6]\nfor p in range(6):\n for q in range(6):\n if p == 0 and q == 0:\n continue\n \n # Estimate the model with no missing datapoints\n mod = sm.tsa.statespace.SARIMAX(dta_full, order=(p,0,q), enforce_invertibility=False)\n try:\n res = mod.fit()\n aic_full.iloc[p,q] = res.aic\n except:\n aic_full.iloc[p,q] = np.nan\n \n # Estimate the model with missing datapoints\n mod = sm.tsa.statespace.SARIMAX(dta_miss, order=(p,0,q), enforce_invertibility=False)\n try:\n res = mod.fit()\n aic_miss.iloc[p,q] = res.aic\n except:\n aic_miss.iloc[p,q] = np.nan",
"For the models estimated over the full (non-missing) dataset, the AIC chooses ARMA(1,1) or ARMA(3,0). Durbin and Koopman suggest the ARMA(1,1) specification is better due to parsimony.\n$$\n\\text{Replication of:}\\\n\\textbf{Table 8.1} ~~ \\text{AIC for different ARMA models.}\\\n\\newcommand{\\r}[1]{{\\color{red}{#1}}}\n\\begin{array}{lrrrrrr}\n\\hline\nq & 0 & 1 & 2 & 3 & 4 & 5 \\\n\\hline\np & {} & {} & {} & {} & {} & {} \\\n0 & 0.00 & 549.81 & 519.87 & 520.27 & 519.38 & 518.86 \\\n1 & 529.24 & \\r{514.30} & 516.25 & 514.58 & 515.10 & 516.28 \\\n2 & 522.18 & 516.29 & 517.16 & 515.77 & 513.24 & 514.73 \\\n3 & \\r{511.99} & 513.94 & 515.92 & 512.06 & 513.72 & 514.50 \\\n4 & 513.93 & 512.89 & nan & nan & 514.81 & 516.08 \\\n5 & 515.86 & 517.64 & nan & nan & nan & nan \\\n\\hline\n\\end{array}\n$$\nFor the models estimated over missing dataset, the AIC chooses ARMA(1,1)\n$$\n\\text{Replication of:}\\\n\\textbf{Table 8.2} ~~ \\text{AIC for different ARMA models with missing observations.}\\\n\\begin{array}{lrrrrrr}\n\\hline\nq & 0 & 1 & 2 & 3 & 4 & 5 \\\n\\hline\np & {} & {} & {} & {} & {} & {} \\\n0 & 0.00 & 488.93 & 464.01 & 463.86 & 462.63 & 463.62 \\\n1 & 468.01 & \\r{457.54} & 459.35 & 458.66 & 459.15 & 461.01 \\\n2 & 469.68 & nan & 460.48 & 459.43 & 459.23 & 460.47 \\\n3 & 467.10 & 458.44 & 459.64 & 456.66 & 459.54 & 460.05 \\\n4 & 469.00 & 459.52 & nan & 463.04 & 459.35 & 460.96 \\\n5 & 471.32 & 461.26 & nan & nan & 461.00 & 462.97 \\\n\\hline\n\\end{array}\n$$\nNote: the AIC values are calculated differently than in Durbin and Koopman, but show overall similar trends.\nPostestimation\nUsing the ARMA(1,1) specification selected above, we perform in-sample prediction and out-of-sample forecasting.",
"# Statespace\nmod = sm.tsa.statespace.SARIMAX(dta_miss, order=(1,0,1))\nres = mod.fit()\nprint(res.summary())\n\n# In-sample one-step-ahead predictions\npredict_res = res.predict(full_results=True)\n\npredict = predict_res.forecasts\ncov = predict_res.forecasts_error_cov\npredict_idx = np.arange(len(predict[0]))\n\n# 95% confidence intervals\ncritical_value = norm.ppf(1 - 0.05 / 2.)\nstd_errors = np.sqrt(cov.diagonal().T)\nci = np.c_[\n (predict - critical_value*std_errors)[:, :, None],\n (predict + critical_value*std_errors)[:, :, None],\n][0].T\n\n# Out-of-sample forecasts and confidence intervals\nnforecast = 20\nforecast = res.forecast(nforecast)\nforcast_idx = len(dta_full) + np.arange(nforecast)\n\n# Graph\nfig, ax = plt.subplots(figsize=(12,6))\nax.xaxis.grid()\nax.plot(predict_idx, dta_miss, 'k.')\n\n# Plot\nax.plot(predict_idx, predict[0], 'gray');\nax.fill_between(predict_idx, ci[0], ci[1], alpha=0.1)\n\nax.plot(forcast_idx[-20:], forecast[0], 'k--', linestyle='--', linewidth=2)\n\nax.set(title='Figure 8.9 - Internet series');"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
UDST/activitysim | activitysim/examples/example_estimation/notebooks/07_mand_tour_freq.ipynb | bsd-3-clause | [
"Estimating Mandatory Tour Frequency\nThis notebook illustrates how to re-estimate a single model component for ActivitySim. This process \nincludes running ActivitySim in estimation mode to read household travel survey files and write out\nthe estimation data bundles used in this notebook. To review how to do so, please visit the other\nnotebooks in this directory.\nLoad libraries",
"import os\nimport larch # !conda install larch -c conda-forge # for estimation\nimport pandas as pd",
"We'll work in our test directory, where ActivitySim has saved the estimation data bundles.",
"os.chdir('test')",
"Load data and prep model for estimation",
"modelname = \"mandatory_tour_frequency\"\n\nfrom activitysim.estimation.larch import component_model\nmodel, data = component_model(modelname, return_data=True)",
"Review data loaded from the EDB\nThe next step is to read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.\nCoefficients",
"data.coefficients",
"Utility specification",
"data.spec",
"Chooser data",
"data.chooser_data",
"Estimate\nWith the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large enough household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood maximizing solution. Larch has a built-in estimation methods including BHHH, and also offers access to more advanced general purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.",
"model.estimate()",
"Estimated coefficients",
"model.parameter_summary()",
"Output Estimation Results",
"from activitysim.estimation.larch import update_coefficients\nresult_dir = data.edb_directory/\"estimated\"\nupdate_coefficients(\n model, data, result_dir,\n output_file=f\"{modelname}_coefficients_revised.csv\",\n);",
"Write the model estimation report, including coefficient t-statistic and log likelihood",
"model.to_xlsx(\n result_dir/f\"{modelname}_model_estimation.xlsx\", \n data_statistics=False,\n)",
"Next Steps\nThe final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode.",
"pd.read_csv(result_dir/f\"{modelname}_coefficients_revised.csv\")"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/managed_notebooks/pricing_optimization/pricing-optimization.ipynb | apache-2.0 | [
"Pricing Optimization\nTable of contents\n\nOverview\nDataset\nObjective\nCosts\nCreate a BigQuery dataset\nLoad the dataset from Cloud Storage\nData analysis\nPreprocess the data for training\nTrain the model using BigQuery ML\nGenerate forecasts from the model\nInterpret the results to choose the best price\nClean up\n\nOverview\n<a name=\"section-1\"></a>\nThis notebook demonstrates analysis of pricing optimization on CDM Pricing Data and automating the workflow using Vertex AI Workbench managed notebooks.\nNote: This notebook file was developed to run in a Vertex AI Workbench managed notebooks instance using the Python (Local) kernel. Some components of this notebook may not work in other notebook environments.\nDataset\n<a name=\"section-2\"></a>\nThe dataset used in this notebook is a part of the CDM Pricing dataset, which consists of product sales information on specified dates.\nObjective\n<a name=\"section-3\"></a>\nThe objective of this notebook is to build a pricing optimization model using Vertex AI. The following steps have been followed: \n\nLoad the required dataset from a Cloud Storage bucket.\nAnalyze the fields present in the dataset.\nProcess the data to build a model.\nBuild a BigQuery ML forecast model on the processed data.\nGet forecasted values from the BigQuery ML model.\nInterpret the forecasts to identify the best prices.\nClean up.\n\nCosts\n<a name=\"section-4\"></a>\nThis tutorial uses the following billable components of Google Cloud:\n\nVertex AI\nBigQuery\nCloud Storage\n\nLearn about Vertex AI\npricing, BigQuery pricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nBefore you begin\nSet your project ID\nIf you don't know your project ID, you may be able to get your project ID using gcloud.",
"import os\n\nPROJECT_ID = \"\"\n\n# Get your Google Cloud project ID from gcloud\nif not os.getenv(\"IS_TESTING\"):\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID: \", PROJECT_ID)",
"Otherwise, set your project ID here.",
"if PROJECT_ID == \"\" or PROJECT_ID is None:\n PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}",
"Import the required libraries and define constants",
"import matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\nfrom google.cloud import bigquery\nfrom google.cloud.bigquery import Client\n\nDATASET = \"[your-bigquery-dataset-id]\" # set the BigQuery dataset-id\nTRAINING_DATA_TABLE = \"[your-bigquery-table-id-to-store-the-training-data]\" # set the BigQuery table-id to store the training data",
"Create a BigQuery dataset\n<a name=\"section-5\"></a>\n@bigquery\n-- create a dataset in BigQuery\nCREATE SCHEMA pricing_optimization\nOPTIONS(\n location=\"us\"\n )\nLoad the dataset from Cloud Storage\n<a name=\"section-6\"></a>",
"DATA_LOCATION = \"gs://cloud-samples-data/ai-platform-unified/datasets/tabular/cdm_pricing_large_table.csv\"\ndf = pd.read_csv(DATA_LOCATION)\nprint(df.shape)\ndf.head()",
"You will build a forecast model on this data and thus determine the best price for a product. For this type of model, you will not be using many fields: only the sales and price related ones. For the current execrcise, focus on the following fields:\n\nProduct_ID\nCustomer_Hierarchy\nFiscal_Date\nList_Price_Converged\nInvoiced_quantity_in_Pieces\nNet_Sales\n\nData Analysis\n<a name=\"section-7\"></a>\nFirst, explore the data and distributions.\nSelect the required columns from the dataframe.",
"id_col = \"Product_ID\"\ndate_col = \"Fiscal_Date\"\ncateg_cols = [\"Customer_Hierarchy\"]\nnum_cols = [\"List_Price_Converged\", \"Invoiced_quantity_in_Pieces\", \"Net_Sales\"]\n\ndf = df[[id_col, date_col] + categ_cols + num_cols].copy()\ndf.head()",
"Check the column types and null values in the dataframe.",
"df.info()",
"This data description reveals that there are no null values in the data. Also, the field Fiscal_Date which is a date field is loaded as an object type. \nChange the type of the date field to datetime.",
"df[\"Fiscal_Date\"] = pd.to_datetime(df[\"Fiscal_Date\"], infer_datetime_format=True)",
"Plot the distributions for the categorical fields.",
"for i in categ_cols:\n df[i].value_counts(normalize=True).plot(kind=\"bar\")\n plt.title(i)\n plt.show()",
"Plot the distributions for the numerical fields.",
"for i in num_cols:\n _, ax = plt.subplots(1, 2, figsize=(10, 4))\n df[i].plot(kind=\"box\", ax=ax[0])\n df[i].plot(kind=\"hist\", ax=ax[1])\n ax[0].set_title(i + \"-Boxplot\")\n ax[1].set_title(i + \"-Histogram\")\n plt.show()",
"Check the maximum date and minimum date in Fiscal_Date column.",
"print(df[\"Fiscal_Date\"].max())\nprint(df[\"Fiscal_Date\"].min())",
"Check the product distribution across each category.",
"grp_cols = [\"Customer_Hierarchy\", \"Product_ID\"]\ngrp_df = df[grp_cols].groupby(by=grp_cols).count().reset_index()\ngrp_df.groupby(\"Customer_Hierarchy\").nunique()",
"Check the percentage changes in the orders based on the percentage changes in the price.",
"# aggregate the data\ndf_aggr = (\n df.groupby([\"Product_ID\", \"List_Price_Converged\"])\n .agg({\"Fiscal_Date\": min, \"Invoiced_quantity_in_Pieces\": sum, \"Net_Sales\": sum})\n .reset_index()\n)\n# rename the aggregated columns\ndf_aggr.rename(\n columns={\n \"Fiscal_Date\": \"First_price_date\",\n \"Invoiced_quantity_in_Pieces\": \"Total_ordered_pieces\",\n \"Net_Sales\": \"Total_net_sales\",\n },\n inplace=True,\n)\n\n# sort values chronologically\ndf_aggr.sort_values(by=[\"Product_ID\", \"First_price_date\"], inplace=True)\ndf_aggr.reset_index(drop=True, inplace=True)\n\n# add columns for previous values\ndf_aggr[\"Previous_List\"] = df_aggr.groupby([\"Product_ID\"])[\n \"List_Price_Converged\"\n].shift()\ndf_aggr[\"Previous_Total_ordered_pieces\"] = df_aggr.groupby([\"Product_ID\"])[\n \"Total_ordered_pieces\"\n].shift()\n\n# average price change across sku's\ndf_aggr[\"price_change_perc\"] = (\n (df_aggr[\"List_Price_Converged\"] - df_aggr[\"Previous_List\"])\n / df_aggr[\"Previous_List\"].fillna(0)\n * 100\n)\ndf_aggr[\"order_change_perc\"] = (\n (df_aggr[\"Total_ordered_pieces\"] - df_aggr[\"Previous_Total_ordered_pieces\"])\n / df_aggr[\"Previous_Total_ordered_pieces\"].fillna(0)\n * 100\n)\n\n# plot a scatterplot to visualize the changes\nsns.scatterplot(\n x=\"price_change_perc\",\n y=\"order_change_perc\",\n data=df_aggr,\n hue=\"Product_ID\",\n legend=False,\n)\nplt.title(\"Percentage of change in price vs order\")\nplt.show()",
"For most of the products, the percentage change in orders are high where the percentage changes in the prices are low. This suggests that too much change in the prices can affect the number of orders. \nNote: There seem to be some outliers in the data as percentage changes greater than 800 are found. In the current exercise, do not take any manual measures to deal with outliers as you will create a BigQuery ML timeseries model that already deals with outliers.\nPreprocess the data for training\n<a name=\"section-8\"></a>\nCheck which Product_ID's have the maximum orders.",
"df_orders = df.groupby([\"Product_ID\", \"Customer_Hierarchy\"], as_index=False)[\n \"Invoiced_quantity_in_Pieces\"\n].sum()\ndf_orders.loc[\n df_orders.groupby(\"Customer_Hierarchy\")[\"Invoiced_quantity_in_Pieces\"].idxmax()\n]",
"From the above result, you can infer the following:\n\nUnder the Food category, SKU 62 has the maximum orders.\nUnder the Manufacturing category, SKU 17 has the maximum orders.\nUnder the Paper category, SKU 107 has the maximum orders.\nUnder the Publishing category, SKU 8 has the maximum orders.\nUnder the Utilities category, SKU 140 has the maximum orders.\n\nGiven that there are too many ids and only a few records for most of them, consider only the above Product_IDs for which there are a maximum number of orders. \nNote: The Invoiced_quantity_in_Pieces field seems to be a float type rather than an int type as it should be. This could be because the data itself might be averaged in the first place.\nCheck the various prices available for these Product_IDs.",
"df_type_food = df[(df[\"Product_ID\"] == \"SKU 62\") & (df[\"Customer_Hierarchy\"] == \"Food\")]\nprint(\"Food :\")\nprint(df_type_food[\"List_Price_Converged\"].value_counts())\ndf_type_manuf = df[\n (df[\"Product_ID\"] == \"SKU 17\") & (df[\"Customer_Hierarchy\"] == \"Manufacturing\")\n]\nprint(\"Manufacturing :\")\nprint(df_type_manuf[\"List_Price_Converged\"].value_counts())\ndf_type_paper = df[\n (df[\"Product_ID\"] == \"SKU 107\") & (df[\"Customer_Hierarchy\"] == \"Paper\")\n]\nprint(\"Paper :\")\nprint(df_type_paper[\"List_Price_Converged\"].value_counts())\ndf_type_pub = df[\n (df[\"Product_ID\"] == \"SKU 8\") & (df[\"Customer_Hierarchy\"] == \"Publishing\")\n]\nprint(\"Publishing :\")\nprint(df_type_pub[\"List_Price_Converged\"].value_counts())\ndf_type_util = df[\n (df[\"Product_ID\"] == \"SKU 140\") & (df[\"Customer_Hierarchy\"] == \"Utilities\")\n]\nprint(\"Utilities :\")\nprint(df_type_util[\"List_Price_Converged\"].value_counts())",
"In the publishing category, Product_ID SKU 8 and SKU 17 are less than or equal to two different prices in the entire data and so you will exclude them and consider the rest for building the forecast model. The idea here is to train a forecast model on the timeseries data for products with different prices.\nJoin the data for all the Product_IDs into one dataframe and remove duplicate records.",
"df_final = pd.concat([df_type_food, df_type_paper, df_type_util])\ndf_final = (\n df_final[\n [\n \"Product_ID\",\n \"Fiscal_Date\",\n \"Customer_Hierarchy\",\n \"List_Price_Converged\",\n \"Invoiced_quantity_in_Pieces\",\n ]\n ]\n .drop_duplicates()\n .reset_index(drop=True)\n)\ndf_final.head()",
"Save the data to a BigQuery table.",
"bq_client = bigquery.Client(project=PROJECT_ID)\n\njob_config = bigquery.LoadJobConfig(\n # Specify a (partial) schema. All columns are always written to the\n # table. The schema is used to assist in data type definitions.\n schema=[\n bigquery.SchemaField(\"Product_ID\", bigquery.enums.SqlTypeNames.STRING),\n bigquery.SchemaField(\"Fiscal_Date\", bigquery.enums.SqlTypeNames.DATE),\n bigquery.SchemaField(\"List_Price_Converged\", bigquery.enums.SqlTypeNames.FLOAT),\n bigquery.SchemaField(\n \"Invoiced_quantity_in_Pieces\", bigquery.enums.SqlTypeNames.FLOAT\n ),\n ],\n # Optionally, set the write disposition. BigQuery appends loaded rows\n # to an existing table by default, but with WRITE_TRUNCATE write\n # disposition it replaces the table with the loaded data.\n write_disposition=\"WRITE_TRUNCATE\",\n)\n\n# save the dataframe to a table in the created dataset\njob = bq_client.load_table_from_dataframe(\n df_final,\n \"{}.{}.{}\".format(PROJECT_ID, DATASET, TRAINING_DATA_TABLE),\n job_config=job_config,\n) # Make an API request.\njob.result() # Wait for the job to complete.",
"Train the model using BigQuery ML\n<a name=\"section-9\"></a>\nTrain an Arima-Plus model on the data using BigQuery ML.\n@bigquery\ncreate or replace model pricing_optimization.bqml_arima\noptions\n (model_type = 'ARIMA_PLUS',\n time_series_timestamp_col = 'Fiscal_Date',\n time_series_data_col = 'Invoiced_quantity_in_Pieces',\n time_series_id_col = 'ID'\n ) as\nselect\n Fiscal_Date,\n Concat(Product_ID,\"_\" ,Cast(List_Price_Converged as string)) as ID,\n Invoiced_quantity_in_Pieces\nfrom\n pricing_optimization.TRAINING_DATA\nGenerate forecasts from the model\n<a name=\"section-10\"></a>\nPredict the sales for the next 30 days for each id and save to a dataframe.",
"client = Client()\n\nquery = '''\nDECLARE HORIZON STRING DEFAULT \"30\"; #number of values to forecast\nDECLARE CONFIDENCE_LEVEL STRING DEFAULT \"0.90\"; ## required confidence level\n\nEXECUTE IMMEDIATE format(\"\"\"\n SELECT\n *\n FROM \n ML.FORECAST(MODEL pricing_optimization.bqml_arima, \n STRUCT(%s AS horizon, \n %s AS confidence_level)\n )\n \"\"\",HORIZON,CONFIDENCE_LEVEL)'''\njob = client.query(query)\ndfforecast = job.to_dataframe()\ndfforecast.head()",
"Interpret the results to choose the best price\n<a name=\"section-11\"></a>\nCalculate average forecast values for the forecast duration.",
"dfforecast_avg = (\n dfforecast[[\"ID\", \"forecast_value\"]].groupby(\"ID\", as_index=False).mean()\n)",
"Extract the ID and Price fields from the ID field.",
"dfforecast_avg[\"Product_ID\"] = dfforecast_avg[\"ID\"].apply(lambda x: x.split(\"_\")[0])\ndfforecast_avg[\"Price\"] = dfforecast_avg[\"ID\"].apply(lambda x: x.split(\"_\")[1])",
"Plot the average forecasted sales vs. the price of the product.",
"for i in dfforecast_avg[\"Product_ID\"].unique():\n dfforecast_avg[dfforecast_avg[\"Product_ID\"] == i].set_index(\"Price\").sort_values(\n \"forecast_value\"\n ).plot(kind=\"bar\")\n plt.title(\"Price vs. Average Sales for \" + i)\n plt.show()",
"Based on the plots for price vs. the average forecasted orders, it can be said that to use the maximum orders, each of the considered Product_IDs can follow the below prices:\n\nSKU 107's price range can be from 4.44 - 4.73 units\nSKU 140's price can be 1.95 units\nSKU 62's price can be 4.23 units\n\nClean Up\n<a name=\"section-12\"></a>\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial. The following code deletes the entire dataset.",
"# Construct a BigQuery client object.\nclient = bigquery.Client()\n\n# TODO(developer): Set model_id to the ID of the model to fetch.\ndataset_id = \"{PROJECT}.{DATASET}\".format(PROJECT=PROJECT_ID, DATASET=DATASET)\n\n# Use the delete_contents parameter to delete a dataset and its contents.\n# Use the not_found_ok parameter to not receive an error if the dataset has already been deleted.\nclient.delete_dataset(\n dataset_id, delete_contents=True, not_found_ok=True\n) # Make an API request.\n\nprint(\"Deleted dataset '{}'.\".format(dataset_id))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
slundberg/shap | notebooks/image_examples/image_classification/Multi-input Gradient Explainer MNIST Example.ipynb | mit | [
"Multi-input Gradient Explainer MNIST Example\nHere we demonstrate how to use GradientExplainer when you have multiple inputs to your Keras/TensorFlow model. To keep things simple but also mildly interesting we feed two copies of MNIST into our model, where one copy goes into a conv-net layer and the other copy goes directly into a feedforward network.",
"import tensorflow as tf\nfrom tensorflow.keras import Input\nfrom tensorflow.keras.layers import Flatten, Dense, Dropout, Conv2D\n\n# load the MNIST data\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()\nx_train, x_test = x_train / 255.0, x_test / 255.0\nx_train = x_train.astype('float32')\nx_test = x_test.astype('float32')\nx_train = x_train.reshape(x_train.shape[0], 28, 28, 1)\nx_test = x_test.reshape(x_test.shape[0], 28, 28, 1)\n\n# define our model\ninput1 = Input(shape=(28,28,1))\ninput2 = Input(shape=(28,28,1))\ninput2c = Conv2D(32, kernel_size=(3, 3), activation='relu')(input2)\njoint = tf.keras.layers.concatenate([Flatten()(input1), Flatten()(input2c)])\nout = Dense(10, activation='softmax')(Dropout(0.2)(Dense(128, activation='relu')(joint)))\nmodel = tf.keras.models.Model(inputs = [input1, input2], outputs=out)\n\nmodel.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\n# fit the model\nmodel.fit([x_train, x_train], y_train, epochs=3)",
"Explain the predictions made by the model using GradientExplainer",
"import shap\n\n# since we have two inputs we pass a list of inputs to the explainer\nexplainer = shap.GradientExplainer(model, [x_train, x_train])\n\n# we explain the model's predictions on the first three samples of the test set\nshap_values = explainer.shap_values([x_test[:3], x_test[:3]])\n\n# since the model has 10 outputs we get a list of 10 explanations (one for each output)\nprint(len(shap_values))\n\n# since the model has 2 inputs we get a list of 2 explanations (one for each input) for each output\nprint(len(shap_values[0]))\n\n# here we plot the explanations for all classes for the first input (this is the feed forward input)\nshap.image_plot([shap_values[i][0] for i in range(10)], x_test[:3])\n\n# here we plot the explanations for all classes for the second input (this is the conv-net input)\nshap.image_plot([shap_values[i][1] for i in range(10)], x_test[:3])",
"Estimating the sampling error\nBy setting return_variances=True we get an estimate of how accurate our explanations are. We can see that the default number of samples (200) that were used provide fairly low variance estimates (compared to the magnitude of the shap_values above). Note that you can always use the nsamples parameter to control how many samples are used.",
"# get the variance of our estimates\nshap_values, shap_values_var = explainer.shap_values([x_test[:3], x_test[:3]], return_variances=True)\n\n# here we plot the explanations for all classes for the first input (this is the feed forward input)\nshap.image_plot([shap_values_var[i][0] for i in range(10)], x_test[:3])"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub | notebooks/cccma/cmip6/models/canesm5/atmoschem.ipynb | gpl-3.0 | [
"ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: CCCMA\nSource ID: CANESM5\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:46\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cccma', 'canesm5', 'atmoschem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Key Properties --> Timestep Framework\n4. Key Properties --> Timestep Framework --> Split Operator Order\n5. Key Properties --> Tuning Applied\n6. Grid\n7. Grid --> Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --> Surface Emissions\n11. Emissions Concentrations --> Atmospheric Emissions\n12. Emissions Concentrations --> Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --> Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmospheric chemistry model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmospheric chemistry model code.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Chemistry Scheme Scope\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables Form\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.6. Number Of Tracers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"1.7. Family Approach\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"1.8. Coupling With Chemical Reactivity\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAtmospheric chemistry transport scheme turbulence is couple with chemical reactivity?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Split Operator Advection Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for chemical species advection (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Split Operator Physical Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for physics (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Split Operator Chemistry Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for chemistry (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Split Operator Alternate Order\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\n?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.6. Integrated Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.7. Integrated Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the type of timestep scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4. Key Properties --> Timestep Framework --> Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.2. Convection\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.3. Precipitation\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.4. Emissions\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.5. Deposition\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.6. Gas Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.9. Photo Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.10. Aerosols\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the atmopsheric chemistry grid",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\n* Does the atmospheric chemistry grid match the atmosphere grid?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Canonical Horizontal Resolution\nIs Required: FALSE Type: STRING Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7.4. Number Of Vertical Levels\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7.5. Is Adaptive Grid\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview of transport implementation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Use Atmospheric Transport\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric cehmistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Transport Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview atmospheric chemistry emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Emissions Concentrations --> Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Method\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.5. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.6. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Emissions Concentrations --> Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Method\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.6. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an "other method"",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Emissions Concentrations --> Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the lower boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Prescribed Upper Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the upper boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Gas Phase Chemistry\nAtmospheric chemistry transport\n13.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview gas phase atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Number Of Bimolecular Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.4. Number Of Termolecular Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.7. Number Of Advected Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.8. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.9. Interactive Dry Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.10. Wet Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.11. Wet Oxidation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry startospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview stratospheric heterogenous atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Gas Phase Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n",
"14.3. Aerosol Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n",
"14.4. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.5. Sedimentation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sedimentation is included in the stratospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.6. Coagulation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs coagulation is included in the stratospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview tropospheric heterogenous atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Gas Phase Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Aerosol Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n",
"15.4. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.5. Interactive Dry Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.6. Coagulation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs coagulation is included in the tropospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview atmospheric photo chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16.2. Number Of Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17. Photo Chemistry --> Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nPhotolysis scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"17.2. Environmental Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kubeflow/pipelines | components/gcp/dataflow/launch_template/sample.ipynb | apache-2.0 | [
"Name\nData preparation by using a template to submit a job to Cloud Dataflow\nLabels\nGCP, Cloud Dataflow, Kubeflow, Pipeline\nSummary\nA Kubeflow Pipeline component to prepare data by using a template to submit a job to Cloud Dataflow.\nDetails\nIntended use\nUse this component when you have a pre-built Cloud Dataflow template and want to launch it as a step in a Kubeflow Pipeline.\nRuntime arguments\nArgument | Description | Optional | Data type | Accepted values | Default |\n:--- | :---------- | :----------| :----------| :---------- | :----------|\nproject_id | The ID of the Google Cloud Platform (GCP) project to which the job belongs. | No | GCPProjectID | | |\ngcs_path | The path to a Cloud Storage bucket containing the job creation template. It must be a valid Cloud Storage URL beginning with 'gs://'. | No | GCSPath | | |\nlaunch_parameters | The parameters that are required to launch the template. The schema is defined in LaunchTemplateParameters. The parameter jobName is replaced by a generated name. | Yes | Dict | A JSON object which has the same structure as LaunchTemplateParameters | None |\nlocation | The regional endpoint to which the job request is directed.| Yes | GCPRegion | | None |\nstaging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure.| Yes | GCSPath | | None |\nvalidate_only | If True, the request is validated but not executed. | Yes | Boolean | | False |\nwait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 |\nInput data schema\nThe input gcs_path must contain a valid Cloud Dataflow template. The template can be created by following the instructions in Creating Templates. You can also use Google-provided templates.\nOutput\nName | Description\n:--- | :----------\njob_id | The id of the Cloud Dataflow job that is created.\nCaution & requirements\nTo use the component, the following requirements must be met:\n- Cloud Dataflow API is enabled.\n- The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.\n- The Kubeflow user service account is a member of:\n - roles/dataflow.developer role of the project.\n - roles/storage.objectViewer role of the Cloud Storage Object gcs_path.\n - roles/storage.objectCreator role of the Cloud Storage Object staging_dir. \nDetailed description\nYou can execute the template locally by following the instructions in Executing Templates. See the sample code below to learn how to execute the template.\nFollow these steps to use the component in a pipeline:\n1. Install the Kubeflow Pipeline SDK:",
"%%capture --no-stderr\n\n!pip3 install kfp --upgrade",
"Load the component using KFP SDK",
"import kfp.components as comp\n\ndataflow_template_op = comp.load_component_from_url(\n 'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataflow/launch_template/component.yaml')\nhelp(dataflow_template_op)",
"Sample\nNote: The following sample code works in an IPython notebook or directly in Python code.\nIn this sample, we run a Google-provided word count template from gs://dataflow-templates/latest/Word_Count. The template takes a text file as input and outputs word counts to a Cloud Storage bucket. Here is the sample input:",
"!gsutil cat gs://dataflow-samples/shakespeare/kinglear.txt",
"Set sample parameters",
"# Required Parameters\nPROJECT_ID = '<Please put your project ID here>'\nGCS_WORKING_DIR = 'gs://<Please put your GCS path here>' # No ending slash\n\n# Optional Parameters\nEXPERIMENT_NAME = 'Dataflow - Launch Template'\nOUTPUT_PATH = '{}/out/wc'.format(GCS_WORKING_DIR)",
"Example pipeline that uses the component",
"import kfp.dsl as dsl\nimport json\n@dsl.pipeline(\n name='Dataflow launch template pipeline',\n description='Dataflow launch template pipeline'\n)\ndef pipeline(\n project_id = PROJECT_ID, \n gcs_path = 'gs://dataflow-templates/latest/Word_Count', \n launch_parameters = json.dumps({\n 'parameters': {\n 'inputFile': 'gs://dataflow-samples/shakespeare/kinglear.txt',\n 'output': OUTPUT_PATH\n }\n }), \n location = '',\n validate_only = 'False', \n staging_dir = GCS_WORKING_DIR,\n wait_interval = 30):\n dataflow_template_op(\n project_id = project_id, \n gcs_path = gcs_path, \n launch_parameters = launch_parameters, \n location = location, \n validate_only = validate_only,\n staging_dir = staging_dir,\n wait_interval = wait_interval)",
"Compile the pipeline",
"pipeline_func = pipeline\npipeline_filename = pipeline_func.__name__ + '.zip'\nimport kfp.compiler as compiler\ncompiler.Compiler().compile(pipeline_func, pipeline_filename)",
"Submit the pipeline for execution",
"#Specify pipeline argument values\narguments = {}\n\n#Get or create an experiment and submit a pipeline run\nimport kfp\nclient = kfp.Client()\nexperiment = client.create_experiment(EXPERIMENT_NAME)\n\n#Submit a pipeline run\nrun_name = pipeline_func.__name__ + ' run'\nrun_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)",
"Inspect the output",
"!gsutil cat $OUTPUT_PATH*",
"References\n\nComponent python code\nComponent docker file\nSample notebook\nCloud Dataflow Templates overview\n\nLicense\nBy deploying or using this software you agree to comply with the AI Hub Terms of Service and the Google APIs Terms of Service. To the extent of a direct conflict of terms, the AI Hub Terms of Service will control."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
eric-svds/flask-with-docker | app/.ipynb_checkpoints/my_notebook-checkpoint.ipynb | gpl-2.0 | [
"Sample PCA analysis with Iris dataset\nThe following are required for this notebook:\n- pip install matplotlib\n- pip install scikit-learn\nThis notebook plots (and pickles) the Iris data set before and after Principal Component Analysis. Output is intended to be imported by a Flask application and passed to an HTML template. D3.js can be used to create scatterplots similar to the two shown below (only nicer, hopefully).\nThe following cell imports sckikit-learn and the data set. A PCA is performed with 4 principal components.",
"import matplotlib.pyplot as plt\nfrom sklearn import datasets\nfrom sklearn.decomposition import PCA\nimport numpy as np\n%matplotlib inline\n\n# Import the infamous Iris Dataset\niris = datasets.load_iris()\n\n# Keep only the first two features (Sepal length, width)\nX = iris.data\nY = iris.target\n\n# Perform PCA on 4D data, keeping 2 principal components\nX_PCA = PCA(n_components=4).fit_transform(iris.data)",
"Plot Original Data Set\nPlot the Sepal width vs. Sepal length on the original data.",
"# Plot the first two features BEFORE doing the PCA\nplt.figure(2, figsize=(8, 6))\n\nplt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)\nplt.xlabel('Sepal length (cm)')\nplt.ylabel('Sepal width (cm)')\n\nplt.show()",
"Plot Data After PCA\nAfter performing a PCA, the first two components are plotted. Note that the two components plotted are linear combinations of the original 4 features of the data set.",
"# Plot the first two principal components AFTER the PCA\nplt.figure(2, figsize=(8, 6))\n\nplt.scatter(X_PCA[:, 0], X_PCA[:, 1], c=Y, cmap=plt.cm.Paired)\nplt.xlabel('Component 1')\nplt.ylabel('Component 2')\n\nplt.show()",
"Save Output\nThe Flask application will make use of the following D3 Scatterplot example. Data has to be in a particular format (see link for example), this cell flips the data sets into that format and pickles the output.",
"# Pickle pre- and post-PCA data\nimport pickle\n\nfeatures = []\nfor full_label in iris.feature_names:\n name = full_label[:-5].split() # remove trailing ' (cm)'\n features.append(name[0]+name[1].capitalize())\nfeatures.append(\"species\")\n\n# Create full set for Iris data\ndata1 = []\ndata_PCA = []\nfor i, vals in enumerate(X):\n row1 = dict()\n row_PCA = dict()\n for k, val in enumerate(np.append(X[i], iris.target_names[Y[i]])):\n row1[features[k]] = val\n data1.append(row1)\n for k, val in enumerate(np.append(X_PCA[i], iris.target_names[Y[i]])):\n row_PCA[features[k]] = val\n data_PCA.append(row_PCA)\n\npickle.dump(data1, open(\"pkl/data1.pkl\", \"wb\"))\npickle.dump(data_PCA, open(\"pkl/data_PCA.pkl\", \"wb\"))\n\nttt = data1[0].values()[3]\nprint ttt\ntype(ttt)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sbenthall/bigbang | examples/obsolete_notebooks/SummerSchoolCompareWordRankings.ipynb | agpl-3.0 | [
"from bigbang.archive import Archive\nfrom bigbang.archive import load as load_archive\nimport os\nimport pandas as pd\nimport numpy as np\n\n\nietf_path = \"../archives/\"\nncuc_path = \"../archives/http:/lists.ncuc.org/pipermail\"\n\npaths = [os.path.join(ietf_path,\"6lo.csv\"),\n os.path.join(ietf_path,\"5gangip.csv\")]\n\narchives = [load_archive(path) for path in paths]\n\nfrom sklearn.feature_extraction.text import CountVectorizer\n\n#tp = u'(?u)\\x08[^\\\\W\\\\d_][^\\\\W\\\\d_]+\\x08'\n\ntp = u'(?u)\\\\b[^\\\\W\\\\d\\_]\\\\w+\\\\b'\n\n\n\ndef ordered_words(data,authors=None):\n \n if authors is not None:\n ## Filter to only those emails that include given authors\n \n ## a series of email IDs, valued True iff\n ## one of the author names is in the From field\n selected = data['From'].apply(lambda x: \n any([(author in x)\n for author\n in authors]))\n \n # a series of Booleans can be used to select\n # only certain rows from a DataFrame\n data = data[selected]\n \n cv = CountVectorizer(max_df=.16,min_df=5,token_pattern=tp)\n \n c_dtm = cv.fit_transform(data['Body'].dropna())\n \n feature_names = cv.get_feature_names()\n feature_counts = np.array(c_dtm.sum(axis=0))[0]\n \n feature_order = np.argsort(feature_counts)[::-1]\n \n sorted_features = [feature_names[i] for i in feature_order]\n \n rankings = pd.Series({pair[1] : pair[0] \n for pair \n in enumerate(sorted_features)})\n\n counts = pd.Series({feature_names[i] : feature_counts[i] \n for i \n in feature_order})\n \n ## Returns a pair (a tuple of length 2)\n return rankings,counts",
"The line below creates a list of three pairs, each pair containing two pandas.Series objects.\nA Series is like a dictionary, only its items are ordered and its values must share a data type. The order keys of the series are its index. It is easy to compose Series objects into a DataFrame.",
"series = [ordered_words(archive.data) for archive in archives]",
"This creates a DataFrame from each of the series.\nThe columns alternate between representing word rankings and representing word counts.",
"rankings = pd.concat([series[0][0],\n series[0][1],\n series[1][0],\n series[1][1],\n series[2][0],\n series[2][1]],axis=1)\n\n# display the first 5 rows of the DataFrame\nrankings[:5]",
"We should rename the columns to be more descriptive of the data.",
"rankings.rename(columns={0: 'ipc-gnso rankings',\n 1: 'ipc-gnso counts',\n 2: 'wp4 rankings',\n 3: 'wp4 counts',\n 4: 'ncuc-discuss rankings',\n 5: 'ncuc-discuss counts'},inplace=True)\n\nrankings[:5]",
"Use the to_csv() function on the DataFrame object to export the data to CSV format, which you can open easily in Excel.",
"rankings.to_csv(\"rankings_all.csv\",encoding=\"utf-8\")",
"To filter the data by certain authors before computing the word rankings, provide a list of author names as an argument.\nOnly emails whose From header includes on of the author names within it will be included in the calculation.\nNote that for detecting the author name, the program for now uses simple string inclusion. You may need to try multiple variations of the authors' names in order to catch all emails written by persons of interest.",
"authors = [\"Greg Shatan\",\n \"Niels ten Oever\"]\n\nordered_words(archives[0].data, authors=authors)"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
antoniomezzacapo/qiskit-tutorial | community/aqua/chemistry/LiH_with_qubit_tapering_and_uccsd.ipynb | apache-2.0 | [
"# import common packages\nfrom collections import OrderedDict\nimport itertools\nimport logging\n\nimport numpy as np\nimport scipy\n\nfrom qiskit_aqua import (get_algorithm_instance, get_optimizer_instance, \n get_variational_form_instance, get_initial_state_instance, Operator)\nfrom qiskit_aqua._logging import build_logging_config, set_logging_config\nfrom qiskit_aqua_chemistry.drivers import ConfigurationManager\nfrom qiskit_aqua_chemistry.core import get_chemistry_operator_instance\n\n# set_logging_config(build_logging_config(logging.INFO))\n\n# using driver to get fermionic Hamiltonian\ncfg_mgr = ConfigurationManager()\npyscf_cfg = OrderedDict([('atom', 'Li .0 .0 .0; H .0 .0 1.6'), \n ('unit', 'Angstrom'), ('charge', 0), \n ('spin', 0), ('basis', 'sto3g')])\nsection = {}\nsection['properties'] = pyscf_cfg\ndriver = cfg_mgr.get_driver_instance('PYSCF')\nmolecule = driver.run(section)\n\ncore = get_chemistry_operator_instance('hamiltonian')\nhamiltonian_cfg = OrderedDict([\n ('name', 'hamiltonian'),\n ('transformation', 'full'),\n ('qubit_mapping', 'parity'),\n ('two_qubit_reduction', True),\n ('freeze_core', True),\n ('orbital_reduction', [])\n])\ncore.init_params(hamiltonian_cfg)\nalgo_input = core.run(molecule)\nqubit_op = algo_input.qubit_op\n\nprint(\"Originally requires {} qubits\".format(qubit_op.num_qubits))\nprint(qubit_op)",
"Find the symmetries of the qubit operator",
"[symmetries, sq_paulis, cliffords, sq_list] = qubit_op.find_Z2_symmetries()\nprint('Z2 symmetries found:')\nfor symm in symmetries:\n print(symm.to_label())\nprint('single qubit operators found:')\nfor sq in sq_paulis:\n print(sq.to_label())\nprint('cliffords found:')\nfor clifford in cliffords:\n print(clifford.print_operators())\nprint('single-qubit list: {}'.format(sq_list))",
"Use the found symmetries, single qubit operators, and cliffords to taper qubits from the original qubit operator. For each Z2 symmetry one can taper one qubit. However, different tapered operators can be built, corresponding to different symmetry sectors.",
"tapered_ops = []\nfor coeff in itertools.product([1, -1], repeat=len(sq_list)):\n tapered_op = Operator.qubit_tapering(qubit_op, cliffords, sq_list, list(coeff))\n tapered_ops.append((list(coeff), tapered_op))\n print(\"Number of qubits of tapered qubit operator: {}\".format(tapered_op.num_qubits))",
"The user has to specify the symmetry sector he is interested in. Since we are interested in finding the ground state here, let us get the original ground state energy as a reference.",
"ee = get_algorithm_instance('ExactEigensolver')\nee.init_args(qubit_op, k=1)\nresult = core.process_algorithm_result(ee.run())\nfor line in result[0]:\n print(line)",
"Now, let us iterate through all tapered qubit operators to find out the one whose ground state energy matches the original (un-tapered) one.",
"smallest_eig_value = 99999999999999\nsmallest_idx = -1\nfor idx in range(len(tapered_ops)):\n ee.init_args(tapered_ops[idx][1], k=1)\n curr_value = ee.run()['energy']\n if curr_value < smallest_eig_value:\n smallest_eig_value = curr_value\n smallest_idx = idx\n print(\"Lowest eigenvalue of the {}-th tapered operator (computed part) is {:.12f}\".format(idx, curr_value))\n \nthe_tapered_op = tapered_ops[smallest_idx][1]\nthe_coeff = tapered_ops[smallest_idx][0]\nprint(\"The {}-th tapered operator matches original ground state energy, with corresponding symmetry sector of {}\".format(smallest_idx, the_coeff))",
"Alternatively, one can run multiple VQE instances to find the lowest eigenvalue sector. \nHere we just validate that the_tapered_op reach the smallest eigenvalue in one VQE execution with the UCCSD variational form, modified to take into account of the tapered symmetries.",
"# setup initial state\ninit_state = get_initial_state_instance('HartreeFock')\ninit_state.init_args(num_qubits=the_tapered_op.num_qubits, num_orbitals=core._molecule_info['num_orbitals'],\n qubit_mapping=core._qubit_mapping, two_qubit_reduction=core._two_qubit_reduction,\n num_particles=core._molecule_info['num_particles'], sq_list=sq_list)\n\n# setup variationl form\nvar_form = get_variational_form_instance('UCCSD')\nvar_form.init_args(num_qubits=the_tapered_op.num_qubits, depth=1,\n num_orbitals=core._molecule_info['num_orbitals'], \n num_particles=core._molecule_info['num_particles'],\n active_occupied=None, active_unoccupied=None, initial_state=init_state,\n qubit_mapping=core._qubit_mapping, two_qubit_reduction=core._two_qubit_reduction, \n num_time_slices=1,\n cliffords=cliffords, sq_list=sq_list, tapering_values=the_coeff, symmetries=symmetries)\n\n# setup optimizer\noptimizer = get_optimizer_instance('COBYLA')\noptimizer.init_args()\noptimizer.set_options(maxiter=1000)\n\n# set vqe\nalgo = get_algorithm_instance('VQE')\nalgo.setup_quantum_backend(backend='statevector_simulator')\nalgo.init_args(the_tapered_op, 'matrix', var_form, optimizer)\n\nalgo_result = algo.run()\n\nresult = core.process_algorithm_result(algo_result)\nfor line in result[0]:\n print(line)\n\nprint(\"The parameters for UCCSD are:\\n{}\".format(algo_result['opt_params']))"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ondrolexa/sg2 | 14_Simultaneous_deformation.ipynb | mit | [
"Simultaneous deformation",
"%pylab inline\n\nfrom sg2lib import *",
"Naive concept of simultaneous deformation\nHere we try to split simple shear and pure shear to several incremental steps and mutually superposed those increments to simulate simultaneous deformation. We will use following deformation gradients for total simple shear and pure shear:",
"gamma = 1\nSx = 2\nFs = array([[1, gamma], [0, 1]])\nFp = array([[Sx, 0], [0, 1/Sx]])",
"To divide simple shear deformation with $\\gamma$=1 to n incremental steps",
"n = 10\nFsi = array([[1, gamma/n], [0, 1]])\nprint('Incremental deformation gradient:')\nprint(Fsi)",
"To check that supperposition of those increments give as total deformation, we can use allclose numpy function",
"array_equal(matrix_power(Fsi, n), Fs)\n\nFpi = array([[Sx**(1/n), 0], [0, Sx**(-1/n)]])\nprint('Incremental deformation gradient:')\nprint(Fpi)\n\nallclose(matrix_power(Fpi, n), Fp)",
"Knowing that deformation superposition is not cimmutative, we can check that axial ratio of finite strain resulting from simple shear superposed on pure shear and vice-versa is really different:",
"u,s,v = svd(Fs @ Fp)\nprint('Axial ratio of finite strain resulting from simple shear superposed on pure shear: {}'.format(s[0]/s[1]))\nu,s,v = svd(Fp @ Fs)\nprint('Axial ratio of finite strain resulting from pure shear superposed on simple shear: {}'.format(s[0]/s[1]))",
"Lets try to split those deformation to two increments and mutually mix them:",
"Fsi = array([[1, gamma/2], [0, 1]])\nFpi = array([[Sx**(1/2), 0], [0, Sx**(-1/2)]])\nu,s,v = svd(Fsi @ Fpi @ Fsi @ Fpi)\nprint('Axial ratio of finite strain of superposed increments starting with pure shear: {}'.format(s[0]/s[1]))\nu,s,v = svd(Fpi @ Fsi @ Fpi @ Fsi)\nprint('Axial ratio of finite strain of superposed increments starting with simple shear: {}'.format(s[0]/s[1]))",
"It is now close to each other, but still quite different. So let's split it to much more increments....",
"n = 100\nFsi = array([[1, gamma/n], [0, 1]])\nFpi = array([[Sx**(1/n), 0], [0, Sx**(-1/n)]])\nu,s,v = svd(matrix_power(Fsi @ Fpi, n))\nprint('Axial ratio of finite strain of superposed increments starting with pure shear: {}'.format(s[0]/s[1]))\nu,s,v = svd(matrix_power(Fpi @ Fsi, n))\nprint('Axial ratio of finite strain of superposed increments starting with simple shear: {}'.format(s[0]/s[1]))",
"Now it is very close. Let's visualize how finite strain converge with increasing number of increments:",
"arp = []\nars = []\nninc = range(1, 201)\nfor n in ninc:\n Fsi = array([[1, gamma/n], [0, 1]])\n Fpi = array([[Sx**(1/n), 0], [0, Sx**(-1/n)]])\n u,s,v = svd(matrix_power(Fsi @ Fpi, n))\n arp.append(s[0]/s[1])\n u,s,v = svd(matrix_power(Fpi @ Fsi, n))\n ars.append(s[0]/s[1])\nfigure(figsize=(16, 4))\nsemilogy(ninc, arp, 'r', label='Pure shear first')\nsemilogy(ninc, ars, 'g', label='Simple shear first')\nlegend()\nxlim(1, 200)\nxlabel('Number of increments')\nylabel('Finite strain axial ratio');",
"Using spatial velocity gradient\nWe need to import matrix exponential and matrix logarithm functions from scipy.linalg",
"from scipy.linalg import expm, logm",
"Spatial velocity gradient could be obtained as matrix logarithm of deformation gradient",
"Lp = logm(Fp)\nLs = logm(Fs)",
"Total spatial velocity gradient of simulatanous deformation could be calculated by summation of individual ones",
"L = Lp + Ls",
"Resulting deformation gradient could be calculated as matrix exponential of total spatial velocity gradient",
"F = expm(L)\nu,s,v = svd(F)\nsar = s[0]/s[1]\nprint('Axial| ratio of finite strain of simultaneous pure shear and simple shear: {}'.format(sar))",
"Lets overlay it on previous diagram",
"arp = []\nars = []\nninc = range(1, 201)\nfor n in ninc:\n Fsi = array([[1, gamma/n], [0, 1]])\n Fpi = array([[Sx**(1/n), 0], [0, Sx**(-1/n)]])\n u,s,v = svd(matrix_power(Fsi @ Fpi, n))\n arp.append(s[0]/s[1])\n u,s,v = svd(matrix_power(Fpi @ Fsi, n))\n ars.append(s[0]/s[1])\nfigure(figsize=(16, 4))\nsemilogy(ninc, arp, 'r', label='Pure shear first')\nsemilogy(ninc, ars, 'g', label='Simple shear first')\nlegend()\nxlim(1, 200)\naxhline(sar)\nxlabel('Number of increments')\nylabel('Finite strain axial ratio');",
"Decomposition of spatial velocity gradient\nHere we will decompose spatial velocity gradient of simple shear to rate of deformation tensor and spin tensor.",
"L = logm(Fs)\n\nD = (L + L.T)/2\nW = (L - L.T)/2",
"Check that decomposition give total spatial velocity gradient",
"allclose(D + W, L)",
"Visualize spatial velocity gradients for rate of deformation tensor",
"vel_field(D)",
"Visualize spatial velocity gradients for spin tensor",
"vel_field(W)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
QuantStack/quantstack-talks | 2019-05-22-pydata-frankfurt/notebooks/bqplot.ipynb | bsd-3-clause | [
"bqplot https://github.com/bloomberg/bqplot\nA Jupyter - d3.js bridge\nbqplot is a jupyter interactive widget library bringing d3.js visualization to the Jupyter notebook.\n\nApache Licensed\n\nbqplot implements the abstractions of Wilkinson’s “The Grammar of Graphics” as interactive Jupyter widgets.\nbqplot provides both\n- high-level plotting procedures with relevant defaults for common chart types,\n- lower-level descriptions of data visualizations meant for complex interactive visualization dashboards and applications involving mouse interactions and user-provided Python callbacks.\nInstallation:\nbash\nconda install -c conda-forge bqplot",
"from __future__ import print_function\nfrom IPython.display import display\nfrom ipywidgets import *\nfrom traitlets import *\n\nimport numpy as np\nimport pandas as pd\nimport bqplot as bq\nimport datetime as dt\n\nnp.random.seed(0)\nsize = 100\ny_data = np.cumsum(np.random.randn(size) * 100.0)\ny_data_2 = np.cumsum(np.random.randn(size))\ny_data_3 = np.cumsum(np.random.randn(size) * 100.)\n\nx = np.linspace(0.0, 10.0, size)\n\nprice_data = pd.DataFrame(np.cumsum(np.random.randn(150, 2).dot([[0.5, 0.8], [0.8, 1.0]]), axis=0) + 100,\n columns=['Security 1', 'Security 2'],\n index=pd.date_range(start='01-01-2007', periods=150))\n\nsymbol = 'Security 1'\ndates_all = price_data.index.values\nfinal_prices = price_data[symbol].values.flatten()",
"A simple plot with the pyplot API",
"from bqplot import pyplot as plt\n\nplt.figure(1)\nn = 100\nplt.plot(np.linspace(0.0, 10.0, n), np.cumsum(np.random.randn(n)), \n axes_options={'y': {'grid_lines': 'dashed'}})\nplt.show()",
"Scatter Plot",
"plt.figure(title='Scatter Plot with colors')\nplt.scatter(y_data_2, y_data_3, color=y_data)\nplt.show()",
"Histogram",
"plt.figure()\nplt.hist(y_data, colors=['OrangeRed'])\nplt.show()",
"Every component of the figure is an independent widget",
"xs = bq.LinearScale()\nys = bq.LinearScale()\nx = np.arange(100)\ny = np.cumsum(np.random.randn(2, 100), axis=1) #two random walks\n\nline = bq.Lines(x=x, y=y, scales={'x': xs, 'y': ys}, colors=['red', 'green'])\nxax = bq.Axis(scale=xs, label='x', grid_lines='solid')\nyax = bq.Axis(scale=ys, orientation='vertical', tick_format='0.2f', label='y', grid_lines='solid')\n\nfig = bq.Figure(marks=[line], axes=[xax, yax], animation_duration=1000)\ndisplay(fig)\n\n# update data of the line mark\nline.y = np.cumsum(np.random.randn(2, 100), axis=1)\n\nxs = bq.LinearScale()\nys = bq.LinearScale()\nx, y = np.random.rand(2, 20)\nscatt = bq.Scatter(x=x, y=y, scales={'x': xs, 'y': ys}, default_colors=['blue'])\nxax = bq.Axis(scale=xs, label='x', grid_lines='solid')\nyax = bq.Axis(scale=ys, orientation='vertical', tick_format='0.2f', label='y', grid_lines='solid')\n\nfig = bq.Figure(marks=[scatt], axes=[xax, yax], animation_duration=1000)\ndisplay(fig)\n\n#data updates\nscatt.x = np.random.rand(20) * 10\nscatt.y = np.random.rand(20)",
"The same holds for the attributes of scales, axes",
"xs.min = 4\n\nxs.min = None\n\nxax.label = 'Some label for the x axis'",
"Use bqplot figures as input widgets",
"xs = bq.LinearScale()\nys = bq.LinearScale()\nx = np.arange(100)\ny = np.cumsum(np.random.randn(2, 100), axis=1) #two random walks\n\nline = bq.Lines(x=x, y=y, scales={'x': xs, 'y': ys}, colors=['red', 'green'])\nxax = bq.Axis(scale=xs, label='x', grid_lines='solid')\nyax = bq.Axis(scale=ys, orientation='vertical', tick_format='0.2f', label='y', grid_lines='solid')",
"Selections",
"def interval_change_callback(change):\n db.value = str(change['new'])\n\nintsel = bq.interacts.FastIntervalSelector(scale=xs, marks=[line])\nintsel.observe(interval_change_callback, names=['selected'] )\n\ndb = widgets.Label()\ndb.value = str(intsel.selected)\ndisplay(db)\n\nfig = bq.Figure(marks=[line], axes=[xax, yax], animation_duration=1000, interaction=intsel)\ndisplay(fig)\n\nline.selected",
"Handdraw",
"handdraw = bq.interacts.HandDraw(lines=line)\nfig.interaction = handdraw\n\nline.y[0]",
"Moving points around",
"from bqplot import *\n\nsize = 100\nnp.random.seed(0)\nx_data = range(size)\ny_data = np.cumsum(np.random.randn(size) * 100.0)\n\n## Enabling moving of points in scatter. Try to click and drag any of the points in the scatter and \n## notice the line representing the mean of the data update\n\nsc_x = LinearScale()\nsc_y = LinearScale()\n\nscat = Scatter(x=x_data[:10], y=y_data[:10], scales={'x': sc_x, 'y': sc_y}, default_colors=['blue'],\n enable_move=True)\nlin = Lines(scales={'x': sc_x, 'y': sc_y}, stroke_width=4, line_style='dashed', colors=['orange'])\nm = Label(value='Mean is %s'%np.mean(scat.y))\n\ndef update_line(change):\n with lin.hold_sync():\n lin.x = [np.min(scat.x), np.max(scat.x)]\n lin.y = [np.mean(scat.y), np.mean(scat.y)]\n m.value='Mean is %s'%np.mean(scat.y)\n \n\nupdate_line(None)\n\n# update line on change of x or y of scatter\nscat.observe(update_line, names='x')\nscat.observe(update_line, names='y')\n\nax_x = Axis(scale=sc_x)\nax_y = Axis(scale=sc_y, tick_format='0.2f', orientation='vertical')\n\nfig = Figure(marks=[scat, lin], axes=[ax_x, ax_y])\n\n## In this case on drag, the line updates as you move the points.\nwith scat.hold_sync():\n scat.enable_move = True\n scat.update_on_move = True\n scat.enable_add = False\n\ndisplay(m, fig)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kubernetes-client/python | examples/notebooks/intro_notebook.ipynb | apache-2.0 | [
"Managing kubernetes objects using common resource operations with the python client\nSome of these operations include;\n\n\ncreate_xxxx : create a resource object. Ex create_namespaced_pod and create_namespaced_deployment, for creation of pods and deployments respectively. This performs operations similar to kubectl create.\n\n\nread_xxxx : read the specified resource object. Ex read_namespaced_pod and read_namespaced_deployment, to read pods and deployments respectively. This performs operations similar to kubectl describe.\n\n\nlist_xxxx : retrieve all resource objects of a specific type. Ex list_namespaced_pod and list_namespaced_deployment, to list pods and deployments respectively. This performs operations similar to kubectl get.\n\n\npatch_xxxx : apply a change to a specific field. Ex patch_namespaced_pod and patch_namespaced_deployment, to update pods and deployments respectively. This performs operations similar to kubectl patch, kubectl label, kubectl annotate etc.\n\n\nreplace_xxxx : replacing a resource object will update the resource by replacing the existing spec with the provided one. Ex replace_namespaced_pod and replace_namespaced_deployment, to update pods and deployments respectively, by creating new replacements of the entire object. This performs operations similar to kubectl rolling-update, kubectl apply and kubectl replace.\n\n\ndelete_xxxx : delete a resource. This performs operations similar to kubectl delete.\n\n\nFor Further information see the Documentation for API Endpoints section in https://github.com/kubernetes-client/python/blob/master/kubernetes/README.md",
"from kubernetes import client, config",
"Load config from default location.",
"config.load_kube_config()",
"Create API endpoint instance as well as API resource instances (body and specification).",
"api_instance = client.AppsV1Api()\ndep = client.V1Deployment()\nspec = client.V1DeploymentSpec()",
"Fill required object fields (apiVersion, kind, metadata and spec).",
"name = \"my-busybox\"\ndep.metadata = client.V1ObjectMeta(name=name)\n\nspec.template = client.V1PodTemplateSpec()\nspec.template.metadata = client.V1ObjectMeta(name=\"busybox\")\nspec.template.metadata.labels = {\"app\":\"busybox\"}\nspec.template.spec = client.V1PodSpec()\ndep.spec = spec\n\ncontainer = client.V1Container()\ncontainer.image = \"busybox:1.26.1\"\ncontainer.args = [\"sleep\", \"3600\"]\ncontainer.name = name\nspec.template.spec.containers = [container]",
"Create Deployment using create_xxxx command for Deployments.",
"api_instance.create_namespaced_deployment(namespace=\"default\",body=dep)",
"Use list_xxxx command for Deployment, to list Deployments.",
"deps = api_instance.list_namespaced_deployment(namespace=\"default\")\nfor item in deps.items:\n print(\"%s %s\" % (item.metadata.namespace, item.metadata.name))",
"Use read_xxxx command for Deployment, to display the detailed state of the created Deployment resource.",
"api_instance.read_namespaced_deployment(namespace=\"default\",name=name)",
"Use patch_xxxx command for Deployment, to make specific update to the Deployment.",
"dep.metadata.labels = {\"key\": \"value\"}\napi_instance.patch_namespaced_deployment(name=name, namespace=\"default\", body=dep)",
"Use replace_xxxx command for Deployment, to update Deployment with a completely new version of the object.",
"dep.spec.template.spec.containers[0].image = \"busybox:1.26.2\"\napi_instance.replace_namespaced_deployment(name=name, namespace=\"default\", body=dep)",
"Use delete_xxxx command for Deployment, to delete created Deployment.",
"api_instance.delete_namespaced_deployment(name=name, namespace=\"default\", body=client.V1DeleteOptions(propagation_policy=\"Foreground\", grace_period_seconds=5))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tpin3694/tpin3694.github.io | statistics/t-tests.ipynb | mit | [
"Title: T-Tests\nSlug: t-tests\nSummary: T-tests in Python. \nDate: 2016-02-08 12:00\nCategory: Statistics\nTags: Basics\nAuthors: Chris Albon \nPreliminaries",
"from scipy import stats\nimport numpy as np",
"Create Data",
"# Create a list of 20 observations drawn from a random distribution \n# with mean 1 and a standard deviation of 1.5\nx = np.random.normal(1, 1.5, 20)\n\n# Create a list of 20 observations drawn from a random distribution \n# with mean 0 and a standard deviation of 1.5\ny = np.random.normal(0, 1.5, 20)",
"One Sample Two-Sided T-Test\nImagine the one sample T-test and drawing a (normally shaped) hill centered at 1 and \"spread\" out with a standard deviation of 1.5, then placing a flag at 0 and looking at where on the hill the flag is location. Is it near the top? Far away from the hill? If the flag is near the very bottom of the hill or farther, then the t-test p-score will be below 0.05.",
"# Run a t-test to test if the mean of x is statistically significantly different than 0\npvalue = stats.ttest_1samp(x, 0)[1]\n\n# View the p-value\npvalue",
"Two Variable Unpaired Two-Sided T-Test With Equal Variances\nImagine the one sample T-test and drawing two (normally shaped) hills centered at their means and their 'flattness' (individual spread) based on the standard deviation. The T-test looks at how much the two hills are overlapping. Are they basically on top of each other? Do just the bottoms of the hill just barely touch? If the tails of the hills are just barely overlapping or are not overlapping at all, the t-test p-score will be below 0.05.",
"stats.ttest_ind(x, y)[1]",
"Two Variable Unpaired Two-Sided T-Test With Unequal Variances",
"stats.ttest_ind(x, y, equal_var=False)[1]",
"Two Variable Paired Two-Sided T-Test\nPaired T-tests are used when we are taking repeated samples and want to take into account the fact that the two distributions we are testing are paired.",
"stats.ttest_rel(x, y)[1]"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fedor1113/LineCodes | Decoder.ipynb | mit | [
"Decode line codes in png graphs\nAssumptions (format):\n\nThe clock is given and it is a red line on the top.\nThe signal line is black\n...",
"# Makes sure to install PyPNG image handling module\nimport sys\n!{sys.executable} -m pip install pypng\n\nimport png\n\nr = png.Reader(\"ex.png\")\nt = r.asRGB()\n\nimg = list(t[2])\n# print(img)",
"Outline\nThe outline of the idea is:\n\nFind the red lines that represent parallel synchronization signal above\nCalculate their size\n\"Synchromize with rows below\" (according to the rules of the code)\n...\nPROFIT!\n\n!!! Things to keep in mind:\n\ndeviations of red\ndeviations of black\nnoise - it might just break everything!\nbeginning and end of image...\n...\n\nA rather simple PNG we'll work with first:",
"# Let us first define colour red\n# We'll work with RGB for colours\n# So for accepted variants we'll make a list of 3-lists.\n\nclass colourlist(list):\n \"\"\"Just lists of 3-lists with some fancy methods to work with RGB colours\n \"\"\"\n \n def add_deviations(self, d=8): # Magical numbers are so magical!\n \"\"\"Adds deviations for RGB colours to a given list.\n Warning! Too huge - it takes forever.\n \n Input: list of 3-lists\n Output: None (side-effects - changes the list)\n \"\"\"\n \n #l = l[:] Nah, let's make it a method\n l = self\n \n v = len(l)\n max_deviation = d\n \n for i in range(v): # Iterate through the list of colours\n \n for j in range(-max_deviation, max_deviation+1): \n # Actually it is the deviation.\n \n #for k in range(3): # RGB! (no \"a\"s here)\n \n newcolour = self[i][:] # Take one of the original colours\n newcolour[0] = abs(newcolour[0]+j) # Create a deviation\n l.append(newcolour) \n # Append new colour to the end of the list. \n # <- Here it is changed!\n for j in range(-max_deviation, max_deviation+1): \n # Work with all the possibilities with this d\n newcolour1 = newcolour[:]\n newcolour1[1] = abs(newcolour1[1]+j)\n l.append(newcolour1) \n # Append new colour to the end of the list. Yeah! \n # <- Here it is changed!\n \n for j in range(-max_deviation, max_deviation+1): \n # Work with all the possibilities with this d\n newcolour2 = newcolour1[:]\n newcolour2[2] = abs(newcolour2[2]+j)\n l.append(newcolour2) # Append new colour to the end of the list. Yeah! \n # <- Here it is changed!\n \n return None\n\ndef withinDeviation(colour, cl, d=20):\n \"\"\"This is much more efficient!\n Input: 3-list (colour), colourlist, int\n Output: bool\n \"\"\"\n for el in cl:\n if (abs(colour[0] - el[0]) <= d and \n abs(colour[1] - el[1]) <= d and \n abs(colour[2] - el[2]) <= d):\n return True\n return False\n\n\n\naccepted_colours = colourlist([[118, 58, 57], [97, 71, 36], [132, 56, 46], [132, 46, 47], [141, 51, 53]]) # ...\n\n#accepted_colours.add_deviations()\n\n# print(accepted_colours) # -check it! - or better don't - it is a biiiig list....\n\n# print(len(accepted_colours)) # That will take a while... 
Heh..\n\ndef find_first_pixel_of_colour(pixellist, accepted_deviations):\n \"\"\"Returns the row and column of the pixel \n in a converted to list with RGB colours PNG\n \n Input: ..., colourlist\n Output: 2-tuple of int (or None)\n \"\"\"\n \n accepted_deviations = accepted_deviations[:]\n rows = len(pixellist)\n cols = len(pixellist[0])\n \n for j in range(rows):\n for i in range(0, cols, 3):\n # if [pixellist[j][i], pixellist[j][i+1], pixellist[j][i+2]] in accepted_deviations:\n if withinDeviation([pixellist[j][i], pixellist[j][i+1], pixellist[j][i+2]], accepted_deviations):\n return (j, i)\n \n return None\n\n\n\nfr = find_first_pixel_of_colour(img, accepted_colours)\n\nif fr is None:\n print(\"Warning a corrupt file or a wrong format!!!\")\n\nprint(fr)\nprint(img[fr[0]][fr[1]], img[fr[0]][fr[1]+1], img[fr[0]][fr[1]+2])\nprint(img[fr[0]])\n\n# [133, 56, 46] in accepted_colours\n\n# Let us now find the length of the red lines that represent the sync signal\n\ndef find_next_pixel_in_row(pixel, row, accepted_deviations):\n \"\"\"Returns the column of the next pixel of a given colour\n (with deviations) in a row from a converted to list with RGB \n colours PNG\n \n Input: 2-tuple of int, list of int with len%3==0, colourlist\n Output: int (returns -1 specifically if none are found)\n \"\"\"\n \n l = len(row)\n \n if pixel[1] >= l-1:\n return -1\n \n for i in range(pixel[1]+3, l, 3):\n # if [row[i], row[i+1], row[i+2]] in accepted_deviations:\n if withinDeviation([row[i], row[i+1], row[i+2]], accepted_deviations):\n return i\n \n return -1\n\n\n\ndef colour_line_length(pixels, start, colour, deviations=20):\n\n line_length = 1\n pr = start[:]\n r = (pr[0], \n find_next_pixel_in_row(pr, pixels[pr[0]], colour[:]))\n # print(pr, r)\n if not(r[1] == pr[1]+3):\n print(\"Ooops! Something went wrong!\")\n else:\n line_length += 1\n while (r[1] == pr[1]+3):\n pr = r\n r = (pr[0], \n find_next_pixel_in_row(pr, \n pixels[pr[0]], colour[:]))\n line_length += 1\n \n return line_length\n\n\n\nline_length = colour_line_length(img, fr, accepted_colours, deviations=20)\n \nprint(line_length) # !!!",
"We found the sync (clock) line length in our graph!",
"print(\"It is\", line_length)",
"Now the information transfer signal itself is ~\"black\", so we need to find the black colour range as well!",
"# Let's do just that\n\nblack = colourlist([[0, 0, 0], [0, 1, 0], [7, 2, 8]])\n# black.add_deviations(60) # experimentally it is somewhere around that\n# experimentally the max deviation is somewhere around 60\nprint(black)",
"The signal we are currently interested in is Manchester code (as per G.E. Thomas).\nIt is a self-clocking signal, but since we do have a clock with it - we use it)\nLet us find the height of the Manchester signal in our PNG - just because...",
"fb = find_first_pixel_of_colour(img, black)\n\ndef signal_height(pxls, fib):\n signal_height = 1\n # if ([img[fb[0]+1][fb[1]], img[fb[0]+1][fb[1]+1], img[fb[0]+1][fb[1]+2]] in black):\n if withinDeviation([pxls[fib[0]+1][fib[1]], pxls[fib[0]+1][fib[1]+1]\n , pxls[fib[0]+1][fib[1]+2]], black, 60):\n signal_height += 1\n i = 2\n rows = len(pxls)\n # while([img[fb[0]+i][fb[1]], img[fb[0]+i][fb[1]+1], img[fb[0]+i][fb[1]+2]] in black):\n while(withinDeviation([pxls[fib[0]+i][fib[1]]\n , pxls[fib[0]+i][fib[1]+1]\n , pxls[fib[0]+i][fib[1]+2]], black, 60)):\n signal_height += 1\n i += 1\n if (i >= rows):\n break\n else:\n print(\"\") # TO DO\n return signal_height\n\nsheight = signal_height(img, fb)-1\n\nprint(sheight)\n\n# Let's quickly find the last red line\n...\n\ndef manchester(pixels, start, clock, \n line_colour, d=60, inv=False):\n \"\"\"Decodes Manchester code (as per G. E. Thomas) \n (or with inv=True Manchester code\n (as per IEEE 802.4)).\n \n Input: array of int with len%3==0 (- PNG pixels),\n int, int, colourlist, int, bool (optional)\n Output: str (of '1' and '0') or None\n \"\"\"\n \n res = \"\"\n \n cols = len(pixels[0])\n fb = find_first_pixel_of_colour(pixels, line_colour)\n m = 2*clock*3-2*3 # Here be dragons!\n # Hack: only check it using the upper line \n # (or lack thereof)\n \n if not(inv):\n for i in range(start, cols-2*3, m):\n fromUP = withinDeviation([pixels[fb[0]][i-6], \n pixels[fb[0]][i-5], \n pixels[fb[0]][i-4]], \n line_colour, d)\n if fromUP:\n res = res + \"1\"\n else:\n res = res + \"0\"\n else:\n for i in range(start, cols-2*3, m):\n fromUP = withinDeviation([pixels[fb[0]][i-6], \n pixels[fb[0]][i-5], \n pixels[fb[0]][i-4]], \n line_colour, d)\n if cond:\n res = res + \"0\"\n else:\n res = res + \"1\"\n \n return res\n\ndef nrz(pixels, start, clock, \n line_colour, d=60, inv=False):\n \"\"\"Decodes NRZ code\n (or with inv=True its inversed version).\n It is assumed that there is indeed a valid\n NRZ code with a valid message.\n \n Input: array of int with len%3==0 (- PNG pixels),\n int, int, colourlist, int, bool (optional)\n Output: str (of '1' and '0') or (maybe?) None\n \"\"\"\n \n res = \"\"\n \n cols = len(pixels[0])\n fb = find_first_pixel_of_colour(pixels, line_colour)\n m = 2*clock*3-2*3 # Here be dragons!\n # Hack: only check it using the upper line \n # (or lack thereof)\n \n if not(inv):\n for i in range(start, cols, m):\n UP = withinDeviation([pixels[fb[0]][i], \n pixels[fb[0]][i+1], \n pixels[fb[0]][i+2]], \n line_colour, d)\n if UP:\n res = res + \"1\"\n else:\n res = res + \"0\"\n else:\n for i in range(start, cols-2*3, m):\n UP = withinDeviation([pixels[fb[0]][i], \n pixels[fb[0]][i+1], \n pixels[fb[0]][i+2]], \n line_colour, d)\n if cond:\n res = res + \"0\"\n else:\n res = res + \"1\"\n \n return res\n\ndef code2B1Q(pixels, start, clock=None, \n line_colour=[[0, 0, 0]], d=60, inv=False):\n \"\"\"Decodes 2B1Q code. The clock is not used - it\n is for compatibility only - really, so put \n anything there. Does _NOT_ always work!\n \n WARNING! 
Right now does not work AT ALL \n (apart from one specific case)\n \n Input: array of int with len%3==0 (- PNG pixels),\n int, *, colourlist, int\n Output: str (of '1' and '0') or None\n \"\"\"\n \n res = \"\"\n \n cols = len(pixels[0])\n fb = find_first_pixel_of_colour(pixels, line_colour) # (11, 33)\n # will only work if the first or second dibit is 0b11\n ll = colour_line_length(pixels, fb, line_colour, deviations=20) # 10\n sh = signal_height(pixels, fb) - 1 # 17 -1?\n m = ll*3-2*3 # will only work if there is a transition\n # (after the first dibit)\n # We only need to check if the line is\n # on the upper, middle upper or middle lower rows...\n \n for i in range(start, cols, m):\n UP = withinDeviation([pixels[fb[0]][i], \n pixels[fb[0]][i+1], \n pixels[fb[0]][i+2]], \n line_colour, d)\n DOWN = withinDeviation([pixels[fb[0]+sh][i], \n pixels[fb[0]+sh][i+1], \n pixels[fb[0]+sh][i+2]], \n line_colour, d)\n almostUP = UP\n # if UP:\n # res = res + \"10\"\n if DOWN: # elif DOWN:\n res = res + \"00\"\n # print(\"00\")\n elif almostUP:\n res = res + \"11\"\n # print(\"11\")\n else:\n res = res + \"01\"\n # print(\"01\")\n \n return res\n\n# A-a-and... here is magic!\n\nres = manchester(img, fr[1]+5*3, line_length, black, d=60, inv=False)\n\nans = []\nfor i in range(0, len(res), 8):\n ans.append(int('0b'+res[i:i+8], 2))\n# print(ans)\n\nfor i in range(0, len(ans)):\n print(ans[i])",
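"To make the decoding convention a bit more concrete, here is a tiny, self-contained sketch of Manchester code on plain bit strings. It is not part of the original pixel-based pipeline; it simply assumes the G. E. Thomas convention used by the decoder above ('1' as high-then-low, '0' as low-then-high; the IEEE 802.4 variant is the inverse).",
"# A minimal, illustrative sketch (assumption: the G. E. Thomas convention used above,\n# '1' -> high-then-low, '0' -> low-then-high); it does not touch the PNG helpers.\n\ndef manchester_encode(bits):\n    \"\"\"Encode a string of '0'/'1' into half-bit levels.\"\"\"\n    out = []\n    for b in bits:\n        out.extend(['H', 'L'] if b == '1' else ['L', 'H'])\n    return out\n\ndef manchester_decode(levels):\n    \"\"\"Decode pairs of half-bit levels back into bits.\"\"\"\n    decoded = \"\"\n    for first, second in zip(levels[0::2], levels[1::2]):\n        decoded += '1' if (first, second) == ('H', 'L') else '0'\n    return decoded\n\nencoded = manchester_encode('0110')\nprint(encoded)                     # ['L', 'H', 'H', 'L', 'H', 'L', 'L', 'H']\nprint(manchester_decode(encoded))  # 0110",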
"Huzzah!\nAnd that is how we decode it.\nLet us now look at some specific examples.",
"# Here is a helper function to automate all that\n\ndef parse_code(path_to_file, code, inv=False):\n \"\"\"Guess what... Parses a line code PNG\n \n Input: str, function \n (~coinsides with the name of the code)\n Output: str (of '1' and '0') or (maybe?) None\n \"\"\"\n \n r1 = png.Reader(path_to_file)\n t1 = r1.asRGB()\n img1 = list(t1[2])\n fr1 = find_first_pixel_of_colour(img1, accepted_colours)\n line_length1 = colour_line_length(img1, \n fr1, accepted_colours, deviations=20)\n \n res1 = code(img1, fr1[1]+5*3, line_length1, black, d=60, inv=inv)\n \n return res1\n\ndef print_nums(bitesstr):\n \"\"\"I hope you get the gist...\n \n Input: str\n Output: list (side effects - prints...)\n \"\"\"\n \n ans1 = []\n for i in range(0, len(bitesstr), 8):\n ans1.append(int('0b'+bitesstr[i:i+8], 2))\n \n for i in range(0, len(ans1)):\n print(ans1[i])\n \n return ans1",
"Manchester Code\n(a rather tricky example)\nHere is a tricky example of Manchester code - where we have ASCII '0's and '1's with which a 3-letter \"word\" is encoded.",
"ans1 = print_nums(parse_code(\"Line_Code_PNGs/Manchester.png\", manchester))\n\nres2d = \"\"\nfor i in range(0, len(ans1)):\n res2d += chr(ans1[i])\n\nans2d = []\nfor i in range(0, len(res2d), 8):\n print(int('0b'+res2d[i:i+8], 2))",
"NRZ",
"ans2 = print_nums(parse_code(\"Line_Code_PNGs/NRZ.png\", nrz))\n",
"2B1Q\nWarning! 2B1Q is currently almost completely broken. Pull requests with correct solutions are welcome :)",
"ans3 = print_nums(parse_code(\"Line_Code_PNGs/2B1Q.png\", code2B1Q))\n\nres2d3 = \"\"\nfor i in range(0, len(ans3)):\n res2d3 += chr(ans3[i])\n\nans2d3 = []\nfor i in range(0, len(res2d3), 8):\n print(int('0b'+res2d3[i:i+8], 2))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jorisvandenbossche/DS-python-data-analysis | notebooks/case4_air_quality_processing.ipynb | bsd-3-clause | [
"<p><font size=\"6\"><b> CASE - air quality data of European monitoring stations (AirBase)</b></font></p>\n\n\n© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons\n\n\nAirBase is the European air quality database maintained by the European Environment Agency (EEA). It contains air quality monitoring data and information submitted by participating countries throughout Europe. The air quality database consists of a multi-annual time series of air quality measurement data and statistics for a number of air pollutants.\nSome of the data files that are available from AirBase were included in the data folder: the hourly concentrations of nitrogen dioxide (NO2) for 4 different measurement stations:\n\nFR04037 (PARIS 13eme): urban background site at Square de Choisy\nFR04012 (Paris, Place Victor Basch): urban traffic site at Rue d'Alesia\nBETR802: urban traffic site in Antwerp, Belgium\nBETN029: rural background site in Houtem, Belgium\n\nSee http://www.eea.europa.eu/themes/air/interactive/no2",
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt",
"Processing a single file\nWe will start with processing one of the downloaded files (BETR8010000800100hour.1-1-1990.31-12-2012). Looking at the data, you will see it does not look like a nice csv file:",
"with open(\"data/BETR8010000800100hour.1-1-1990.31-12-2012\") as f:\n print(f.readline())",
"So we will need to do some manual processing.\nJust reading the tab-delimited data:",
"data = pd.read_csv(\"data/BETR8010000800100hour.1-1-1990.31-12-2012\", sep='\\t')#, header=None)\n\ndata.head()",
"The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data. Furthermore, there is no header row with column names.\n<div class=\"alert alert-success\">\n\n<b>EXERCISE 1</b>: <br><br> Clean up this dataframe by using more options of `pd.read_csv` (see its [docstring](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html))\n\n <ul>\n <li>specify the correct delimiter</li>\n <li>specify that the values of -999 and -9999 should be regarded as NaN</li>\n <li>specify our own column names (for how the column names are made up, see <a href=\"http://stackoverflow.com/questions/6356041/python-intertwining-two-lists\">http://stackoverflow.com/questions/6356041/python-intertwining-two-lists</a>)\n</ul>\n</div>",
"# Column names: list consisting of 'date' and then intertwined the hour of the day and 'flag'\nhours = [\"{:02d}\".format(i) for i in range(24)]\ncolumn_names = ['date'] + [item for pair in zip(hours, ['flag' + str(i) for i in range(24)]) for item in pair]\n\n# %load _solutions/case4_air_quality_processing1.py\n\n# %load _solutions/case4_air_quality_processing2.py",
"For the sake of this tutorial, we will disregard the 'flag' columns (indicating the quality of the data).\n<div class=\"alert alert-success\">\n\n**EXERCISE 2**:\n\nDrop all 'flag' columns ('flag1', 'flag2', ...)\n\n</div>",
"flag_columns = [col for col in data.columns if 'flag' in col]\n# we can now use this list to drop these columns\n\n# %load _solutions/case4_air_quality_processing3.py\n\ndata.head()",
"Now, we want to reshape it: our goal is to have the different hours as row indices, merged with the date into a datetime-index. Here we have a wide and long dataframe, and want to make this a long, narrow timeseries.\n<div class=\"alert alert-info\">\n\n<b>REMEMBER</b>: \n\n\nRecap: reshaping your data with [`stack` / `melt` and `unstack` / `pivot`](./pandas_08_reshaping_data.ipynb)</li>\n\n\n\n<img src=\"../img/pandas/schema-stack.svg\" width=70%>\n\n</div>\n\n<div class=\"alert alert-success\">\n\n<b>EXERCISE 3</b>:\n\n<br><br>\n\nReshape the dataframe to a timeseries. \nThe end result should look like:<br><br>\n\n\n<div class='center'>\n<table border=\"1\" class=\"dataframe\">\n <thead>\n <tr style=\"text-align: right;\">\n <th></th>\n <th>BETR801</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>1990-01-02 09:00:00</th>\n <td>48.0</td>\n </tr>\n <tr>\n <th>1990-01-02 12:00:00</th>\n <td>48.0</td>\n </tr>\n <tr>\n <th>1990-01-02 13:00:00</th>\n <td>50.0</td>\n </tr>\n <tr>\n <th>1990-01-02 14:00:00</th>\n <td>55.0</td>\n </tr>\n <tr>\n <th>...</th>\n <td>...</td>\n </tr>\n <tr>\n <th>2012-12-31 20:00:00</th>\n <td>16.5</td>\n </tr>\n <tr>\n <th>2012-12-31 21:00:00</th>\n <td>14.5</td>\n </tr>\n <tr>\n <th>2012-12-31 22:00:00</th>\n <td>16.5</td>\n </tr>\n <tr>\n <th>2012-12-31 23:00:00</th>\n <td>15.0</td>\n </tr>\n </tbody>\n</table>\n<p style=\"text-align:center\">170794 rows × 1 columns</p>\n</div>\n\n <ul>\n <li>Reshape the dataframe so that each row consists of one observation for one date + hour combination</li>\n <li>When you have the date and hour values as two columns, combine these columns into a datetime (tip: string columns can be summed to concatenate the strings) and remove the original columns</li>\n <li>Set the new datetime values as the index, and remove the original columns with date and hour values</li>\n\n</ul>\n\n\n**NOTE**: This is an advanced exercise. Do not spend too much time on it and don't hesitate to look at the solutions. \n\n</div>\n\nReshaping using melt:",
"# %load _solutions/case4_air_quality_processing4.py",
"Reshaping using stack:",
"# %load _solutions/case4_air_quality_processing5.py\n\n# %load _solutions/case4_air_quality_processing6.py",
"Combine date and hour:",
"# %load _solutions/case4_air_quality_processing7.py\n\n# %load _solutions/case4_air_quality_processing8.py\n\n# %load _solutions/case4_air_quality_processing9.py\n\ndata_stacked.head()",
"Our final data is now a time series. In pandas, this means that the index is a DatetimeIndex:",
"data_stacked.index\n\ndata_stacked.plot()",
"Processing a collection of files\nWe now have seen the code steps to process one of the files. We have however multiple files for the different stations with the same structure. Therefore, to not have to repeat the actual code, let's make a function from the steps we have seen above.\n<div class=\"alert alert-success\">\n\n<b>EXERCISE 4</b>:\n\n <ul>\n <li>Write a function <code>read_airbase_file(filename, station)</code>, using the above steps the read in and process the data, and that returns a processed timeseries.</li>\n</ul>\n</div>",
"def read_airbase_file(filename, station):\n \"\"\"\n Read hourly AirBase data files.\n \n Parameters\n ----------\n filename : string\n Path to the data file.\n station : string\n Name of the station.\n \n Returns\n -------\n DataFrame\n Processed dataframe.\n \"\"\"\n \n ...\n \n return ...\n\n# %load _solutions/case4_air_quality_processing10.py",
"Test the function on the data file from above:",
"import os\n\nfilename = \"data/BETR8010000800100hour.1-1-1990.31-12-2012\"\nstation = os.path.split(filename)[-1][:7]\n\nstation\n\ntest = read_airbase_file(filename, station)\ntest.head()",
"We now want to use this function to read in all the different data files from AirBase, and combine them into one Dataframe.\n<div class=\"alert alert-success\">\n\n**EXERCISE 5**:\n\nUse the [pathlib module](https://docs.python.org/3/library/pathlib.html) `Path` class in combination with the `glob` method to list all 4 AirBase data files that are included in the 'data' directory, and call the result `data_files`.\n\n<details><summary>Hints</summary>\n\n- The pathlib module provides a object oriented way to handle file paths. First, create a `Path` object of the data folder, `pathlib.Path(\"./data\")`. Next, apply the `glob` function to extract all the files containing `*0008001*` (use wildcard * to say \"any characters\"). The output is a Python generator, which you can collect as a `list()`.\n\n</details> \n\n\n</div>",
"from pathlib import Path\n\n# %load _solutions/case4_air_quality_processing11.py",
"<div class=\"alert alert-success\">\n\n**EXERCISE 6**:\n\n* Loop over the data files, read and process the file using our defined function, and append the dataframe to a list.\n* Combine the the different DataFrames in the list into a single DataFrame where the different columns are the different stations. Call the result `combined_data`.\n\n<details><summary>Hints</summary>\n\n- The `data_files` list contains `Path` objects (from the pathlib module). To get the actual file name as a string, use the `.name` attribute.\n- The station name is always first 7 characters of the file name.\n\n</details> \n\n\n</div>",
"# %load _solutions/case4_air_quality_processing12.py\n\n# %load _solutions/case4_air_quality_processing13.py\n\ncombined_data.head()",
"Finally, we don't want to have to repeat this each time we use the data. Therefore, let's save the processed data to a csv file.",
"# let's first give the index a descriptive name\ncombined_data.index.name = 'datetime'\n\ncombined_data.to_csv(\"airbase_data_processed.csv\")"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
LFPy/LFPy | examples/LFPy-example-02.ipynb | gpl-3.0 | [
"%matplotlib inline",
"Example 2: Extracellular response of synaptic input\nThis is an example of LFPy running in a Jupyter notebook. To run through this example code and produce output, press <shift-Enter> in each code block below.\nFirst step is to import LFPy and other packages for analysis and plotting:",
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.gridspec import GridSpec\nimport LFPy",
"Create some dictionarys with parameters for cell, synapse and extracellular electrode:",
"cellParameters = {\n 'morphology': 'morphologies/L5_Mainen96_LFPy.hoc',\n 'tstart': -50,\n 'tstop': 100,\n 'dt': 2**-4,\n 'passive': True,\n}\n\nsynapseParameters = {\n 'syntype': 'Exp2Syn',\n 'e': 0.,\n 'tau1': 0.5,\n 'tau2': 2.0,\n 'weight': 0.005,\n 'record_current': True,\n}\n\nz = np.mgrid[-400:1201:100]\nelectrodeParameters = {\n 'x': np.zeros(z.size),\n 'y': np.zeros(z.size),\n 'z': z,\n 'sigma': 0.3,\n}",
"Then, create the cell, synapse and electrode objects using the\nLFPy.Cell, LFPy.Synapse, LFPy.RecExtElectrode classes.",
"cell = LFPy.Cell(**cellParameters)\ncell.set_pos(x=-10, y=0, z=0)\ncell.set_rotation(x=4.98919, y=-4.33261, z=np.pi)\n\nsynapse = LFPy.Synapse(cell,\n idx = cell.get_closest_idx(z=800),\n **synapseParameters)\nsynapse.set_spike_times(np.array([10, 30, 50]))\n \nelectrode = LFPy.RecExtElectrode(cell, **electrodeParameters)",
"Run the simulation using cell.simulate() probing the extracellular potential with \nthe additional keyword argument probes=[electrode]",
"cell.simulate(probes=[electrode])",
"Then plot the somatic potential and the prediction obtained using the RecExtElectrode instance \n(now accessible as electrode.data):",
"fig = plt.figure(figsize=(12, 6))\ngs = GridSpec(2, 3)\n\nax0 = fig.add_subplot(gs[:, 0])\nax0.plot(cell.x.T, cell.z.T, 'k')\nax0.plot(synapse.x, synapse.z, \n color='r', marker='o', markersize=10,\n label='synapse')\nax0.plot(electrode.x, electrode.z, '.', color='g', \n label='electrode')\nax0.axis([-500, 500, -450, 1250])\nax0.legend()\nax0.set_xlabel('x (um)')\nax0.set_ylabel('z (um)')\nax0.set_title('morphology')\n\nax1 = fig.add_subplot(gs[0, 1])\nax1.plot(cell.tvec, synapse.i, 'r')\nax1.set_title('synaptic current (pA)')\nplt.setp(ax1.get_xticklabels(), visible=False)\n\nax2 = fig.add_subplot(gs[1, 1], sharex=ax1)\nax2.plot(cell.tvec, cell.somav, 'k')\nax2.set_title('somatic voltage (mV)')\n\nax3 = fig.add_subplot(gs[:, 2], sharey=ax0, sharex=ax1)\nim = ax3.pcolormesh(cell.tvec, electrode.z, electrode.data,\n vmin=-abs(electrode.data).max(), vmax=abs(electrode.data).max(),\n shading='auto')\nplt.colorbar(im)\nax3.set_title('LFP (mV)')\nax3.set_xlabel('time (ms)')\n\n#savefig('LFPy-example-02.pdf', dpi=300)"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
teoguso/sol_1116 | cumulant-to-pdf.ipynb | mit | [
"Best report ever\nEverything you see here is either markdown, LaTex, Python or BASH.\nThe spectral function\nIt looks like this:\n\\begin{equation}\n A(\\omega) = \\mathrm{Im}|G(\\omega)|\n\\end{equation}\nGW vs Cumulant\nMathematically very different:\n\\begin{equation}\n G^{GW} (\\omega) = \\frac1{ \\omega - \\epsilon - \\Sigma (\\omega) } \n\\end{equation}\n\\begin{equation}\n G^C(t_1, t_2) = G^0(t_1, t_2) e^{ i \\int_{t_1}^{t_2} \\int_{t'}^{t_2} dt' dt'' W (t', t'') }\n\\end{equation}\n\nBUT they connect through $\\mathrm{Im} W (\\omega) = \\frac1\\pi \\mathrm{Im} \\Sigma ( \\epsilon - \\omega )$.\nImplementation\nUsing a multi-pole representation for $\\Sigma^{GW}$:\n\\begin{equation}\n \\mathrm{Im} W (\\omega) = \\frac1\\pi \\mathrm{Im} \\Sigma ( \\epsilon - \\omega )\n\\end{equation}\n\\begin{equation}\n W (\\tau) = - i \\lambda \\bigl[ e^{ i \\omega_p \\tau } \\theta ( - \\tau ) + e^{ - i \\omega_p \\tau } \\theta ( \\tau ) \\bigr]\n\\end{equation}\nGW vs Cumulant\n\n\nGW:\n\\begin{equation}\n A(\\omega) = \\frac1\\pi \\frac{\\mathrm{Im}\\Sigma (\\omega)} \n { [ \\omega - \\epsilon - \\mathrm{Re}\\Sigma (\\omega) ]^2 + \n [ \\mathrm{Im}\\Sigma (\\omega) ]^2}\n\\end{equation}\n\n\nCumulant:\n\n\n\\begin{equation}\n A(\\omega) = \\frac1\\pi \\sum_{n=0}^{\\infty} \\frac{a^n}{n!} \\frac{\\Gamma}{ (\\omega - \\epsilon + n \\omega_p)^2 + \\Gamma^2 }\n\\end{equation}\n\nNow some executable code (Python)\nI have implemented the formulas above in my Python code. \nI can just run it from here., but before let me check\nif my input file is correct...",
"!gvim data/SF_Si_bulk/invar.in",
"Now I can run my script:",
"%cd data/SF_Si_bulk/\n%run ../../../../../Code/SF/sf.py",
"Not very elegant, I know. It's just for demo pourposes.",
"cd ../../../",
"I have first to import a few modules/set up a few things:",
"from __future__ import print_function\nimport numpy as np\nimport matplotlib.pyplot as plt\n# plt.rcParams['figure.figsize'] = (9., 6.)\n%matplotlib inline",
"Next I can read the data from a local folder:",
"sf_c = np.genfromtxt(\n 'data/SF_Si_bulk/Spfunctions/spftot_exp_kpt_1_19_bd_1_4_s1.0_p1.0_800ev_np1.dat')\nsf_gw = np.genfromtxt(\n 'data/SF_Si_bulk/Spfunctions/spftot_gw_s1.0_p1.0_800ev.dat')\n#!gvim spftot_exp_kpt_1_19_bd_1_4_s1.0_p1.0_800ev_np1.dat",
"Now I can plot the stored arrays.",
"plt.plot(sf_c[:,0], sf_c[:,1], label='1-pole cumulant')\nplt.plot(sf_gw[:,0], sf_gw[:,1], label='GW')\nplt.xlim(-50, 0)\nplt.ylim(0, 300)\nplt.title(\"Bulk Si - Spectral function - ib=1, ikpt=1\")\nplt.xlabel(\"Energy (eV)\")\nplt.grid(); plt.legend(loc='best')",
"Creating a PDF document\nI can create a PDF version of this notebook from itself, using the command line:",
"!jupyter-nbconvert --to pdf cumulant-to-pdf.ipynb\n\npwd\n\n!xpdf cumulant-to-pdf.pdf"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
NLP-Deeplearning-Club/Classic-ML-Methods-Algo | ipynbs/appendix/ensemble/voting.ipynb | mit | [
"投票\n投票是最简单最基本的集成方式,核心思想也很朴素:大家伙投票决定结果.\n其原理是结合了多个不同的机器学习分类器,并且采用多数表决(硬投票)或者平均预测概率(软投票)的方式来预测分类标签.这样的分类器可以用于一组同样表现良好的模型,以便平衡它们各自的弱点.\n使用sklearn做投票\nsklearn提供了用于投票的接口sklearn.ensemble.VotingClassifier.下面的例子可以大体了解如何使用投票接口",
"import numpy as np\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.ensemble import RandomForestClassifier, VotingClassifier",
"自己构造一组随机的数据",
"X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])\ny = np.array([1, 1, 1, 2, 2, 2])",
"初始化多个分类器模型",
"clf1 = LogisticRegression(random_state=1)\nclf2 = RandomForestClassifier(random_state=1)\nclf3 = GaussianNB()",
"初始化投票器",
"eclf1 = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)], voting='hard')",
"训练模型",
"eclf1 = eclf1.fit(X, y)",
"预测",
"print(eclf1.predict(X))",
"投票器的设置\n投票器可以设置\n\n\nvoting \nhard表示直接以多数原则投票确定结果,soft表示基于预测概率之和的argmax来预测类别标签\n\n\nn_jobs\n并行任务数设置\n\n\nweights\n不同分类器的权重\n\n\nflatten_transform(0.19版本接口)\n仅当voting为'soft'时有用.flatten_transform = true时影响变换输出的形状,变换方法返回形为(n_samples,n_classifiers * n_classes).如果flatten_transform = false,则返回(n_classifiers,n_samples,n_classes)",
"eclf3 = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],n_jobs=3, voting='soft', weights=[2,1,1])\n\neclf3 = eclf3.fit(X, y)\n\nprint(eclf3.predict(X))\n\nprint(eclf3.transform(X).shape)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n | site/en-snapshot/addons/tutorials/optimizers_cyclicallearningrate.ipynb | apache-2.0 | [
"Copyright 2021 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"TensorFlow Addons Optimizers: CyclicalLearningRate\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/addons/tutorials/optimizers_cyclicallearningrate\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/addons/blob/master/docs/tutorials/optimizers_cyclicallearningrate.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/addons/blob/master/docs/tutorials/optimizers_cyclicallearningrate.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/addons/docs/tutorials/optimizers_cyclicallearningrate.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nOverview\nThis tutorial demonstrates the use of Cyclical Learning Rate from the Addons package.\nCyclical Learning Rates\nIt has been shown it is beneficial to adjust the learning rate as training progresses for a neural network. It has manifold benefits ranging from saddle point recovery to preventing numerical instabilities that may arise during backpropagation. But how does one know how much to adjust with respect to a particular training timestamp? In 2015, Leslie Smith noticed that you would want to increase the learning rate to traverse faster across the loss landscape but you would also want to reduce the learning rate when approaching convergence. To realize this idea, he proposed Cyclical Learning Rates (CLR) where you would adjust the learning rate with respect to the cycles of a function. For a visual demonstration, you can check out this blog. CLR is now available as a TensorFlow API. For more details, check out the original paper here. \nSetup",
"!pip install -q -U tensorflow_addons\n\nfrom tensorflow.keras import layers\nimport tensorflow_addons as tfa\nimport tensorflow as tf\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ntf.random.set_seed(42)\nnp.random.seed(42)",
"Load and prepare dataset",
"(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()\n\nx_train = np.expand_dims(x_train, -1)\nx_test = np.expand_dims(x_test, -1)",
"Define hyperparameters",
"BATCH_SIZE = 64\nEPOCHS = 10\nINIT_LR = 1e-4\nMAX_LR = 1e-2",
"Define model building and model training utilities",
"def get_training_model():\n model = tf.keras.Sequential(\n [\n layers.InputLayer((28, 28, 1)),\n layers.experimental.preprocessing.Rescaling(scale=1./255),\n layers.Conv2D(16, (5, 5), activation=\"relu\"),\n layers.MaxPooling2D(pool_size=(2, 2)),\n layers.Conv2D(32, (5, 5), activation=\"relu\"),\n layers.MaxPooling2D(pool_size=(2, 2)),\n layers.SpatialDropout2D(0.2),\n layers.GlobalAvgPool2D(),\n layers.Dense(128, activation=\"relu\"),\n layers.Dense(10, activation=\"softmax\"),\n ]\n )\n return model\n\ndef train_model(model, optimizer):\n model.compile(loss=\"sparse_categorical_crossentropy\", optimizer=optimizer,\n metrics=[\"accuracy\"])\n history = model.fit(x_train,\n y_train,\n batch_size=BATCH_SIZE,\n validation_data=(x_test, y_test),\n epochs=EPOCHS)\n return history",
"In the interest of reproducibility, the initial model weights are serialized which you will be using to conduct our experiments.",
"initial_model = get_training_model()\ninitial_model.save(\"initial_model\")",
"Train a model without CLR",
"standard_model = tf.keras.models.load_model(\"initial_model\")\nno_clr_history = train_model(standard_model, optimizer=\"sgd\")",
"Define CLR schedule\nThe tfa.optimizers.CyclicalLearningRate module return a direct schedule that can be passed to an optimizer. The schedule takes a step as its input and outputs a value calculated using CLR formula as laid out in the paper.",
"steps_per_epoch = len(x_train) // BATCH_SIZE\nclr = tfa.optimizers.CyclicalLearningRate(initial_learning_rate=INIT_LR,\n maximal_learning_rate=MAX_LR,\n scale_fn=lambda x: 1/(2.**(x-1)),\n step_size=2 * steps_per_epoch\n)\noptimizer = tf.keras.optimizers.SGD(clr)",
"Here, you specify the lower and upper bounds of the learning rate and the schedule will oscillate in between that range ([1e-4, 1e-2] in this case). scale_fn is used to define the function that would scale up and scale down the learning rate within a given cycle. step_size defines the duration of a single cycle. A step_size of 2 means you need a total of 4 iterations to complete one cycle. The recommended value for step_size is as follows:\nfactor * steps_per_epoch where factor lies within the [2, 8] range. \nIn the same CLR paper, Leslie also presented a simple and elegant method to choose the bounds for learning rate. You are encouraged to check it out as well. This blog post provides a nice introduction to the method. \nBelow, you visualize how the clr schedule looks like.",
"step = np.arange(0, EPOCHS * steps_per_epoch)\nlr = clr(step)\nplt.plot(step, lr)\nplt.xlabel(\"Steps\")\nplt.ylabel(\"Learning Rate\")\nplt.show()",
"In order to better visualize the effect of CLR, you can plot the schedule with an increased number of steps.",
"step = np.arange(0, 100 * steps_per_epoch)\nlr = clr(step)\nplt.plot(step, lr)\nplt.xlabel(\"Steps\")\nplt.ylabel(\"Learning Rate\")\nplt.show()",
"The function you are using in this tutorial is referred to as the triangular2 method in the CLR paper. There are other two functions there were explored namely triangular and exp (short for exponential). \nTrain a model with CLR",
"clr_model = tf.keras.models.load_model(\"initial_model\")\nclr_history = train_model(clr_model, optimizer=optimizer)",
"As expected the loss starts higher than the usual and then it stabilizes as the cycles progress. You can confirm this visually with the plots below. \nVisualize losses",
"(fig, ax) = plt.subplots(2, 1, figsize=(10, 8))\n\nax[0].plot(no_clr_history.history[\"loss\"], label=\"train_loss\")\nax[0].plot(no_clr_history.history[\"val_loss\"], label=\"val_loss\")\nax[0].set_title(\"No CLR\")\nax[0].set_xlabel(\"Epochs\")\nax[0].set_ylabel(\"Loss\")\nax[0].set_ylim([0, 2.5])\nax[0].legend()\n\nax[1].plot(clr_history.history[\"loss\"], label=\"train_loss\")\nax[1].plot(clr_history.history[\"val_loss\"], label=\"val_loss\")\nax[1].set_title(\"CLR\")\nax[1].set_xlabel(\"Epochs\")\nax[1].set_ylabel(\"Loss\")\nax[1].set_ylim([0, 2.5])\nax[1].legend()\n\nfig.tight_layout(pad=3.0)\nfig.show()",
"Even though for this toy example, you did not see the effects of CLR much but be noted that it is one of the main ingredients behind Super Convergence and can have a really good impact when training in large-scale settings."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io | dev/_downloads/5b9edf9c05aec2b9bb1f128f174ca0f3/40_cluster_1samp_time_freq.ipynb | bsd-3-clause | [
"%matplotlib inline",
"Non-parametric 1 sample cluster statistic on single trial power\nThis script shows how to estimate significant clusters\nin time-frequency power estimates. It uses a non-parametric\nstatistical procedure based on permutations and cluster\nlevel statistics.\nThe procedure consists of:\n\nextracting epochs\ncompute single trial power estimates\nbaseline line correct the power estimates (power ratios)\ncompute stats to see if ratio deviates from 1.\n\nHere, the unit of observation is epochs from a specific study subject.\nHowever, the same logic applies when the unit of observation is\na number of study subjects each of whom contribute their own averaged\ndata (i.e., an average of their epochs). This would then be considered\nan analysis at the \"2nd level\".\nFor more information on cluster-based permutation testing in MNE-Python,\nsee also: tut-cluster-spatiotemporal-sensor",
"# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>\n# Stefan Appelhoff <stefan.appelhoff@mailbox.org>\n#\n# License: BSD-3-Clause\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.stats\n\nimport mne\nfrom mne.time_frequency import tfr_morlet\nfrom mne.stats import permutation_cluster_1samp_test\nfrom mne.datasets import sample",
"Set parameters",
"data_path = sample.data_path()\nmeg_path = data_path / 'MEG' / 'sample'\nraw_fname = meg_path / 'sample_audvis_raw.fif'\ntmin, tmax, event_id = -0.3, 0.6, 1\n\n# Setup for reading the raw data\nraw = mne.io.read_raw_fif(raw_fname)\nevents = mne.find_events(raw, stim_channel='STI 014')\n\ninclude = []\nraw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more\n\n# picks MEG gradiometers\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,\n stim=False, include=include, exclude='bads')\n\n# Load condition 1\nevent_id = 1\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), preload=True,\n reject=dict(grad=4000e-13, eog=150e-6))\n\n# just use right temporal sensors for speed\nepochs.pick_channels(mne.read_vectorview_selection('Right-temporal'))\nevoked = epochs.average()\n\n# Factor to down-sample the temporal dimension of the TFR computed by\n# tfr_morlet. Decimation occurs after frequency decomposition and can\n# be used to reduce memory usage (and possibly computational time of downstream\n# operations such as nonparametric statistics) if you don't need high\n# spectrotemporal resolution.\ndecim = 5\n\n# define frequencies of interest\nfreqs = np.arange(8, 40, 2)\n\n# run the TFR decomposition\ntfr_epochs = tfr_morlet(epochs, freqs, n_cycles=4., decim=decim,\n average=False, return_itc=False, n_jobs=None)\n\n# Baseline power\ntfr_epochs.apply_baseline(mode='logratio', baseline=(-.100, 0))\n\n# Crop in time to keep only what is between 0 and 400 ms\nevoked.crop(-0.1, 0.4)\ntfr_epochs.crop(-0.1, 0.4)\n\nepochs_power = tfr_epochs.data",
"Define adjacency for statistics\nTo perform a cluster-based permutation test, we need a suitable definition\nfor the adjacency of sensors, time points, and frequency bins.\nThe adjacency matrix will be used to form clusters.\nWe first compute the sensor adjacency, and then combine that with a\n\"lattice\" adjacency for the time-frequency plane, which assumes\nthat elements at index \"N\" are adjacent to elements at indices\n\"N + 1\" and \"N - 1\" (forming a \"grid\" on the time-frequency plane).",
"# find_ch_adjacency first attempts to find an existing \"neighbor\"\n# (adjacency) file for given sensor layout.\n# If such a file doesn't exist, an adjacency matrix is computed on the fly,\n# using Delaunay triangulations.\nsensor_adjacency, ch_names = mne.channels.find_ch_adjacency(\n tfr_epochs.info, 'grad')\n\n# In this case, find_ch_adjacency finds an appropriate file and\n# reads it (see log output: \"neuromag306planar\").\n# However, we need to subselect the channels we are actually using\nuse_idx = [ch_names.index(ch_name)\n for ch_name in tfr_epochs.ch_names]\nsensor_adjacency = sensor_adjacency[use_idx][:, use_idx]\n\n# Our sensor adjacency matrix is of shape n_chs × n_chs\nassert sensor_adjacency.shape == \\\n (len(tfr_epochs.ch_names), len(tfr_epochs.ch_names))\n\n# Now we need to prepare adjacency information for the time-frequency\n# plane. For that, we use \"combine_adjacency\", and pass dimensions\n# as in the data we want to test (excluding observations). Here:\n# channels × frequencies × times\nassert epochs_power.data.shape == (\n len(epochs), len(tfr_epochs.ch_names),\n len(tfr_epochs.freqs), len(tfr_epochs.times))\nadjacency = mne.stats.combine_adjacency(\n sensor_adjacency, len(tfr_epochs.freqs), len(tfr_epochs.times))\n\n# The overall adjacency we end up with is a square matrix with each\n# dimension matching the data size (excluding observations) in an\n# \"unrolled\" format, so: len(channels × frequencies × times)\nassert adjacency.shape[0] == adjacency.shape[1] == \\\n len(tfr_epochs.ch_names) * len(tfr_epochs.freqs) * len(tfr_epochs.times)",
"Compute statistic\nFor forming clusters, we need to specify a critical test statistic threshold.\nOnly data bins exceeding this threshold will be used to form clusters.\nHere, we\nuse a t-test and can make use of Scipy's percent point function of the t\ndistribution to get a t-value that corresponds to a specific alpha level\nfor significance. This threshold is often called the\n\"cluster forming threshold\".\n<div class=\"alert alert-info\"><h4>Note</h4><p>The choice of the threshold is more or less arbitrary. Choosing\n a t-value corresponding to p=0.05, p=0.01, or p=0.001 may often provide\n a good starting point. Depending on the specific dataset you are working\n with, you may need to adjust the threshold.</p></div>",
"# We want a two-tailed test\ntail = 0\n\n# In this example, we wish to set the threshold for including data bins in\n# the cluster forming process to the t-value corresponding to p=0.001 for the\n# given data.\n#\n# Because we conduct a two-tailed test, we divide the p-value by 2 (which means\n# we're making use of both tails of the distribution).\n# As the degrees of freedom, we specify the number of observations\n# (here: epochs) minus 1.\n# Finally, we subtract 0.001 / 2 from 1, to get the critical t-value\n# on the right tail (this is needed for MNE-Python internals)\ndegrees_of_freedom = len(epochs) - 1\nt_thresh = scipy.stats.t.ppf(1 - 0.001 / 2, df=degrees_of_freedom)\n\n# Set the number of permutations to run.\n# Warning: 50 is way too small for a real-world analysis (where values of 5000\n# or higher are used), but here we use it to increase the computation speed.\nn_permutations = 50\n\n# Run the analysis\nT_obs, clusters, cluster_p_values, H0 = \\\n permutation_cluster_1samp_test(epochs_power, n_permutations=n_permutations,\n threshold=t_thresh, tail=tail,\n adjacency=adjacency,\n out_type='mask', verbose=True)",
"View time-frequency plots\nWe now visualize the observed clusters that are statistically significant\nunder our permutation distribution.\n<div class=\"alert alert-danger\"><h4>Warning</h4><p>Talking about \"significant clusters\" can be convenient, but\n you must be aware of all associated caveats! For example, it\n is **invalid** to interpret the cluster p value as being\n spatially or temporally specific. A cluster with sufficiently\n low (for example < 0.05) p value at specific location does not\n allow you to say that the significant effect is at that\n particular location. The p value only tells you about the\n probability of obtaining similar or stronger/larger cluster\n anywhere in the data if there were no differences between the\n compared conditions. So it only allows you to draw conclusions\n about the differences in the data \"in general\", not at specific\n locations. See the comprehensive\n [FieldTrip tutorial](ft_cluster_) for more information.\n [FieldTrip tutorial](ft_cluster_) for more information.</p></div>\n\n.. include:: ../../links.inc",
"evoked_data = evoked.data\ntimes = 1e3 * evoked.times\n\nplt.figure()\nplt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43)\n\nT_obs_plot = np.nan * np.ones_like(T_obs)\nfor c, p_val in zip(clusters, cluster_p_values):\n if p_val <= 0.05:\n T_obs_plot[c] = T_obs[c]\n\n# Just plot one channel's data\n# use the following to show a specific one:\n# ch_idx = tfr_epochs.ch_names.index('MEG 1332')\nch_idx, f_idx, t_idx = np.unravel_index(\n np.nanargmax(np.abs(T_obs_plot)), epochs_power.shape[1:])\n\nvmax = np.max(np.abs(T_obs))\nvmin = -vmax\nplt.subplot(2, 1, 1)\nplt.imshow(T_obs[ch_idx], cmap=plt.cm.gray,\n extent=[times[0], times[-1], freqs[0], freqs[-1]],\n aspect='auto', origin='lower', vmin=vmin, vmax=vmax)\nplt.imshow(T_obs_plot[ch_idx], cmap=plt.cm.RdBu_r,\n extent=[times[0], times[-1], freqs[0], freqs[-1]],\n aspect='auto', origin='lower', vmin=vmin, vmax=vmax)\nplt.colorbar()\nplt.xlabel('Time (ms)')\nplt.ylabel('Frequency (Hz)')\nplt.title(f'Induced power ({tfr_epochs.ch_names[ch_idx]})')\n\nax2 = plt.subplot(2, 1, 2)\nevoked.plot(axes=[ax2], time_unit='s')\nplt.show()"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jasonding1354/pyDataScienceToolkits_Base | Scikit-learn/.ipynb_checkpoints/(3)linear_regression-checkpoint.ipynb | mit | [
"内容概要\n\n如何使用pandas读入数据\n如何使用seaborn进行数据的可视化\nscikit-learn的线性回归模型和使用方法\n线性回归模型的评估测度\n特征选择的方法\n\n作为有监督学习,分类问题是预测类别结果,而回归问题是预测一个连续的结果。\n1. 使用pandas来读取数据\nPandas是一个用于数据探索、数据处理、数据分析的Python库",
"import pandas as pd\n\n# read csv file directly from a URL and save the results\ndata = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)\n\n# display the first 5 rows\ndata.head()",
"上面显示的结果类似一个电子表格,这个结构称为Pandas的数据帧(data frame)。\npandas的两个主要数据结构:Series和DataFrame:\n- Series类似于一维数组,它有一组数据以及一组与之相关的数据标签(即索引)组成。\n- DataFrame是一个表格型的数据结构,它含有一组有序的列,每列可以是不同的值类型。DataFrame既有行索引也有列索引,它可以被看做由Series组成的字典。",
"# display the last 5 rows\ndata.tail()\n\n# check the shape of the DataFrame(rows, colums)\ndata.shape",
"特征:\n- TV:对于一个给定市场中单一产品,用于电视上的广告费用(以千为单位)\n- Radio:在广播媒体上投资的广告费用\n- Newspaper:用于报纸媒体的广告费用\n响应:\n- Sales:对应产品的销量\n在这个案例中,我们通过不同的广告投入,预测产品销量。因为响应变量是一个连续的值,所以这个问题是一个回归问题。数据集一共有200个观测值,每一组观测对应一个市场的情况。",
"import seaborn as sns\n\n%matplotlib inline\n\n# visualize the relationship between the features and the response using scatterplots\nsns.pairplot(data, x_vars=['TV','Radio','Newspaper'], y_vars='Sales', size=7, aspect=0.8)",
"seaborn的pairplot函数绘制X的每一维度和对应Y的散点图。通过设置size和aspect参数来调节显示的大小和比例。可以从图中看出,TV特征和销量是有比较强的线性关系的,而Radio和Sales线性关系弱一些,Newspaper和Sales线性关系更弱。通过加入一个参数kind='reg',seaborn可以添加一条最佳拟合直线和95%的置信带。",
"sns.pairplot(data, x_vars=['TV','Radio','Newspaper'], y_vars='Sales', size=7, aspect=0.8, kind='reg')",
"2. 线性回归模型\n优点:快速;没有调节参数;可轻易解释;可理解\n缺点:相比其他复杂一些的模型,其预测准确率不是太高,因为它假设特征和响应之间存在确定的线性关系,这种假设对于非线性的关系,线性回归模型显然不能很好的对这种数据建模。\n线性模型表达式:\n$y = \\beta_0 + \\beta_1x_1 + \\beta_2x_2 + ... + \\beta_nx_n$\n其中\n- y是响应\n- $\\beta_0是截距$\n- $\\beta_1是x1的系数,以此类推$\n在这个案例中:\n$y = \\beta_0 + \\beta_1TV + \\beta_2Radio + ... + \\beta_n*Newspaper$\n(1)使用pandas来构建X和y\n\nscikit-learn要求X是一个特征矩阵,y是一个NumPy向量\npandas构建在NumPy之上\n因此,X可以是pandas的DataFrame,y可以是pandas的Series,scikit-learn可以理解这种结构",
"# create a python list of feature names\nfeature_cols = ['TV', 'Radio', 'Newspaper']\n\n# use the list to select a subset of the original DataFrame\nX = data[feature_cols]\n\n# equivalent command to do this in one line\nX = data[['TV', 'Radio', 'Newspaper']]\n\n# print the first 5 rows\nX.head()\n\n# check the type and shape of X\nprint type(X)\nprint X.shape\n\n# select a Series from the DataFrame\ny = data['Sales']\n\n# equivalent command that works if there are no spaces in the column name\ny = data.Sales\n\n# print the first 5 values\ny.head()\n\nprint type(y)\nprint y.shape",
"(2)构造训练集和测试集",
"from sklearn.cross_validation import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)\n\n# default split is 75% for training and 25% for testing\nprint X_train.shape\nprint y_train.shape\nprint X_test.shape\nprint y_test.shape",
"(3)Scikit-learn的线性回归",
"from sklearn.linear_model import LinearRegression\n\nlinreg = LinearRegression()\n\nlinreg.fit(X_train, y_train)\n\nprint linreg.intercept_\nprint linreg.coef_\n\n# pair the feature names with the coefficients\nzip(feature_cols, linreg.coef_)",
"$y = 2.88 + 0.0466 * TV + 0.179 * Radio + 0.00345 * Newspaper$\n如何解释各个特征对应的系数的意义?\n- 对于给定了Radio和Newspaper的广告投入,如果在TV广告上每多投入1个单位,对应销量将增加0.0466个单位\n- 更明确一点,加入其它两个媒体投入固定,在TV广告上没增加1000美元(因为单位是1000美元),销量将增加46.6(因为单位是1000)\n(4)预测",
"y_pred = linreg.predict(X_test)",
"3. 回归问题的评价测度\n对于分类问题,评价测度是准确率,但这种方法不适用于回归问题。我们使用针对连续数值的评价测度(evaluation metrics)。\n下面介绍三种常用的针对回归问题的评价测度",
"# define true and predicted response values\ntrue = [100, 50, 30, 20]\npred = [90, 50, 50, 30]",
"(1)平均绝对误差(Mean Absolute Error, MAE)\n$\\frac{1}{n}\\sum_{i=1}^{n}|y_i - \\hat{y_i}|$\n(2)均方误差(Mean Squared Error, MSE)\n$\\frac{1}{n}\\sum_{i=1}^{n}(y_i - \\hat{y_i})^2$\n(3)均方根误差(Root Mean Squared Error, RMSE)\n$\\sqrt{\\frac{1}{n}\\sum_{i=1}^{n}(y_i - \\hat{y_i})^2}$",
"from sklearn import metrics\nimport numpy as np\n# calculate MAE by hand\nprint \"MAE by hand:\",(10 + 0 + 20 + 10)/4.\n\n# calculate MAE using scikit-learn\nprint \"MAE:\",metrics.mean_absolute_error(true, pred)\n\n# calculate MSE by hand\nprint \"MSE by hand:\",(10**2 + 0**2 + 20**2 + 10**2)/4.\n\n# calculate MSE using scikit-learn\nprint \"MSE:\",metrics.mean_squared_error(true, pred)\n\n\n# calculate RMSE by hand\nprint \"RMSE by hand:\",np.sqrt((10**2 + 0**2 + 20**2 + 10**2)/4.)\n\n# calculate RMSE using scikit-learn\nprint \"RMSE:\",np.sqrt(metrics.mean_squared_error(true, pred))",
"计算Sales预测的RMSE",
"print np.sqrt(metrics.mean_squared_error(y_test, y_pred))",
"4. 特征选择\n在之前展示的数据中,我们看到Newspaper和销量之间的线性关系比较弱,现在我们移除这个特征,看看线性回归预测的结果的RMSE如何?",
"feature_cols = ['TV', 'Radio']\n\nX = data[feature_cols]\ny = data.Sales\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)\n\nlinreg.fit(X_train, y_train)\n\ny_pred = linreg.predict(X_test)\n\nprint np.sqrt(metrics.mean_squared_error(y_test, y_pred))",
"我们将Newspaper这个特征移除之后,得到RMSE变小了,说明Newspaper特征不适合作为预测销量的特征,于是,我们得到了新的模型。我们还可以通过不同的特征组合得到新的模型,看看最终的误差是如何的。"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
atlury/deep-opencl | cs480/23 Linear Dimensionality Reduction.ipynb | lgpl-3.0 | [
"$\\newcommand{\\xv}{\\mathbf{x}}\n\\newcommand{\\Xv}{\\mathbf{X}}\n\\newcommand{\\yv}{\\mathbf{y}}\n\\newcommand{\\Yv}{\\mathbf{Y}}\n\\newcommand{\\zv}{\\mathbf{z}}\n\\newcommand{\\av}{\\mathbf{a}}\n\\newcommand{\\Wv}{\\mathbf{W}}\n\\newcommand{\\wv}{\\mathbf{w}}\n\\newcommand{\\betav}{\\mathbf{\\beta}}\n\\newcommand{\\gv}{\\mathbf{g}}\n\\newcommand{\\Hv}{\\mathbf{H}}\n\\newcommand{\\dv}{\\mathbf{d}}\n\\newcommand{\\Vv}{\\mathbf{V}}\n\\newcommand{\\vv}{\\mathbf{v}}\n\\newcommand{\\Uv}{\\mathbf{U}}\n\\newcommand{\\uv}{\\mathbf{u}}\n\\newcommand{\\tv}{\\mathbf{t}}\n\\newcommand{\\Tv}{\\mathbf{T}}\n\\newcommand{\\Sv}{\\mathbf{S}}\n\\newcommand{\\Gv}{\\mathbf{G}}\n\\newcommand{\\zv}{\\mathbf{z}}\n\\newcommand{\\Zv}{\\mathbf{Z}}\n\\newcommand{\\Norm}{\\mathcal{N}}\n\\newcommand{\\muv}{\\boldsymbol{\\mu}}\n\\newcommand{\\sigmav}{\\boldsymbol{\\sigma}}\n\\newcommand{\\phiv}{\\boldsymbol{\\phi}}\n\\newcommand{\\Phiv}{\\boldsymbol{\\Phi}}\n\\newcommand{\\Sigmav}{\\boldsymbol{\\Sigma}}\n\\newcommand{\\Lambdav}{\\boldsymbol{\\Lambda}}\n\\newcommand{\\half}{\\frac{1}{2}}\n\\newcommand{\\argmax}[1]{\\underset{#1}{\\operatorname{argmax}}}\n\\newcommand{\\argmin}[1]{\\underset{#1}{\\operatorname{argmin}}}\n\\newcommand{\\dimensionbar}[1]{\\underset{#1}{\\operatorname{|}}}\n\\newcommand{\\grad}{\\mathbf{\\nabla}}\n\\newcommand{\\ebx}[1]{e^{\\betav_{#1}^T \\xv_n}}\n\\newcommand{\\eby}[1]{e^{y_{n,#1}}}\n\\newcommand{\\Tiv}{\\mathbf{Ti}}\n\\newcommand{\\Fv}{\\mathbf{F}}\n\\newcommand{\\ones}[1]{\\mathbf{1}_{#1}}$\nLinear Dimensionality Reduction\nPrincipal Components Analysis (PCA)\nPrincipal Components Analysis (PCA) is a way to find and use directions in the data space along which data samples vary the most.\nAssume samples have $D$ attributes, meaning each sample is\n$D$-dimensional. We want to project each sample to a smaller space,\nof dimension $M$ such that the variance of the projected data has\nmaximum variance.\nLet's assume each sample $\\xv_n$ has zero mean. For $M=1$, we want\nthe direction vector (unit length) $\\uv_1$ that maximizes the variance\nof each projected sample. This variance is\n$$\n\\frac{1}{N} \\sum_{n=1}^N (\\uv_1^T \\xv_n)^2 = \\uv_1^T \\Sv \\uv_1\n$$\nwhere\n$$\n\\Sv = \\frac{1}{N} \\sum_{n=1}^N \\xv_n \\xv_n^T\n$$\nTo maximize $\\uv_1^T \\Sv \\uv_1$ in a non-trivial way, we constrain\n$\\uv_1^T \\uv_1 = 1$. This constraint is added with a Lagrange\nmultipler so that we want $\\uv_1$ that maximizes\n$$\n \\uv_1^T \\Sv \\uv_1+ \\lambda_1(1-\\uv_1^T \\uv_1)\n$$\nSetting the derivative of this with respect to $\\uv_1$ to zero we find\nthat\n$$\n\\Sv \\uv_1 = \\lambda_1 \\uv_1\n$$\nso $\\uv_1$ is an eigenvector of $\\Sv$ and $\\lambda_1$ is an eigenvalue\nthat is the variance of the projected samples.\nAdditional directions, all orthogonal to each other, are found by the\neigendecomposition of $\\Sv$, or, equivalently, the singular value\ndecomposition of data sample matrix $\\Xv$ with mean zero.\n$$\n\\Uv \\Sigmav \\Vv^T = \\Xv\n$$\nThe columns of $\\Vv$ are the eigenvectors of $\\Sv$ and the elements of the\ndiagonal matrix $\\Sigmav$ are the square root of the eigenvalues.\nX = X - np.mean(X,axis=0)\nU,s,V = np.linalg.svd(X)\nV = V.T\n\nThen, to project onto the eigenvectors, just\nproj = np.dot(X,V)\n\nLet's generate some two-dimensional samples from a Normal distribution\nwith mean [0,4] and covariance matrix \n$\\Sigma=\\begin{bmatrix} 0.9 & 0.8\\ 0.8 & 0.9 \\end{bmatrix}$. Then we\nwill calculate the svd of the samples and project the samples to the\ntwo eigenvectors.",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef mvNormalRand(n,mean,sigma): \n mean = np.array(mean) # to allow entry of values as simple lists \n sigma = np.array(sigma) \n X = np.random.normal(0,1,n*len(mean)).reshape((n,len(mean))) \n return np.dot(X, np.linalg.cholesky(sigma)) + mean \n\nN = 200\ndata = mvNormalRand(N,[0,4],[[0.9,0.8],[0.8,0.9]])\ndata.shape\n\nmeans = np.mean(data,axis=0)\ndatan = data - means\n\nU,S,V = np.linalg.svd(datan)\nV = V.T\nV.shape\n\nV\n\ndef drawline(v,means,len,color,label):\n p1 = means - v*len/2\n p2 = means + v*len/2\n plt.plot([p1[0],p2[0]],[p1[1],p2[1]],label=label,color=color,linewidth=2)\n\n\ndef plotOriginalAndTransformed(data,V):\n plt.figure(figsize=(10,5))\n plt.subplot(1,2,1)\n plt.plot(data[:,0],data[:,1],'.')\n means = np.mean(data,axis=0)\n drawline(V[:,0],means,8,\"red\",\"First\")\n drawline(V[:,1],means,8,\"green\",\"Second\")\n leg = plt.legend()\n plt.axis('equal')\n plt.gca().set_aspect('equal')\n\n\n plt.subplot(1,2,2) \n proj = np.dot(data - means, V)\n plt.plot(proj[:,0],proj[:,1],'.')\n plt.axis('equal')\n plt.gca().set_aspect('equal')\n plt.xlabel(\"First\")\n plt.ylabel(\"Second\")\n plt.title(\"Projected to First and Second Singular Vectors\");\n \nplotOriginalAndTransformed(data,V)",
"Now, if we have two classes of data, we might be able to classify the\ndata well with just the projection onto just one eigenvector. Could\nbe either eigenvector.\nFirst, with second class having mean [-5,3] and \n$\\Sigma=\\begin{bmatrix} 0.9 & 0.8\\ -0.8 & 0.9 \\end{bmatrix}$.",
"N = 200\ndata1 = mvNormalRand(N,[0,4],[[0.9,0.8],[0.8,0.9]])\ndata2 = mvNormalRand(N,[-5,3],[[0.9,0.8],[-0.8,0.9]])\ndata = np.vstack((data1,data2))\n\nmeans = np.mean(data,axis=0)\n\nU,S,V = np.linalg.svd(data-means)\nV = V.T\n\nplotOriginalAndTransformed(data,V)",
"And again, with first class \n$\\Sigma=\\begin{bmatrix} 0.9 & 0.2\\ 0.2 & 20 \\end{bmatrix}$\nand second class having\n$\\Sigma=\\begin{bmatrix} 0.9 & 0.2\\ -0.2 & 20 \\end{bmatrix}$.",
"N = 200\ndata1 = mvNormalRand(N,[0,4],[[0.9,0.2],[0.2,0.9]])\ndata2 = mvNormalRand(N,[-5,3],[[0.9,0.2],[-0.2,20]])\ndata = np.vstack((data1,data2))\n\nmeans = np.mean(data,axis=0)\n\nU,S,V = np.linalg.svd(data - means)\nV = V.T\n\nplotOriginalAndTransformed(data,V)",
"Sammon Mapping\nIntroductions to Sammon Mapping are found at\n * Sammon Mapping in Wikipedia\n * Sammon Mapping, by Paul Henderson\nA Sammon Mapping is one that maps each data sample $d_i$ to a location in two dimensions, $p_i$, such that distances between pairs of points are preserved. The objective defined by Sammon is to minimize the squared difference in distances between pairs of data points and their projections through the use of an objective function like\n$$\n\\sum_{i=1}^{N-1} \\sum_{j=i+1}^N \\left (\\frac{||d_i - d_j||}{s} - ||p_i - p_j|| \\right )^2\n$$\nThe typical Sammon Mapping algorithm does a gradient descent on this function by adjusting all of the two-dimensional points $p_{ij}$. Each iteration requires computing all pairwise distances. \nOne way to decrease this amount of work is to just work with a subset of points, perhaps picked randomly. To display all points, we just find an explicit mapping (function) that projects a data sample to a two-dimensional point. Let's call this $f$, so $f(d_i) = p_i$. For now, let's just use a linear function for $f$, so \n$$\nf(d_i) = d_i^T \\theta\n$$\nwhere $\\theta$ is a $D\\times 2$ matrix of coefficients. \nTo do this in python, let's start with calculating all pairwise distances. Let $X$ be our $N\\times D$ matrix of data samples, one per row. We can use a list comprehension to calculate the distance between each row in $X$ and each of the rows following that row.",
"X = np.array([ [0,1], [4,5], [10,20]])\nX\n\nN = X.shape[0] # number of rows\n[(i,j) for i in range(N-1) for j in range(i+1,N)]\n\n[X[i,:] - X[j,:] for i in range(N-1) for j in range(i+1,N)]\n\nnp.array([X[i,:] - X[j,:] for i in range(N-1) for j in range(i+1,N)])",
"To convert these differences to distances, just",
"diffs = np.array([X[i,:] - X[j,:] for i in range(N-1) for j in range(i+1,N)])\nnp.sqrt(np.sum(diffs*diffs, axis=1))",
"And to calculate the projection, a call to np.dot is all that is needed. Let's make a function to do the projection, and one to convert differences to distances.",
"def diffToDist(dX):\n return np.sqrt(np.sum(dX*dX, axis=1))\n\ndef proj(X,theta):\n return np.dot(X,theta)\n\ndiffToDist(diffs)\n\nproj(X,np.array([[1,0.2],[0.3,0.8]]))",
"Now, to follow the negative gradient of the objective function, we need its gradient, with respect to $\\theta$. With a little work, you can derive it to find\n$$\n\\begin{align}\n\\nabla_\\theta &= \\frac{1}{2} \\sum_{i=1}^{N-1} \\sum_{j=i+1}^N \\left (\\frac{||d_i - d_j||}{s} - ||p_i - p_j|| \\right )^2 \\\n &= 2 \\frac{1}{2} \\sum_{i=1}^{N-1} \\sum_{j=i+1}^N \\left (\\frac{||d_i - d_j||}{s} - ||f(d_i;\\theta) - f(d_j;\\theta)|| \\right ) (-1) \\nabla_\\theta ||f(d_i;\\theta) - f(d_j;\\theta)||\\\n &= - \\sum_{i=1}^{N-1} \\sum_{j=i+1}^N \\left (\\frac{||d_i - d_j||}{s} - ||f(d_i;\\theta) - f(d_j;\\theta)|| \\right ) \\frac{(d_i-d_j)^T (p_i - p_j)}{||p_i - p_j||} \n \\end{align}\n$$\nSo, we need to keep the differences around, in addition to the distances. First, let's write a function for the objective function, so we can monitor it, to make sure we are decrease it with each iteration. Let's multiply by $1/N$ so the values we get don't grow huge with large $N$.",
"def objective(X,proj,theta,s):\n N = X.shape[0]\n P = proj(X,theta)\n dX = np.array([X[i,:] - X[j,:] for i in range(N-1) for j in range(i+1,N)])\n dP = np.array([P[i,:] - P[j,:] for i in range(N-1) for j in range(i+1,N)])\n return 1/N * np.sum( (diffToDist(dX)/s - diffToDist(dP))**2)",
"Now for the gradient\n$$\n\\begin{align}\n\\nabla_\\theta &= - \\sum_{i=1}^{N-1} \\sum_{j=i+1}^N \\left (\\frac{||d_i - d_j||}{s} - ||f(d_i;\\theta) - f(d_j;\\theta)|| \\right ) \\frac{(d_i-d_j)^T (p_i - p_j)}{||p_i - p_j||}\n \\end{align}\n$$",
"def gradient(X,proj,theta,s):\n N = X.shape[0]\n P = proj(X,theta)\n dX = np.array([X[i,:] - X[j,:] for i in range(N-1) for j in range(i+1,N)])\n dP = np.array([P[i,:] - P[j,:] for i in range(N-1) for j in range(i+1,N)])\n distX = diffToDist(dX)\n distP = diffToDist(dP)\n return -1/N * np.dot((((distX/s - distP) / distP).reshape((-1,1)) * dX).T, dP)",
"This last line has the potential for dividing by zero! Let's avoid this, in a very ad-hoc manner, by replacing zeros in distP by its smallest nonzero value",
"def gradient(X,proj,theta,s):\n N = X.shape[0]\n P = proj(X,theta)\n dX = np.array([X[i,:] - X[j,:] for i in range(N-1) for j in range(i+1,N)])\n dP = np.array([P[i,:] - P[j,:] for i in range(N-1) for j in range(i+1,N)])\n distX = diffToDist(dX)\n distP = diffToDist(dP)\n minimumNonzero = np.min(distP[distP>0])\n distP[distP==0] = minimumNonzero\n return -1/N * np.dot((((distX/s - distP) / distP).reshape((-1,1)) * dX).T, dP)\n\nn = 8\nX = np.random.multivariate_normal([2,3], 0.5*np.eye(2), n)\nX = np.vstack((X,\n np.random.multivariate_normal([1,-1], 0.2*np.eye(2), n)))\nX = X - np.mean(X,axis=0)\ns = 0.5 * np.sqrt(np.max(np.var(X,axis=0)))\nprint('s',s)\n\n# theta = np.random.uniform(-1,1,(2,2))\n# theta = np.eye(2) + np.random.uniform(-0.1,0.1,(2,2))\nu,svalues,v = np.linalg.svd(X)\nv = v.T\ntheta = v[:,:2]\n\nnIterations = 10\nvals = []\nfor i in range(nIterations):\n theta -= 0.001 * gradient(X,proj,theta,s)\n v = objective(X,proj,theta,s)\n vals.append(v)\n\n# print('X\\n',X)\n# print('P\\n',proj(X,theta))\nprint('theta\\n',theta)\nplt.figure(figsize=(10,15))\nplt.subplot(3,1,(1,2))\nP = proj(X,theta)\nmn = 1.1*np.min(X)\nmx = 1.1*np.max(X)\nplt.axis([mn,mx,mn,mx])\n#strings = [chr(ord('a')+i) for i in range(X.shape[0])]\nstrings = [i for i in range(X.shape[0])]\nfor i in range(X.shape[0]):\n plt.text(X[i,0],X[i,1],strings[i],color='black',size=15)\nfor i in range(P.shape[0]):\n plt.text(P[i,0],P[i,1],strings[i],color='green',size=15)\nplt.title('2D data, Originals in black')\n\nplt.subplot(3,1,3)\nplt.plot(vals)\nplt.ylabel('Objective Function');",
"Let's watch the mapping develop. One way to do this is to save the values of $\\theta$ after each iteration, then use interact to step through the interations.",
"from IPython.html.widgets import interact\n\nn = 10\nX = np.random.multivariate_normal([2,3], 0.5*np.eye(2), n)\nX = np.vstack((X,\n np.random.multivariate_normal([1,-1], 0.2*np.eye(2), n)))\nX = X - np.mean(X,axis=0)\ns = 0.5 * np.sqrt(np.max(np.var(X,axis=0)))\nprint('s',s)\n\nu,svalues,v = np.linalg.svd(X)\nV = v.T\ntheta = V[:,:2]\n\ntheta = (np.random.uniform(size=((2,2)))-0.5) * 10\n\nthetas = [theta] # store all theta values\n\nnIterations = 200\nvals = []\nfor i in range(nIterations):\n theta = theta - 0.02 * gradient(X,proj,theta,s)\n v = objective(X,proj,theta,s)\n thetas.append(theta.copy())\n vals.append(v)\n\n\nmn = 1.5*np.min(X)\nmx = 1.5*np.max(X)\n\nstrings = [i for i in range(X.shape[0])]\n\n@interact(i=(0,nIterations-1,1))\ndef plotIteration(i):\n #plt.cla()\n plt.figure(figsize=(8,10))\n theta = thetas[i]\n val = vals[i]\n P = proj(X,theta)\n plt.axis([mn,mx,mn,mx])\n for i in range(X.shape[0]):\n plt.text(X[i,0],X[i,1],strings[i],color='black',size=15) \n for i in range(P.shape[0]):\n plt.text(P[i,0],P[i,1],strings[i],color='red',size=15) \n plt.title('2D data, Originals in black. Objective = ' + str(val))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |