repo_name: stringlengths (6-77)
path: stringlengths (8-215)
license: stringclasses (15 values)
cells: sequence
types: sequence
GoogleCloudPlatform/training-data-analyst
quests/serverlessml/02_bqml/solution/first_model.ipynb
apache-2.0
[ "First BigQuery ML models for Taxifare Prediction\nIn this notebook, we will use BigQuery ML to build our first models for taxifare prediction.BigQuery ML provides a fast way to build ML models on large structured and semi-structured datasets.\nLearning Objectives\n\nChoose the correct BigQuery ML model type and specify options\nEvaluate the performance of your ML model\nImprove model performance through data quality cleanup\nCreate a Deep Neural Network (DNN) using SQL\n\nEach learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. \nWe'll start by creating a dataset to hold all the models we create in BigQuery", "%%bash\nexport PROJECT=$(gcloud config list project --format \"value(core.project)\")\necho \"Your current GCP Project Name is: \"$PROJECT\n\n%%bash\npip install tensorflow==2.6.0 --user", "Let's make sure we install the necessary version of tensorflow. After doing the pip install above, click Restart the kernel on the notebook so that the Python environment picks up the new packages.", "import os\n\nPROJECT = \"qwiklabs-gcp-bdc77450c97b4bf6\" # REPLACE WITH YOUR PROJECT NAME\nREGION = \"us-central1\" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1\nimport tensorflow as tf\n\nprint(\"TensorFlow version: \",tf.version.VERSION)\n\n# Do not change these\nos.environ[\"PROJECT\"] = PROJECT\nos.environ[\"REGION\"] = REGION\nos.environ[\"BUCKET\"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID\n\nif PROJECT == \"your-gcp-project-here\":\n print(\"Don't forget to update your PROJECT name! Currently:\", PROJECT)", "Create a BigQuery Dataset and Google Cloud Storage Bucket\nA BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called serverlessml if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.", "%%bash\n\n## Create a BigQuery dataset for serverlessml if it doesn't exist\ndatasetexists=$(bq ls -d | grep -w serverlessml)\n\nif [ -n \"$datasetexists\" ]; then\n echo -e \"BigQuery dataset already exists, let's not recreate it.\"\n\nelse\n echo \"Creating BigQuery dataset titled: serverlessml\"\n \n bq --location=US mk --dataset \\\n --description 'Taxi Fare' \\\n $PROJECT:serverlessml\n echo \"\\nHere are your current datasets:\"\n bq ls\nfi \n \n## Create GCS bucket if it doesn't exist already...\nexists=$(gsutil ls -d | grep -w gs://${PROJECT}/)\n\nif [ -n \"$exists\" ]; then\n echo -e \"Bucket exists, let's not recreate it.\"\n \nelse\n echo \"Creating a new GCS bucket.\"\n gsutil mb -l ${REGION} gs://${PROJECT}\n echo \"\\nHere are your current buckets:\"\n gsutil ls\nfi", "Model 1: Raw data\nLet's build a model using just the raw data. It's not going to be very good, but sometimes it is good to actually experience this.\nThe model will take a minute or so to train. When it comes to ML, this is blazing fast.", "%%bigquery\nCREATE OR REPLACE MODEL serverlessml.model1_rawdata\nOPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg') AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers\nFROM `nyc-tlc.yellow.trips`\nWHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1", "Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. 
Then, come back to this notebook.\nNote that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. We can look at eval statistics on that held-out data:", "%%bigquery\nSELECT * FROM ML.EVALUATE(MODEL serverlessml.model1_rawdata)", "Let's report just the error we care about, the Root Mean Squared Error (RMSE)", "%%bigquery\nSELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model1_rawdata)", "We told you it was not going to be good! Recall that our heuristic got 8.13, and our target is $6.\nNote that the error is going to depend on the dataset that we evaluate it on.\nWe can also evaluate the model on our own held-out benchmark/test dataset, but we shouldn't make a habit of this (we want to keep our benchmark dataset as the final evaluation, not make decisions using it all along the way. If we do that, our test dataset won't be truly independent).", "%%bigquery\nSELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model1_rawdata,\n(\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers\nFROM `nyc-tlc.yellow.trips`\nWHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 2\nAND\n trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n))", "Model 2: Apply data cleanup\nRecall that we did some data cleanup in the previous lab. Let's do those before training.\nThis is a dataset that we will need quite frequently in this notebook, so let's extract it first.", "%%bigquery\nCREATE OR REPLACE TABLE serverlessml.cleaned_training_data AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers\nFROM `nyc-tlc.yellow.trips`\nWHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1\nAND\n trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n\n%%bigquery\n-- LIMIT 0 is a free query; this allows us to check that the table exists.\nSELECT * FROM serverlessml.cleaned_training_data\nLIMIT 0\n\n%%bigquery\nCREATE OR REPLACE MODEL serverlessml.model2_cleanup\nOPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg') AS\n\nSELECT\n*\nFROM\nserverlessml.cleaned_training_data\n\n%%bigquery\nSELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model2_cleanup)", "Model 3: More sophisticated models\nWhat if we try a more sophisticated model? Let's try Deep Neural Networks (DNNs) in BigQuery:\nDNN\nTo create a DNN, simply specify dnn_regressor for the model_type and add your hidden layers.", "%%bigquery\n-- This model type is in alpha, so it may not work for you yet. 
This training takes on the order of 15 minutes.\nCREATE OR REPLACE MODEL serverlessml.model3b_dnn\nOPTIONS(input_label_cols=['fare_amount'], model_type='dnn_regressor', hidden_units=[32, 8]) AS\n\nSELECT\n*\nFROM\nserverlessml.cleaned_training_data\n\n%%bigquery\nSELECT SQRT(mean_squared_error) AS rmse FROM ML.EVALUATE(MODEL serverlessml.model3b_dnn)", "Nice!\nEvaluate DNN on benchmark dataset\nLet's use the same validation dataset to evaluate -- remember that evaluation metrics depend on the dataset. You can not compare two models unless you have run them on the same withheld data.", "%%bigquery\nSELECT SQRT(mean_squared_error) AS rmse \nFROM ML.EVALUATE(MODEL serverlessml.model3b_dnn,\n(\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_datetime,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers,\n 'unused' AS key\nFROM `nyc-tlc.yellow.trips`\nWHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2\nAND\n trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n))", "Wow! Later in this sequence of notebooks, we will get to below $4, but this is quite good, for very little work.\nIn this notebook, we showed you how to use BigQuery ML to quickly build ML models. We will come back to BigQuery ML when we want to experiment with different types of feature engineering. The speed of BigQuery ML is very attractive for development.\nCopyright 2022 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Jim00000/Numerical-Analysis
5_Numerical_Differentiation_And_Integration.ipynb
unlicense
[ "★ Numerical Differentiaion and Integration ★", "# Import modules\nimport math\nimport sympy as sym\nimport numpy as np\nimport scipy \nimport matplotlib.pyplot as plt\nimport plotly\nimport plotly.plotly as ply\nimport plotly.figure_factory as ply_ff\nfrom IPython.display import Math\nfrom IPython.display import display\n\n# Startup plotly\nplotly.offline.init_notebook_mode(connected=True)\n\n''' Fix MathJax issue '''\n# The polling here is to ensure that plotly.js has already been loaded before\n# setting display alignment in order to avoid a race condition.\nfrom IPython.core.display import display, HTML\ndisplay(HTML(\n '<script>'\n 'var waitForPlotly = setInterval( function() {'\n 'if( typeof(window.Plotly) !== \"undefined\" ){'\n 'MathJax.Hub.Config({ SVG: { font: \"STIX-Web\" }, displayAlign: \"center\" });'\n 'MathJax.Hub.Queue([\"setRenderer\", MathJax.Hub, \"SVG\"]);'\n 'clearInterval(waitForPlotly);'\n '}}, 250 );'\n '</script>'\n))", "5.1 Numerical Differentiation\nTwo-point forward-difference formula\n$f'(x) = \\frac{f(x+h) - f(x)}{h} - \\frac{h}{2}f''(c)$\nwhere $c$ is between $x$ and $x+h$\nExample\nUse the two-point forward-difference formula with $h = 0.1$ to approximate the derivative of $f(x) = 1/x$ at $x = 2$", "# Parameters\nx = 2\nh = 0.1\n\n# Symbolic computation\nsym_x = sym.Symbol('x')\nsym_deri_x1 = sym.diff(1 / sym_x, sym_x)\nsym_deri_x1_num = sym_deri_x1.subs(sym_x, x).evalf()\n\n# Approximation\nf = lambda x : 1 / x\nderi_x1 = (f(x + h) - f(x)) / h\n\n# Comparison\nprint('approximate = %f, real value = %f, backward error = %f' %(deri_x1, sym_deri_x1_num, abs(deri_x1 - sym_deri_x1_num)) )", "Three-point centered-difference formula\n$f'(x) = \\frac{f(x+h) - f(x-h)}{2h} - \\frac{h^2}{6}f'''(c)$\nwhere $x-h < c < x+h$\nExample\nUse the three-point centered-difference formula with $h = 0.1$ to approximate the derivative of $f(x) = 1 / x$ at $x = 2$", "# Parameters\nx = 2\nh = 0.1\nf = lambda x : 1 / x\n\n# Symbolic computation\nsym_x = sym.Symbol('x')\nsym_deri_x1 = sym.diff(1 / sym_x, sym_x)\nsym_deri_x1_num = sym_deri_x1.subs(sym_x, x).evalf()\n\n# Approximation\nderi_x1 = (f(x + h) - f(x - h)) / (2 * h)\n\n# Comparison\nprint('approximate = %f, real value = %f, backward error = %f' %(deri_x1, sym_deri_x1_num, abs(deri_x1 - sym_deri_x1_num)) )", "Three-point centered-difference formula for second derivative\n$f''(x) = \\frac{f(x - h) - 2f(x) + f(x + h)}{h^2} - \\frac{h^2}{12}f^{(iv)}(c)$\nfor some $c$ between $x - h$ and $x + h$\nRounding error\nExample\nApproximate the derivative of $f(x) = e^x$ at $x = 0$", "# Parameters\nf = lambda x : math.exp(x)\nreal_value = 1\nh_msg = \"$10^{-%d}$\"\ntwp_deri_x1 = lambda x, h : ( f(x + h) - f(x) ) / h\nthp_deri_x1 = lambda x, h : ( f(x + h) - f(x - h) ) / (2 * h)\n\ndata = [\n [\"h\", \n \"$f'(x) \\\\approx \\\\frac{e^{x+h} - e^x}{h}$\", \n \"error\", \n \"$f'(x) \\\\approx \\\\frac{e^{x+h} - e^{x-h}}{2h}$\", \n \"error\"],\n]\n\nfor i in range(1,10):\n h = pow(10, -i)\n twp_deri_x1_value = twp_deri_x1(0, h) \n thp_deri_x1_value = thp_deri_x1(0, h)\n row = [\"\", \"\", \"\", \"\", \"\"]\n row[0] = h_msg %i\n row[1] = '%.14f' %twp_deri_x1_value\n row[2] = '%.14f' %abs(twp_deri_x1_value - real_value)\n row[3] = '%.14f' %thp_deri_x1_value\n row[4] = '%.14f' %abs(thp_deri_x1_value - real_value)\n data.append(row)\n\ntable = ply_ff.create_table(data)\nplotly.offline.iplot(table, show_link=False)", "Extrapolation for order n formula\n$ Q \\approx \\frac{2^nF(h/2) - F(h)}{2^n - 1} $", "sym.init_printing(use_latex=True)\n\nx = 
sym.Symbol('x')\ndx = sym.diff(sym.exp(sym.sin(x)), x)\nMath('Derivative : %s' %sym.latex(dx) )", "5.2 Newton-Cotes Formulas For Numerical Integration\nTrapezoid Rule\n$\\int_{x_0}^{x_1} f(x) dx = \\frac{h}{2}(y_0 + y_1) - \\frac{h^3}{12}f''(c)$\nwhere $h = x_1 - x_0$ and $c$ is between $x_0$ and $x_1$\nSimpson's Rule\n$\\int_{x_0}^{x_2} f(x) dx = \\frac{h}{3}(y_0 + 4y_1 + y_2) - \\frac{h^5}{90}f^{(iv)}(c)$\nwhere $h = x_2 - x_1 = x_1 - x_0$ and $c$ is between $x_0$ and $x_2$\nExample\nApply the Trapezoid Rule and Simpson's Rule to approximate $\\int_{1}^{2} \\ln(x) dx$ and find an upper bound for the error in your approximations", "# Apply Trapezoid Rule\ntrapz = scipy.integrate.trapz([np.log(1), np.log(2)], [1, 2])\n\n# Evaluate the error term of Trapezoid Rule\nsym_x = sym.Symbol('x')\nexpr = sym.diff(sym.log(sym_x), sym_x, 2)\ntrapz_err = abs(expr.subs(sym_x, 1).evalf() / 12)\n\n# Print out results\nprint('Trapezoid rule : %f and upper bound error : %f' %(trapz, trapz_err) )\n\n# Apply Simpson's Rule\narea = scipy.integrate.simps([np.log(1), np.log(1.5), np.log(2)], [1, 1.5, 2])\n\n# Evaluate the error term\nsym_x = sym.Symbol('x')\nexpr = sym.diff(sym.log(sym_x), sym_x, 4)\nsimps_err = abs( pow(0.5, 5) / 90 * expr.subs(sym_x, 1).evalf() )\n\n# Print out results\nprint('Simpson\\'s rule : %f and upper bound error : %f' %(area, simps_err) )", "Composite Trapezoid Rule\n$\\int_{a}^{b} f(x) dx = \\frac{h}{2} \\left ( y_0 + y_m + 2\\sum_{i=1}^{m-1}y_i \\right ) - \\frac{(b-a)h^2}{12}f''(c)$\nwhere $h = (b - a) / m $ and $c$ is between $a$ and $b$\nComposite Simpson's Rule\n$ \\int_{a}^{b}f(x)dx = \\frac{h}{3}\\left [ y_0 + y_{2m} + 4\\sum_{i=1}^{m}y_{2i-1} + 2\\sum_{i=1}^{m - 1}y_{2i} \\right ] - \\frac{(b-a)h^4}{180}f^{(iv)}(c) $\nwhere $c$ is between $a$ and $b$\nExample\nCarry out four-panel approximations of $\\int_{1}^{2} \\ln{x} dx$ using the composite Trapezoid Rule and composite Simpson's Rule", "# Apply composite Trapezoid Rule\nx = np.linspace(1, 2, 5)\ny = np.log(x)\ntrapz = scipy.integrate.trapz(y, x)\n\n# Error term\nsym_x = sym.Symbol('x')\nexpr = sym.diff(sym.log(sym_x), sym_x, 2)\ntrapz_err = abs( (2 - 1) * pow(0.25, 2) / 12 * expr.subs(sym_x, 1).evalf() )\n\nprint('Trapezoid Rule : %f, error = %f' %(trapz, trapz_err) )\n\n# Apply composite Trapezoid Rule\nx = np.linspace(1, 2, 9)\ny = np.log(x)\narea = scipy.integrate.simps(y, x)\n\n# Error term\nsym_x = sym.Symbol('x')\nexpr = sym.diff(sym.log(sym_x), sym_x, 4)\nsimps_err = abs( (2 - 1) * pow(0.125, 4) / 180 * expr.subs(sym_x, 1).evalf() )\n\nprint('Simpson\\'s Rule : %f, error = %f' %(area, simps_err) )", "Midpoint Rule\n$ \\int_{x_0}^{x_1} f(x)dx = hf(\\omega) + \\frac{h^3}{24}f''(c) $\nwhere $ h = (x_1 - x_0) $, $\\omega$ is the midpoint $ x_0 + h / 2 $, and $c$ is between $x_0$ and $x_1$\nComposite Midpoint Rule\n$ \\int_{a}^{b} f(x) dx = h \\sum_{i=1}^{m}f(\\omega_{i}) + \\frac{(b - a)h^2}{24} f''(c) $\nwhere $h = (b - a) / m$ and $c$ is between $a$ and $b$. 
The $\\omega_{i}$ are the midpoints of the $m$ equal subintervals of $[a,b]$\nExample\nApproximate $\\int_{0}^{1} \\frac{\\sin x}{x} dx$ by using the composite Midpoint Rule with $m = 10$ panels", "# Parameters\nm = 10\nh = (1 - 0) / m\nf = lambda x : np.sin(x) / x\nmids = np.arange(0 + h/2, 1, h)\n\n# Apply composite midpoint rule\narea = h * np.sum(f(mids))\n\n# Error term\nsym_x = sym.Symbol('x')\nexpr = sym.diff(sym.sin(sym_x) / sym_x, sym_x, 2)\nmid_err = abs( (1 - 0) * pow(h, 2) / 24 * expr.subs(sym_x, 1).evalf() )\n\n# Print out\nprint('Composite Midpoint Rule : %.8f, error = %.8f' %(area, mid_err) )", "5.3 Romberg Integration", "def romberg(f, a, b, step):\n R = np.zeros(step * step).reshape(step, step)\n R[0][0] = (b - a) * (f(a) + f(b)) / 2\n for j in range(1, step):\n h = (b - a) / pow(2, j)\n summ = 0\n for i in range(1, pow(2, j - 1) + 1):\n summ += h * f(a + (2 * i - 1) * h)\n R[j][0] = 0.5 * R[j - 1][0] + summ\n \n for k in range(1, j + 1):\n R[j][k] = ( pow(4, k) * R[j][k - 1] - R[j - 1][k - 1] ) / ( pow(4, k) - 1 )\n \n return R[step - 1][step - 1]", "Example\nApply Romberg Integration to approximate $\\int_{1}^{2} \\ln{x}dx$", "f = lambda x : np.log(x)\nresult = romberg(f, 1, 2, 4)\nprint('Romberg Integration : %f' %(result) )\n\nf = lambda x : np.log(x)\nresult = scipy.integrate.romberg(f, 1, 2, show=True)\nprint('Romberg Integration : %f' %(result) )", "5.4 Adaptive Quadrature", "''' Use Trapezoid Rule '''\n\ndef adaptive_quadrature(f, a, b, tol):\n return adaptive_quadrature_recursively(f, a, b, tol, a, b, 0)\n \ndef adaptive_quadrature_recursively(f, a, b, tol, orig_a, orig_b, deep):\n c = (a + b) / 2\n S = lambda x, y : (y - x) * (f(x) + f(y)) / 2\n if abs( S(a, b) - S(a, c) - S(c, b) ) < 3 * tol * (b - a) / (orig_b - orig_a) or deep > 20 :\n return S(a, c) + S(c, b)\n else:\n return adaptive_quadrature_recursively(f, a, c, tol / 2, orig_a, orig_b, deep + 1) + adaptive_quadrature_recursively(f, c, b, tol / 2, orig_a, orig_b, deep + 1)\n\n''' Use Simpon's Rule '''\n\ndef adaptive_quadrature(f, a, b, tol):\n return adaptive_quadrature_recursively(f, a, b, tol, a, b, 0)\n \ndef adaptive_quadrature_recursively(f, a, b, tol, orig_a, orig_b, deep):\n c = (a + b) / 2\n S = lambda x, y : (y - x) * ( f(x) + 4 * f((x + y) / 2) + f(y) ) / 6\n if abs( S(a, b) - S(a, c) - S(c, b) ) < 15 * tol or deep > 20 :\n return S(a, c) + S(c, b)\n else:\n return adaptive_quadrature_recursively(f, a, c, tol / 2, orig_a, orig_b, deep + 1) + adaptive_quadrature_recursively(f, c, b, tol / 2, orig_a, orig_b, deep + 1)", "Example\nUse Adaptive Quadrature to approximate the integral $ \\int_{-1}^{1} (1 + \\sin{e^{3x}}) dx $", "f = lambda x : 1 + np.sin(np.exp(3 * x))\nval = adaptive_quadrature(f, -1, 1, tol=1e-12)\nprint(val)", "5.5 Gaussian Quadrature", "poly = scipy.special.legendre(2)\n# Find roots of polynomials\ncomp = scipy.linalg.companion(poly)\nroots = scipy.linalg.eig(comp)[0]", "Example\nApproximate $\\int_{-1}^{1} e^{-\\frac{x^2}{2}}dx$ using Gaussian Quadrature", "f = lambda x : np.exp(-np.power(x, 2) / 2)\nquad = scipy.integrate.quadrature(f, -1, 1)\nprint(quad[0])\n\n# Parametes\na = -1\nb = 1\ndeg = 3\nf = lambda x : np.exp( -np.power(x, 2) / 2 )\n\nx, w = scipy.special.p_roots(deg) # Or use numpy.polynomial.legendre.leggauss\nquad = np.sum(w * f(x))\n \nprint(quad)", "Example\nApproximate the integral $\\int_{1}^{2} \\ln{x} dx$ using Gaussian Quadrature", "# Parametes\na = 1\nb = 2\ndeg = 4\nf = lambda t : np.log( ((b - a) * t + b + a) / 2) * (b - a) / 2\n\nx, w = 
scipy.special.p_roots(deg)\nnp.sum(w * f(x))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mF2C/COMPSs
tests/sources/python/9_jupyter_notebook/src/simple_mpi.ipynb
apache-2.0
[ "Test suite for Jupyter-notebook\nSample example of use of PyCOMPSs from Jupyter with mpi worker\nFirst step\nImport ipycompss library", "import pycompss.interactive as ipycompss", "Second step\nInitialize COMPSs runtime\nParameters indicates if the execution will generate task graph, tracefile, monitor interval and debug information. The parameter taskCount is a work around for the dot generation of the legend", "ipycompss.start(graph=True, trace=True, debug=True, project_xml='../project.xml', resources_xml='../resources.xml', mpi_worker=True)", "Third step\nImport task module before annotating functions or methods", "from pycompss.api.task import task", "Fourth step\nDeclare functions and decorate with @task those that should be tasks", "@task(returns=int)\ndef test(val1):\n return val1 * val1\n\n@task(returns=int)\ndef test2(val2, val3):\n return val2 + val3", "Fifth step\nInvoke tasks", "a = test(2)\n\nb = test2(a, 5)", "Sixt step\nImport compss_wait_on module and synchronize tasks", "from pycompss.api.api import compss_wait_on\n\nresult = compss_wait_on(b)", "Only those results being sychronized with compss_wait_on will have a valid value", "print(\"Results: \")\nprint(\"a: \", a)\nprint(\"b: \", b)\nprint(\"result: \", result)", "Stop COMPSs runtime. All data will be synchronized in the main program", "ipycompss.stop(sync=True)\n\nprint(\"Results after stopping PyCOMPSs: \")\nprint(\"a: \", a)\nprint(\"b: \", b)\nprint(\"result: \", result)", "CHECK THE RESULTS FOR THE TEST", "from pycompss.runtime.binding import Future\nif a == 4 and isinstance(b, Future) and result == 9:\n print(\"RESULT=EXPECTED\")\nelse:\n print(\"RESULT=UNEXPECTED\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
walkon302/CDIPS_Recommender
notebook_versions/Exploring_Data_v2.ipynb
apache-2.0
[ "Data Exploration", "import sys \nimport os\nsys.path.append(os.getcwd()+'/../')\n\n# other\nimport numpy as np\nimport glob\nimport pandas as pd\nimport ntpath\n\n#keras\nfrom keras.preprocessing import image\n\n# plotting\nimport seaborn as sns\nsns.set_style('white')\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# debuggin\nfrom IPython.core.debugger import Tracer\n\n#stats\nimport scipy.stats as stats\n\nimport bqplot.pyplot as bqplt", "Data File", "user_profile = pd.read_csv('../data_user_view_buy/user_profile.csv',sep='\\t',header=None)\n\n\nuser_profile.columns = ['user_id','buy_spu','buy_sn','buy_ct3','view_spu','view_sn','view_ct3','time_interval','view_cnt','view_seconds']\n\nstring =str(user_profile.buy_spu.as_matrix()[3002])\nprint(string)\nprint(string[0:7]+'-'+string[7::])\n#print(str(user_profile.buy_spu.as_matrix()[0])[7::])\n\nuser_profile.head(10)\n\nprint('n rows: {0}').format(len(user_profile))", "Plotting Functions", "def plot_trajectory_scatter(user_profile,scatter_color_col=None,samplesize=50,size=10,savedir=None):\n plt.figure(figsize=(12,1*samplesize/10))\n\n for ui,user_id in enumerate(np.random.choice(user_profile.user_id.unique(),samplesize)):\n trajectory = user_profile.loc[user_profile.user_id==user_id,]\n\n time = 0-trajectory.time_interval.as_matrix()/60.0/60.0/24.0\n \n # add image or not\n if scatter_color_col is not None:\n c = trajectory[scatter_color_col].as_matrix()\n else:\n c = np.ones(len(trajectory))\n \n plt.scatter(time,np.ones(len(time))*ui,s=size,c=c,edgecolors=\"none\",cmap=\"jet\")\n plt.axvline(x=0,linewidth=1)\n sns.despine()\n plt.title('example user trajectories')\n plt.xlabel('days to purchase')\n if savedir is not None:\n plt.savefig(savedir,dpi=100)", "Descriptions of Data", "user_profile.describe()\n\nprint('unique users:{0}').format(len(user_profile.user_id.unique()))\n\nprint('unique items viewed:{0}').format(len(user_profile.view_spu.unique()))\nprint('unique items bought:{0}').format(len(user_profile.buy_spu.unique()))\n\nprint('unique categories viewed:{0}').format(len(user_profile.view_ct3.unique()))\nprint('unique categories bought:{0}').format(len(user_profile.buy_ct3.unique()))\nprint('unique brands viewed:{0}').format(len(user_profile.view_sn.unique()))\nprint('unique brands bought:{0}').format(len(user_profile.buy_sn.unique()))\n\nsamplesize = 2000\nplt.figure(figsize=(12,4))\nplt.subplot(1,3,1)\nplt.hist(np.random.choice(user_profile.time_interval.as_matrix()/60.0/60.0,samplesize))\nsns.despine()\nplt.title('sample histogram from \"time interval\"')\nplt.xlabel('hours from view to buy')\nplt.ylabel('counts of items')\n\nplt.subplot(1,3,2)\nplt.hist(np.random.choice(user_profile.view_cnt.as_matrix(),samplesize))\nsns.despine()\nplt.title('sample histogram from \"view count\"')\nplt.xlabel('view counts')\nplt.ylabel('counts of items')\n\nplt.subplot(1,3,3)\nplt.hist(np.random.choice(user_profile.view_seconds.as_matrix(),samplesize))\nsns.despine()\nplt.title('sample histogram from \"view lengths\"')\nplt.xlabel('view lengths (seconds)')\nplt.ylabel('counts of items')", "there are many items that are viewed more than a day before buying\nmost items are viewed less than 10 times and for less than a couple minutes (though need to zoom in)", "print('longest time interval')\nprint(user_profile.time_interval.min())\n\nprint('longest time interval')\nprint(user_profile.time_interval.max()/60.0/60.0/24)\n", "longest span from viewing to buying is 6 days \n\nAverage Time for Items Viewed before Being Bought", 
"mean_time_interval = np.array([])\nsamplesize =1000\nfor user_id in np.random.choice(user_profile.user_id.unique(),samplesize):\n mean_time_interval = np.append(mean_time_interval, user_profile.loc[user_profile.user_id==user_id,'time_interval'].mean())\n \n\nplt.figure(figsize=(12,3))\nplt.hist(mean_time_interval/60.0,bins=200)\nsns.despine()\nplt.title('sample histogram of average length for user trajectories\"')\nplt.xlabel('minutes')\nplt.ylabel('counts of items out of '+str(samplesize))\n", "5% look like they have relatively short sessions (maybe within one sitting)", "plt.figure(figsize=(12,3))\nplt.hist(mean_time_interval/60.0,bins=1000)\nplt.xlim(0,100)\nsns.despine()\nplt.title('sample histogram of average length for user trajectories\"')\nplt.xlabel('minutes')\nplt.ylabel('counts of items out of '+str(samplesize))\n", "zooming in to look at the shortest sessions. \nabout 7% have sessions <10 minutes", "plt.figure(figsize=(8,3))\nplt.hist(mean_time_interval/60.0,bins=200,cumulative=True,normed=True)\nplt.xlim(0,2000)\nsns.despine()\nplt.title('sample cdf of average length for user trajectories\"')\nplt.xlabel('minutes')\nplt.ylabel('counts of items out of '+str(samplesize))\n", "20% has sessions less <100 minutes\n\nExample Trajectories", "user_id = 1606682799\ntrajectory = user_profile.loc[user_profile.user_id==user_id,]\ntrajectory= trajectory.sort_values(by='time_interval',ascending=False)\ntrajectory", "this is an example trajectory of someone who browsed a few items and then bought item 31.. within the same session.", "plot_trajectory_scatter(user_profile)", "here are 50 random subjects and when they view items (could make into an interactive plot)\n\nWhat's the distribution of items that are bought? Are there some items that are much more popular than others?", "samplesize =1000\nnumber_of_times_item_bought = np.empty(samplesize)\nnumber_of_times_item_viewed = np.empty(samplesize)\nfor ii,item_id in enumerate(np.random.choice(user_profile.view_spu.unique(),samplesize)):\n number_of_times_item_bought[ii] = len(user_profile.loc[user_profile.buy_spu==item_id,'user_id'].unique()) # assume the same user would not buy the same product \n number_of_times_item_viewed[ii] = len(user_profile.loc[user_profile.view_spu==item_id]) # same user can view the same image more than once for this count\n \n \n\nplt.figure(figsize=(12,4))\nplt.subplot(1,2,1)\nplt.bar(np.arange(len(number_of_times_item_bought)),number_of_times_item_bought)\nsns.despine()\nplt.title('item popularity (purchases)')\nplt.xlabel('item')\nplt.ylabel('# of times items were bought')\n\nplt.subplot(1,2,2)\nplt.hist(number_of_times_item_bought,bins=100)\nsns.despine()\nplt.title('item popularity (purchases)')\nplt.xlabel('# of times items were bought sample size='+str(samplesize))\nplt.ylabel('# of items')\n\nplt.figure(figsize=(12,4))\nplt.subplot(1,2,1)\nplt.bar(np.arange(len(number_of_times_item_viewed)),number_of_times_item_viewed)\nsns.despine()\nplt.title('item popularity (views)')\nplt.xlabel('item')\nplt.ylabel('# of times items were viewed')\n\nplt.subplot(1,2,2)\nplt.hist(number_of_times_item_bought,bins=100)\nsns.despine()\nplt.title('item popularity (views) sample size='+str(samplesize))\nplt.xlabel('# of times items were viewed')\nplt.ylabel('# of items')\n\nplt.figure(figsize=(6,4))\nplt.subplot(1,1,1)\nthresh =30\ninclude = number_of_times_item_bought<thresh\nplt.scatter(number_of_times_item_viewed[include],number_of_times_item_bought[include],)\n(r,p) = 
stats.pearsonr(number_of_times_item_viewed[include],number_of_times_item_bought[include])\nsns.despine()\nplt.xlabel('number of times viewed')\nplt.ylabel('number of times bought')\nplt.title('r='+str(np.round(r,2))+' data truncated buys<'+str(thresh))", "Items bought and viewed per user?", "samplesize =1000\nitems_bought_per_user = np.empty(samplesize)\nitems_viewed_per_user = np.empty(samplesize)\nfor ui,user_id in enumerate(np.random.choice(user_profile.user_id.unique(),samplesize)):\n items_bought_per_user[ui] = len(user_profile.loc[user_profile.user_id==user_id,'buy_spu'].unique())\n items_viewed_per_user[ui] = len(user_profile.loc[user_profile.user_id==user_id,'view_spu'].unique())\n \n\nplt.figure(figsize=(12,4))\nplt.subplot(1,2,1)\nplt.hist(items_bought_per_user)\nsns.despine()\nplt.title('number of items bought per user (sample of 1000)')\nplt.xlabel('# items bought')\nplt.ylabel('# users')\n\nplt.subplot(1,2,2)\nplt.hist(items_viewed_per_user)\nsns.despine()\nplt.title('number of items viewed per user (sample of 1000)')\nplt.xlabel('# items viewed')\nplt.ylabel('# users')", "How many times did the user buy an item he/she already looked at?\nImage URLs\nHow many of the SPUs in our dataset (smaller) have urls in our url.csv?", "urls = pd.read_csv('../../deep-learning-models-master/img/eval_img_url.csv',header=None)\nurls.columns = ['spu','url']\nprint(len(urls))\nurls.head(10)\n\n\nurls[['spu','url']].groupby(['spu']).agg(['count']).head()", "items with more than one url?", "urls.loc[urls.spu==357870273655002,'url'].as_matrix()\n\nurls.loc[urls.spu==357889732772303,'url'].as_matrix()", "these are the same item, just different images.", "#urls.loc[urls.spu==1016200950427238422,'url']\n\n\ntmp_urls = urls.loc[urls.spu==1016200950427238422,'url'].as_matrix()\ntmp_urls\n\nfrom urllib import urlretrieve\nimport time\n\n\n\n# scrape images \nfor i,tmp_url in enumerate(tmp_urls):\n urlretrieve(tmp_url, '../data_img_tmp/{}.jpg'.format(i))\n #time.sleep(3)\n\n# plot them. \nprint('two images from url with same spu (ugh)')\nplt.figure(figsize=(8,3))\nfor i,tmp_url in enumerate(tmp_urls):\n img_path= '../data_img_tmp/{}.jpg'.format(i)\n img = image.load_img(img_path, target_size=(224, 224))\n plt.subplot(1,len(tmp_urls),i+1)\n plt.imshow(img)\n plt.grid(b=False)", "These are different thought!!", "urls.spu[0]\n\nurls.url[0]", "the url contains the spu, but I'm not sure what the other numbers are. The goods_num? The category etc?", "view_spus = user_profile.view_spu.unique()\ncontained = 0\nspus_with_url = list(urls.spu.as_matrix())\nfor view_spu in view_spus: \n if view_spu in spus_with_url:\n contained+=1\nprint(contained/np.float(len(view_spus)))\n\nbuy_spus = user_profile.buy_spu.unique()\ncontained = 0\nspus_with_url = list(urls.spu.as_matrix())\nfor buy_spu in buy_spus: \n if buy_spu in spus_with_url:\n contained+=1\nprint(contained/np.float(len(buy_spus)))", "we only have the url for 7% of the bought items and 9% of the viewed items", "buy_spu in spus_with_url\n\nlen(urls.spu.unique())\nlen(user_profile.view_spu.unique())", "Are the images we have in this new dataset?\n\nat the moment, I don't know how to find the spu of the images we have. 
\n\nViewing DataSet with Feature Data in", "spu_fea = pd.read_pickle(\"../data_nn_features/spu_fea.pkl\") #takes forever to load \n\nspu_fea['view_spu']=spu_fea['spu_id']\n\nspu_fea['view_spu']=spu_fea['spu_id']\nuser_profile_w_features = user_profile.merge(spu_fea,on='view_spu',how='left')\nprint('before merge nrow: {0}').format(len(user_profile))\nprint('after merge nrows:{0}').format(len(user_profile_w_features))\n\nprint('number of items with features: {0}').format(len(spu_fea))\n\nspu_fea.head()\n\n# merge with userdata\nspu_fea['view_spu']=spu_fea['spu_id']\nuser_profile_w_features = user_profile.merge(spu_fea,on='view_spu',how='left')\nprint('before merge nrow: {0}').format(len(user_profile))\nprint('after merge nrows:{0}').format(len(user_profile_w_features))\n\nuser_profile_w_features['has_features']=user_profile_w_features.groupby(['view_spu'])['spu_id'].apply(lambda x: np.isnan(x))\n\nuser_profile_w_features.has_features= user_profile_w_features.has_features.astype('int')\n\nuser_profile_w_features.head()", "Plotting Trajectories and Seeing How many features we have", "plot_trajectory_scatter(user_profile_w_features,scatter_color_col='has_features',samplesize=100,size=10,savedir='../../test.png')", "What percent of rows have features?", "1-(user_profile_w_features['features'].isnull()).mean()", "What percent of bought items are in the feature list?", "1-user_profile_w_features.groupby(['view_spu'])['spu_id'].apply(lambda x: np.isnan(x)).mean()\n\nbuy_spus = user_profile.buy_spu.unique()\ncontained = 0\nspus_with_features = list(spu_fea.spu_id.as_matrix())\nfor buy_spu in buy_spus: \n if buy_spu in spus_with_features:\n contained+=1\nprint(contained/np.float(len(buy_spus)))\n\ncontained\n\nlen(buy_spus)\n\nview_spus = user_profile.view_spu.unique()\ncontained = 0\nspus_with_features = list(spu_fea.spu_id.as_matrix())\nfor view_spu in view_spus: \n if view_spu in spus_with_features:\n contained+=1\nprint(contained/np.float(len(view_spus)))\n\nlen(view_spus)\n", "Evaluation Dataset", "user_profile = pd.read_pickle('../data_user_view_buy/user_profile_items_nonnull_features_20_mins_5_views.pkl')\n\nlen(user_profile)\n\nprint('unique users:{0}').format(len(user_profile.user_id.unique()))\n\nprint('unique items viewed:{0}').format(len(user_profile.view_spu.unique()))\nprint('unique items bought:{0}').format(len(user_profile.buy_spu.unique()))\n\nprint('unique categories viewed:{0}').format(len(user_profile.view_ct3.unique()))\nprint('unique categories bought:{0}').format(len(user_profile.buy_ct3.unique()))\nprint('unique brands viewed:{0}').format(len(user_profile.view_sn.unique()))\nprint('unique brands bought:{0}').format(len(user_profile.buy_sn.unique()))\n\n#user_profile.groupby(['user_id'])['buy_spu'].nunique()\n\n# how many items bought per user in this dataset? \nplt.figure(figsize=(8,3))\nplt.hist(user_profile.groupby(['user_id'])['buy_spu'].nunique(),bins=20,normed=False)\nsns.despine()\nplt.xlabel('number of items bought per user')\nplt.ylabel('number of user')\n\nuser_profile.loc[user_profile.user_id==4283991208,]", "some people have longer viewing trajectories. 
first item was viewed 28hours ahead of time.", "user_profile.loc[user_profile.user_id==6539296,]", "this person bought two items.", "plot_trajectory_scatter(user_profile,samplesize=100,size=10,savedir='../figures/trajectories_evaluation_dataset.png')", "I'd like to make this figure better - easier to tell which rows people are on\n\nSave Notebook", "%%bash \njupyter nbconvert --to slides Exploring_Data.ipynb && mv Exploring_Data.slides.html ../notebook_slides/Exploring_Data_v2.slides.html\njupyter nbconvert --to html Exploring_Data.ipynb && mv Exploring_Data.html ../notebook_htmls/Exploring_Data_v2.html\ncp Exploring_Data.ipynb ../notebook_versions/Exploring_Data_v2.ipynb\n\n# push to s3 \nimport sys\nimport os\nsys.path.append(os.getcwd()+'/../')\nfrom src import s3_data_management\ns3_data_management.push_results_to_s3('Exploring_Data_v1.html','../notebook_htmls/Exploring_Data_v1.html')\ns3_data_management.push_results_to_s3('Exporing_Data_v1.slides.html','../notebook_slides/Exploring_Data_v1.slides.html')\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ledeprogram/algorithms
class6/donow/Devulapalli_Harsha_Class6_DoNow.ipynb
gpl-3.0
[ "1. Import the necessary packages to read in the data, plot, and create a linear regression model", "import pandas as pd\n%matplotlib inline\nimport matplotlib.pyplot as plt # package for doing plotting (necessary for adding the line)\nimport statsmodels.formula.api as smf", "2. Read in the hanford.csv file", "cd C:\\Users\\Harsha Devulapalli\\Desktop\\algorithms\\class6\n\ndf=pd.read_csv(\"data/hanford.csv\")", "<img src=\"images/hanford_variables.png\">\n3. Calculate the basic descriptive statistics on the data", "df.describe()", "4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?", "df.corr()\n\ndf.plot(kind='scatter',x='Exposure',y='Mortality')", "5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure", "lm = smf.ols(formula=\"Mortality~Exposure\",data=df).fit()\n\nlm.params\n\nintercept, slope = lm.params", "6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)", "df.plot(kind=\"scatter\",x=\"Exposure\",y=\"Mortality\")\nplt.plot(df[\"Exposure\"],slope*df[\"Exposure\"]+intercept,\"-\",color=\"red\")\n\nr = df.corr()['Exposure']['Mortality']\nr*r", "7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10", "def predictor(exposure):\n return intercept+float(exposure)*slope\n\npredictor(10)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hankcs/HanLP
plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb
apache-2.0
[ "<h2 align=\"center\">点击下列图标在线运行HanLP</h2>\n<div align=\"center\">\n <a href=\"https://colab.research.google.com/github/hankcs/HanLP/blob/doc-zh/plugins/hanlp_demo/hanlp_demo/zh/tok_mtl.ipynb\" target=\"_blank\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n <a href=\"https://mybinder.org/v2/gh/hankcs/HanLP/doc-zh?filepath=plugins%2Fhanlp_demo%2Fhanlp_demo%2Fzh%2Ftok_mtl.ipynb\" target=\"_blank\"><img src=\"https://mybinder.org/badge_logo.svg\" alt=\"Open In Binder\"/></a>\n</div>\n\n安装\n无论是Windows、Linux还是macOS,HanLP的安装只需一句话搞定:", "!pip install hanlp -U", "加载模型\nHanLP的工作流程是先加载模型,模型的标示符存储在hanlp.pretrained这个包中,按照NLP任务归类。", "import hanlp\nhanlp.pretrained.mtl.ALL # MTL多任务,具体任务见模型名称,语种见名称最后一个字段或相应语料库", "调用hanlp.load进行加载,模型会自动下载到本地缓存。自然语言处理分为许多任务,分词只是最初级的一个。与其每个任务单独创建一个模型,不如利用HanLP的联合模型一次性完成多个任务:", "HanLP = hanlp.load(hanlp.pretrained.mtl.CLOSE_TOK_POS_NER_SRL_DEP_SDP_CON_ELECTRA_BASE_ZH)", "分词\n任务越少,速度越快。如指定仅执行分词,默认细粒度:", "HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok').pretty_print()", "执行粗颗粒度分词:", "HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok/coarse').pretty_print()", "同时执行细粒度和粗粒度分词:", "HanLP('阿婆主来到北京立方庭参观自然语义科技公司。', tasks='tok*')", "coarse为粗分,fine为细分。\n注意\nNative API的输入单位限定为句子,需使用多语种分句模型或基于规则的分句函数先行分句。RESTful同时支持全文、句子、已分词的句子。除此之外,RESTful和native两种API的语义设计完全一致,用户可以无缝互换。\n自定义词典\n自定义词典为分词任务的成员变量,要操作自定义词典,先获取分词任务,以细分标准为例:", "tok = HanLP['tok/fine']\ntok", "自定义词典为分词任务的成员变量:", "tok.dict_combine, tok.dict_force", "HanLP支持合并和强制两种优先级的自定义词典,以满足不同场景的需求。\n不挂词典:", "tok.dict_force = tok.dict_combine = None\nHanLP(\"商品和服务项目\", tasks='tok/fine').pretty_print()", "强制模式\n强制模式优先输出正向最长匹配到的自定义词条(慎用,详见《自然语言处理入门》第二章):", "tok.dict_force = {'和服', '服务项目'}\nHanLP(\"商品和服务项目\", tasks='tok/fine').pretty_print()", "与大众的朴素认知不同,词典优先级最高未必是好事,极有可能匹配到不该分出来的自定义词语,导致歧义。自定义词语越长,越不容易发生歧义。这启发我们将强制模式拓展为强制校正功能。\n强制校正原理相似,但会将匹配到的自定义词条替换为相应的分词结果:", "tok.dict_force = {'和服务': ['和', '服务']}\nHanLP(\"商品和服务项目\", tasks='tok/fine').pretty_print()", "合并模式\n合并模式的优先级低于统计模型,即dict_combine会在统计模型的分词结果上执行最长匹配并合并匹配到的词条。一般情况下,推荐使用该模式。", "tok.dict_force = None\ntok.dict_combine = {'和服', '服务项目'}\nHanLP(\"商品和服务项目\", tasks='tok/fine').pretty_print()", "需要算法基础才能理解,初学者可参考《自然语言处理入门》。\n空格单词\n含有空格、制表符等(Transformer tokenizer去掉的字符)的词语需要用tuple的形式提供:", "tok.dict_combine = {('iPad', 'Pro'), '2个空格'}\nHanLP(\"如何评价iPad Pro ?iPad Pro有2个空格\", tasks='tok/fine')['tok/fine']", "聪明的用户请继续阅读,tuple词典中的字符串其实等价于该字符串的所有可能的切分方式:", "dict(tok.dict_combine.config[\"dictionary\"]).keys()", "单词位置\nHanLP支持输出每个单词在文本中的原始位置,以便用于搜索引擎等场景。在词法分析中,非语素字符(空格、换行、制表符等)会被剔除,此时需要额外的位置信息才能定位每个单词:", "tok.config.output_spans = True\nsent = '2021 年\\nHanLPv2.1 为生产环境带来次世代最先进的多语种NLP技术。'\nword_offsets = HanLP(sent, tasks='tok/fine')['tok/fine']\nprint(word_offsets)", "返回格式为三元组(单词,单词的起始下标,单词的终止下标),下标以字符级别计量。", "for word, begin, end in word_offsets:\n assert word == sent[begin:end]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
flsantos/startup_acquisition_forecast
.ipynb_checkpoints/3_dataset_exploration-checkpoint.ipynb
mit
[ "Dataset Exploration\nHere we'll be exploring how each of the features we have so far relates to the target variable \"status\"\nImporting the dataset", "import pandas as pd\nstartups = pd.read_csv('data/startups_2.csv', index_col=0)\nstartups[:3]", "Let's start exploring the numerical features\nLet's see a heatmap chart of the average features for 'acquired' startups against the complete set of startups", "import seaborn as sns\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef plot_avg_status_against_avg_total(df, status):\n startups_numeric = df.filter(regex=('(number_of|avg_).*|.*(funding_total_usd|funding_rounds|_at|status)'))\n startups_acquired = startups_numeric[startups_numeric['status'] == status]\n\n startups_numeric = startups_numeric.drop('status', 1)\n startups_acquired = startups_acquired.drop('status', 1)\n\n fig, ax = plt.subplots(figsize=(20,20)) \n ax.set_title(status+' startups heatmap')\n sns.heatmap((pd.DataFrame(startups_acquired.mean()).transpose() -startups_numeric.mean())/startups_numeric.std(ddof=0), annot=True, cbar=False, square=True, ax=ax)\n\nplot_avg_status_against_avg_total(startups, 'acquired')", "The same for 'closed':", "plot_avg_status_against_avg_total(startups, 'closed')\n\nplot_avg_status_against_avg_total(startups, 'ipo')\n\nplot_avg_status_against_avg_total(startups, 'operating')", "We can see some logic behavior here. Acquired startups tend to have high venture_funding_rounds and low seed_funding_rounds, while closed startups have few funding_rounds in general and relatively high angel_funding_rounds.\nRegarding the dates variables we also have logical results. Acquired and closed startups haven't had a funding for a higher amount of time.\nWhile operating startups had a funding not so long ago when compared to the rest of the startups.", "# Produce a scatter matrix for each pair of features in the data\n#startups_funding_rounds = startups_numeric.filter(regex=('.*funding_total_usd'))\n#pd.scatter_matrix(startups_funding_rounds, alpha = 0.3, figsize = (14,8), diagonal = 'kde');", "Applying PCA to discover which features best explain the variance in the dataset", "from sklearn.decomposition import PCA\nimport visuals as vs\n\n\nstartups_numeric = startups.filter(regex=('(number_of|avg_).*|.*(funding_total_usd|funding_rounds|_at)'))\n\n# TODO: Apply PCA by fitting the good data with the same number of dimensions as features\npca = PCA(n_components=4)\npca.fit(startups_numeric)\n\n\n# Generate PCA results plot\npca_results = vs.pca_results(startups_numeric, pca)\nstartups_numeric[:3]\n\ngood_data = startups_numeric\nimport numpy as np\ndimensions = dimensions = ['Dimension {}'.format(i) for i in range(1,len(pca.components_)+1)]\n\ncomponents = pd.DataFrame(np.round(pca.components_, 4), columns = good_data.keys())\ncomponents.index = dimensions\ncomponents", "The most important variables here are:\nDimension1: funding_rounds, -last_funding_at, debt_financing_funding_rounds, venture_funding_rounds\nDimension2: -funding_rounds, -last_funding_at, -seed_funding_rounds, venture_funding_rounds\nDimension3: -last_funding_at, equity_crowdfunding_funding_rounds, -seed_funding_rounds\nDimension4: last_funding_at, equity_crowdfunding_funding_rounds, seed_funding_rounds\nNow I'll apply the same PCA algorithm, but just for startups with acquired status", "startups_numeric_acquired = startups.filter(regex=('(number_of|avg_).*|.*(funding_total_usd|funding_rounds|_at|status)'))\nstartups_numeric_acquired = 
startups_numeric_acquired[startups_numeric_acquired['status'] == 'acquired']\nstartups_numeric_acquired = startups_numeric_acquired.drop('status', 1)\n\npca = PCA(n_components=4)\npca.fit(startups_numeric_acquired)\n\n# Generate PCA results plot\npca_results = vs.pca_results(startups_numeric_acquired, pca)", "Okay. We see now that some features tend to express more variance than others.\nWe also see that funding_rounds variable tend to dominate against funding_total_usd values.\nAnd also, that last_funding_at is a very expressing variable.\nLet's start playing with non-numerical variables: dates and Categories", "#startups_numeric = df.filter(regex=('.*(funding_total_usd|funding_rounds|status)'))\nstartups_non_numeric = startups.filter(regex=('^((?!(_acquisitions|_investments|_per_round|funding_total_usd|funding_rounds|_at)).)*$'))\nstartups_non_numeric[:3]", "Let's try some DecisionTrees for categories and see which performance we get.", "startups_non_numeric['status'].value_counts()\nstartups_non_numeric['acquired'] = startups_non_numeric['status'].map({'operating': 0, 'acquired':1, 'closed':0, 'ipo':0})\nstartups_non_numeric = startups_non_numeric.drop('status', 1)\nstartups_non_numeric[:3]\n\nfrom sklearn import tree\ndef visualize_tree(tree_model, feature_names):\n \"\"\"Create tree png using graphviz.\n\n Args\n ----\n tree_model -- scikit-learn DecsisionTree.\n feature_names -- list of feature names.\n \"\"\"\n with open(\"dt.dot\", 'w') as f:\n tree.export_graphviz(tree_model, out_file=f,\n feature_names=feature_names)\n\n command = [\"dot\", \"-Tpng\", \"dt.dot\", \"-o\", \"dt.png\"]\n try:\n subprocess.check_call(command)\n except:\n exit(\"Could not run dot, ie graphviz, to \"\n \"produce visualization\")\n\n#import visuals_tree as vs_tree\n#vs_tree.ModelLearning(startups_non_numeric.drop(['acquired','state_code'], 1), startups_non_numeric['acquired'])\nfrom sklearn import tree\nfrom sklearn.cross_validation import cross_val_score\nfrom sklearn import tree\nfrom sklearn import grid_search\nfrom sklearn import preprocessing\n\n\n#clf = tree.DecisionTreeClassifier(random_state=0)\n#cross_val_score(clf, startups_non_numeric.drop(['acquired','state_code'], 1), startups_non_numeric['acquired'], cv=10)\n\n\n#Drop state_code feature\nfeatures = startups_non_numeric.drop(['acquired','state_code'], 1)\n\n#Convert state_code feature to number\n#features = startups_non_numeric.drop(['acquired'], 1)\n#features['state_code'] = preprocessing.LabelEncoder().fit_transform(features['state_code'])\n\n#Convert state_code to dummy variables\nfeatures = pd.get_dummies(startups_non_numeric.drop(['acquired'], 1), prefix='state', columns=['state_code'])\n\n\n#Merge numeric_features to non-numeric-features\nfeatures_all = pd.concat([features, startups_numeric], axis=1, ignore_index=False)\n#features = features_all\n\nfeatures = startups_numeric\n\n\n\n\n\n\nparameters = {'max_depth':range(5,20)}\nclf = grid_search.GridSearchCV(tree.DecisionTreeClassifier(), parameters, n_jobs=5, scoring='roc_auc')\nclf.fit(X=features, y=startups_non_numeric['acquired'])\ntree_model = clf.best_estimator_\nprint (clf.best_score_, clf.best_params_) \nprint tree.export_graphviz(clf.best_estimator_, feature_names=list(features.columns))\n\nimport visuals_tree as vs_tree\nvs_tree = reload(vs_tree)\nvs_tree.ModelComplexity(features_all, startups_non_numeric['acquired'])", "Only categories and states are not enough for making a good prediction. With that, maximum (roc_auc) of 0.64 was achieved. 
With attributes, a simple decisionTreeClassifier achieved 0.84 roc_auc.\nSaving the dataset ready to be tested by different learning algorithms", "all = pd.concat([features_all, startups_non_numeric['acquired']], axis=1, ignore_index=False)\nall.to_csv('data/startups_3.csv')\n\n\nall_with_status = all.join(startups['status'])\nall_with_status_without_operating = all_with_status[all_with_status['status'] != 'operating']\nall_with_status_without_operating.shape\nall_without_operating = all_with_status_without_operating.drop('status', 1)\nall_without_operating.to_csv('data/startups_not_operating_3.csv')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
francof2a/APC
sources/TempScript.ipynb
gpl-3.0
[ "TempScrip\nScript to developing and test partial functionality.", "import dataset as ds\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\n\n\n# Download database\nds.download('UCI HAR')", "Reading Dataset\nIdea here is to read the files in the dataset to extract data for training, testing and the corresponding (activity) labels for them. The outcome is a set of numpy arrays for each set.", "# Paths and filenames\nDATASET_PATH = \"../dataset/UCI HAR/UCI HAR Dataset\"\nTEST_RELPATH = \"/test\"\nTRAIN_RELPATH = \"/train\"\n\nVARS_FILENAMES = [\n 'body_acc_x_',\n 'body_acc_y_',\n 'body_acc_z_',\n 'body_gyro_x_',\n 'body_gyro_y_',\n 'body_gyro_z_',\n 'total_acc_x_',\n 'total_acc_y_',\n 'total_acc_z_']\n\nLABELS_DEF_FILE = DATASET_PATH + \"/activity_labels.txt\"\n\n# Make a list of files for training\ntrainFiles = [DATASET_PATH + TRAIN_RELPATH + '/Inertial Signals/' + var_filename + 'train.txt' for var_filename in VARS_FILENAMES]\n# Make an tensor with data for training\ndataTrain = ds.get_data(trainFiles, print_on = True)\n# Show dataTrain dimensions\nprint dataTrain.shape\n\n# Make a list of files for testing\ntestFiles = [DATASET_PATH + TEST_RELPATH + '/Inertial Signals/' + var_filename + 'test.txt' for var_filename in VARS_FILENAMES]\n# Make an tensor with data for training\ndataTest = ds.get_data(testFiles, print_on = True)\n# Show dataTrain dimensions\nprint dataTest.shape\n\n\n# Sensor 0 : Sample 1 (128 samples) (Training set)\nfig = plt.figure()\nplt.figure(figsize=(16,8))\ndataTrain[0,1,:]\nplt.plot(dataTrain[0,1,:])\nplt.show()\n\n# Sensor 1 : Sample 2 (128 samples) (Test set)\nfig = plt.figure()\nplt.figure(figsize=(16,8))\ndataTest[1,2,:]\nplt.plot(dataTest[1,2,:])\nplt.show()\n\n# Get the labels values for training samples\ntrainLabelsFile = DATASET_PATH + TRAIN_RELPATH + '/' + 'y_train.txt'\nlabelsTrain = ds.get_labels(trainLabelsFile, print_on = True)\n\nprint labelsTrain.shape #show dimension\n\n# Get the labels values for testing samples\ntestLabelsFile = DATASET_PATH + TEST_RELPATH + '/' + 'y_test.txt'\nlabelsTest = ds.get_labels(testLabelsFile, print_on = True)\n\nprint labelsTest.shape #show dimension\n\n\n# convert outputs to one-hot code\nlabelsTrainEncoded = ds.encode_onehot(labelsTrain)\nlabelsTestEncoded = ds.encode_onehot(labelsTest)\n\n# Make a dictionary\nlabelDictionary = ds.make_labels_dictionary(LABELS_DEF_FILE)\n\nprint label_dict\nprint \"\\n\"\n\nsel = 300\nprint \"label {} ({}) -> {}\".format(labelsTrain[sel], label_dict[labelsTrain[sel]], labelsTrainEncoded[sel])\n", "Filtered plots", "activityToPlot = 2.0\n\nfig = plt.figure()\nplt.figure(figsize=(16,8))\nplt.title(label_dict[activityToPlot])\n\nfor idx, activity in enumerate(labelsTrain):\n if activityToPlot == activity:\n plt.plot(dataTrain[4,idx,:])\n\n\nplt.show()", "RNN\nfirst tries", "numLayers = 50;\nlstm_cell = tf.contrib.rnn.BasicRNNCell(numLayers)\n\nlstm_cell" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
antoniomezzacapo/qiskit-tutorial
community/terra/qis_adv/vaidman_detection_test.ipynb
apache-2.0
[ "<img src=\"../../../images/qiskit-heading.gif\" alt=\"Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook\" width=\"500 px\" align=\"left\">\nThe Vaidman Detection Test: Interaction Free Measurement\nThe latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorial.\n\nContributors\nAlex Breitweiser\nIntroduction\nOne surprising result of quantum mechanics is the ability to measure something without ever directly \"observing\" it. This interaction-free measurement cannot be reproduced in classical mechanics. The prototypical example is the Elitzur–Vaidman Bomb Experiment - in which one wants to test whether bombs are active without detonating them. In this example we will test whether an unknown operation is null (the identity) or an X gate, corresponding to a dud or a live bomb.\nThe Algorithm\nThe algorithm will use two qubits, $q_1$ and $q_2$, as well as a small parameter, $\\epsilon = \\frac{\\pi}{n}$ for some integer $n$. Call the unknown gate, which is either the identity or an X gate, $G$, and assume we have it in a controlled form. The algorithm is then:\n1. Start with both $q_1$ and $q_2$ in the $|0\\rangle$ state\n2. Rotate $q_1$ by $\\epsilon$ about the Y axis\n3. Apply a controlled $G$ on $q_2$, conditioned on $q_1$\n4. Measure $q_2$\n5. Repeat (2-4) $n$ times\n6. Measure $q_1$\n\nExplanation and proof of correctness\nThere are two cases: Either the gate is the identity (a dud), or it is an X gate (a live bomb).\nCase 1: Dud\nAfter rotation, $q_1$ is now approximately\n$$q_1 \\approx |0\\rangle + \\frac{\\epsilon}{2} |1\\rangle$$\nSince the unknown gate is the identity, the controlled gate leaves the two qubit state separable,\n$$q_1 \\times q_2 \\approx (|0\\rangle + \\frac{\\epsilon}{2} |1\\rangle) \\times |0\\rangle$$\nand measurement is trivial (we will always measure $|0\\rangle$ for $q_2$).\nRepetition will not change this result - we will always keep separability and $q_2$ will remain in $|0\\rangle$.\nAfter n steps, $q_1$ will flip by $\\pi$ to $|1\\rangle$, and so measuring it will certainly yield $1$. Therefore, the output register for a dud bomb will read:\n$$000...01$$\nCase 2: Live\nAgain, after rotation, $q_1$ is now approximately\n$$q_1 \\approx |0\\rangle + \\frac{\\epsilon}{2} |1\\rangle$$\nBut, since the unknown gate is now an X gate, the combined state after $G$ is now\n$$q_1 \\times q_2 \\approx |00\\rangle + \\frac{\\epsilon}{2} |11\\rangle$$\nMeasuring $q_2$ now might yield $1$, in which case we have \"measured\" the live bomb (obtained a result which differs from that of a dud) and it explodes. However, this only happens with a probability proportional to $\\epsilon^2$. In the vast majority of cases, we will measure $0$ and the entire system will collapse back to\n$$q_1 \\times q_2 = |00\\rangle$$\nAfter every step, the system will most likely return to the original state, and the final measurement of $q_1$ will yield $0$. Therefore, the most likely outcome of a live bomb is\n$$000...00$$\nwhich will identify a live bomb without ever \"measuring\" it. 
If we ever obtain a 1 in the bits preceding the final bit, we will have detonated the bomb, but this will only happen with probability of order\n$$P \\propto n \\epsilon^2 \\propto \\epsilon$$\nThis probability may be made arbitrarily small at the cost of an arbitrarily long circuit.\nGenerating Random Bombs\nA test set must be generated to experiment on - this can be done by classical (pseudo)random number generation, but as long as we have access to a quantum computer we might as well take advantage of the ability to generate true randomness.", "# useful additional packages \nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport numpy as np\nfrom collections import Counter #Use this to convert results from list to dict for histogram\n\n# importing QISKit\nfrom qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister\nfrom qiskit import execute, Aer, IBMQ\nfrom qiskit.backends.ibmq import least_busy\n\n# import basic plot tools\nfrom qiskit.tools.visualization import plot_histogram\n\n# To use IBMQ Quantum Experience\nIBMQ.load_accounts()", "We will generate a test set of 50 \"bombs\", and each \"bomb\" will be run through a 20-step measurement circuit. We set up the program as explained in previous examples.", "# Use local qasm simulator\nbackend = Aer.get_backend('qasm_simulator')\n\n# Use the IBMQ Quantum Experience\n# backend = least_busy(IBMQ.backends())\n\nN = 50 # Number of bombs\nsteps = 20 # Number of steps for the algorithm, limited by maximum circuit depth\neps = np.pi / steps # Algorithm parameter, small\n\n# Prototype circuit for bomb generation\nq_gen = QuantumRegister(1, name='q_gen')\nc_gen = ClassicalRegister(1, name='c_gen')\nIFM_gen = QuantumCircuit(q_gen, c_gen, name='IFM_gen')\n\n# Prototype circuit for bomb measurement\nq = QuantumRegister(2, name='q')\nc = ClassicalRegister(steps+1, name='c')\nIFM_meas = QuantumCircuit(q, c, name='IFM_meas')", "Generating a random bomb is achieved by simply applying a Hadamard gate to a $q_1$, which starts in $|0\\rangle$, and then measuring. This randomly gives a $0$ or $1$, each with equal probability. We run one such circuit for each bomb, since circuits are currently limited to a single measurement.", "# Quantum circuits to generate bombs\nqc = []\ncircuits = [\"IFM_gen\"+str(i) for i in range(N)]\n# NB: Can't have more than one measurement per circuit\nfor circuit in circuits:\n IFM = QuantumCircuit(q_gen, c_gen, name=circuit)\n IFM.h(q_gen[0]) #Turn the qubit into |0> + |1>\n IFM.measure(q_gen[0], c_gen[0])\n qc.append(IFM)\n_ = [i.qasm() for i in qc] # Suppress the output", "Note that, since we want to measure several discrete instances, we do not want to average over multiple shots. Averaging would yield partial bombs, but we assume bombs are discretely either live or dead.", "result = execute(qc, backend=backend, shots=1).result() # Note that we only want one shot\nbombs = []\nfor circuit in qc:\n for key in result.get_counts(circuit): # Hack, there should only be one key, since there was only one shot\n bombs.append(int(key))\n#print(', '.join(('Live' if bomb else 'Dud' for bomb in bombs))) # Uncomment to print out \"truth\" of bombs\nplot_histogram(Counter(('Live' if bomb else 'Dud' for bomb in bombs))) #Plotting bomb generation results", "Testing the Bombs\nHere we implement the algorithm described above to measure the bombs. 
As with the generation of the bombs, it is currently impossible to take several measurements in a single circuit - therefore, it must be run on the simulator.", "# Use local qasm simulator\nbackend = Aer.get_backend('qasm_simulator')\n\nqc = []\ncircuits = [\"IFM_meas\"+str(i) for i in range(N)]\n#Creating one measurement circuit for each bomb\nfor i in range(N):\n bomb = bombs[i]\n IFM = QuantumCircuit(q, c, name=circuits[i])\n for step in range(steps):\n IFM.ry(eps, q[0]) #First we rotate the control qubit by epsilon\n if bomb: #If the bomb is live, the gate is a controlled X gate\n IFM.cx(q[0],q[1])\n #If the bomb is a dud, the gate is a controlled identity gate, which does nothing\n IFM.measure(q[1], c[step]) #Now we measure to collapse the combined state\n IFM.measure(q[0], c[steps])\n qc.append(IFM)\n_ = [i.qasm() for i in qc] # Suppress the output\nresult = execute(qc, backend=backend, shots=1, max_credits=5).result()\n\ndef get_status(counts):\n # Return whether a bomb was a dud, was live but detonated, or was live and undetonated\n # Note that registers are returned in reversed order\n for key in counts:\n if '1' in key[1:]:\n #If we ever measure a '1' from the measurement qubit (q1), the bomb was measured and will detonate\n return '!!BOOM!!'\n elif key[0] == '1':\n #If the control qubit (q0) was rotated to '1', the state never entangled because the bomb was a dud\n return 'Dud'\n else:\n #If we only measured '0' for both the control and measurement qubit, the bomb was live but never set off\n return 'Live'\n\nresults = {'Live': 0, 'Dud': 0, \"!!BOOM!!\": 0}\nfor circuit in qc:\n status = get_status(result.get_counts(circuit))\n results[status] += 1\nplot_histogram(results)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
vzg100/Post-Translational-Modification-Prediction
old/Phosphorylation Sequence Tests -MLP -dbptm+ELM -scalesTrain-VectorAvr..ipynb
mit
[ "Template for test", "from pred import Predictor\nfrom pred import sequence_vector\nfrom pred import chemical_vector", "Controlling for Random Negatve vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.\nIncluded is N Phosphorylation however no benchmarks are available, yet. \nTraining data is from phospho.elm and benchmarks are from dbptm. \nNote: SMOTEEN seems to preform best", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nscale = [-1, \"standard\", \"robust\", \"minmax\", \"max\"]\n\n\nfor i in par:\n for j in scale: \n print(\"y\", i, \" \", j)\n y = Predictor()\n y.load_data(file=\"Data/Training/clean_s_filtered.csv\")\n y.process_data(vector_function=\"sequence\", amino_acid=\"S\", imbalance_function=i, random_data=0)\n y.supervised_training(\"mlp_adam\", scale=j)\n y.benchmark(\"Data/Benchmarks/phos.csv\", \"S\")\n del y\n print(\"x\", i, \" \", j)\n x = Predictor()\n x.load_data(file=\"Data/Training/clean_s_filtered.csv\")\n x.process_data(vector_function=\"sequence\", amino_acid=\"S\", imbalance_function=i, random_data=1)\n x.supervised_training(\"mlp_adam\", scale=j)\n x.benchmark(\"Data/Benchmarks/phos.csv\", \"S\")\n del x\n", "Y Phosphorylation", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nscale = [-1, \"standard\", \"robust\", \"minmax\", \"max\"]\n\nfor i in par:\n for j in scale: \n print(\"y\", i, \" \", j)\n y = Predictor()\n y.load_data(file=\"Data/Training/clean_s_filtered.csv\")\n y.process_data(vector_function=\"sequence\", amino_acid=\"Y\", imbalance_function=i, random_data=0)\n y.supervised_training(\"mlp_adam\", scale=j)\n y.benchmark(\"Data/Benchmarks/phos.csv\", \"Y\")\n del y\n print(\"x\", i, \" \", j)\n x = Predictor()\n x.load_data(file=\"Data/Training/clean_s_filtered.csv\")\n x.process_data(vector_function=\"sequence\", amino_acid=\"Y\", imbalance_function=i, random_data=1)\n x.supervised_training(\"mlp_adam\", scale=j)\n x.benchmark(\"Data/Benchmarks/phos.csv\", \"Y\")\n del x\n\n", "T Phosphorylation", "par = [\"pass\", \"ADASYN\", \"SMOTEENN\", \"random_under_sample\", \"ncl\", \"near_miss\"]\nscale = [-1, \"standard\", \"robust\", \"minmax\", \"max\"]\n\nfor i in par:\n for j in scale: \n print(\"y\", i, \" \", j)\n y = Predictor()\n y.load_data(file=\"Data/Training/clean_s_filtered.csv\")\n y.process_data(vector_function=\"sequence\", amino_acid=\"T\", imbalance_function=i, random_data=0)\n y.supervised_training(\"mlp_adam\", scale=j)\n y.benchmark(\"Data/Benchmarks/phos.csv\", \"T\")\n del y\n print(\"x\", i, \" \", j)\n x = Predictor()\n x.load_data(file=\"Data/Training/clean_s_filtered.csv\")\n x.process_data(vector_function=\"sequence\", amino_acid=\"T\", imbalance_function=i, random_data=1)\n x.supervised_training(\"mlp_adam\", scale=j)\n x.benchmark(\"Data/Benchmarks/phos.csv\", \"T\")\n del x\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AndysDeepAbstractions/deep-learning
gan_mnist/Intro_to_GANs_Exercises.ipynb
mit
[ "Generative Adversarial Network\nIn this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!\nGANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:\n\nPix2Pix \nCycleGAN\nA whole list\n\nThe idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator.\n\nThe general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.\nThe output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.", "%matplotlib inline\n\nimport pickle as pkl\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data')", "Model Inputs\nFirst we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.\n\nExercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.", "def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real') \n inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')\n \n return inputs_real, inputs_z", "Generator network\n\nHere we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.\nVariable Scope\nHere we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.\nWe could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. 
So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.\nTo use tf.variable_scope, you use a with statement:\npython\nwith tf.variable_scope('scope_name', reuse=False):\n # code here\nHere's more from the TensorFlow documentation to get another look at using tf.variable_scope.\nLeaky ReLU\nTensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:\n$$\nf(x) = max(\\alpha * x, x)\n$$\nTanh Output\nThe generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.\n\nExercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.", "def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):\n ''' Build the generator network.\n \n Arguments\n ---------\n z : Input tensor for the generator\n out_dim : Shape of the generator output\n n_units : Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out, logits: \n '''\n with tf.variable_scope('generator', reuse=reuse):\n # Hidden layer\n h1 = tf.layers.dense(z, n_units, activation=None )#tf.nn.elu)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n # Logits and tanh output\n logits = tf.layers.dense(h1, out_dim, activation=None)\n out = tf.tanh(logits)\n #out = tf.sigmoid(logits)\n \n return out", "Discriminator\nThe discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.\n\nExercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. 
Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.", "def discriminator(x, n_units=128, reuse=False, alpha=0.01):\n ''' Build the discriminator network.\n \n Arguments\n ---------\n x : Input tensor for the discriminator\n n_units: Number of units in hidden layer\n reuse : Reuse the variables with tf.variable_scope\n alpha : leak parameter for leaky ReLU\n \n Returns\n -------\n out, logits: \n '''\n with tf.variable_scope('discriminator', reuse=reuse):\n # Hidden layer\n h1 = tf.layers.dense(x, n_units, activation= None)#tf.nn.elu)\n # Leaky ReLU\n h1 = tf.maximum(alpha * h1, h1)\n \n logits = tf.layers.dense(h1, 1, activation=None)\n #out = tf.sigmoid(logits)\n out = tf.tanh(logits)\n\n \n return out, logits", "Hyperparameters", "# Size of input image to discriminator\ninput_size = 784 # 28x28 MNIST images flattened\n# Size of latent vector to generator\nz_size = 100\n# Sizes of hidden layers in generator and discriminator\ng_hidden_size = 128\nd_hidden_size = 128\n# Leak factor for leaky ReLU\nalpha = 0.01\n# Label smoothing \nsmooth = 0.25", "Build network\nNow we're building the network from the functions defined above.\nFirst is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.\nThen, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.\nThen the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).\n\nExercise: Build the network from the functions you defined earlier.", "tf.reset_default_graph()\n# Create our input placeholders\ninput_real, input_z = model_inputs(input_size,z_size)\n\n# Generator network here\ng_model = generator(input_z, input_size)\n# g_model is the generator output\n\n# Disriminator network here\nd_model_real, d_logits_real = discriminator(input_real)\nd_model_fake, d_logits_fake = discriminator(g_model, reuse=True)", "Discriminator and Generator Losses\nNow we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like \npython\ntf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\nFor the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)\nThe discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. 
These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.\nFinally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images.\n\nExercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.", "# Calculate losses\nd_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, \n labels = tf.ones_like(d_logits_real) * (1 - smooth)))\n\nd_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, \n labels = tf.ones_like(d_logits_real) * -(1 - smooth)))\n\nd_loss = d_loss_real + d_loss_fake\n\ng_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, \n labels = tf.ones_like(d_logits_fake) * (1 - smooth)))", "Optimizers\nWe want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.\nFor the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). \nWe can do something similar with the discriminator. All the variables in the discriminator start with discriminator.\nThen, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.\n\nExercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. 
Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.", "# Optimizers\nlearning_rate = 0.002\n\n# Get the trainable_variables, split into G and D parts\nt_vars = tf.trainable_variables()\ng_vars = [var for var in t_vars if var.name.startswith('generator')]\nd_vars = [var for var in t_vars if var.name.startswith('discriminator')]\n\nd_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)\ng_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)", "Training", "def view_samples(epoch, samples):\n fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\n im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n \n plt.show()\n \n return fig, axes\n\nbatch_size = 100\nepochs = 1\nsamples = []\nlosses = []\nsaver = tf.train.Saver(var_list = g_vars)\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n \n # Get images, reshape and rescale to pass to D\n batch_images = batch[0].reshape((batch_size, 784))\n batch_images = batch_images*2 - 1\n \n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n \n # Run optimizers\n _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})\n _ = sess.run(g_train_opt, feed_dict={input_z: batch_z})\n \n # At the end of each epoch, get the losses and print them out\n %time train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})\n train_loss_g = g_loss.eval({input_z: batch_z})\n \n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g)) \n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n \n # Sample from generator as we're training for viewing afterwards\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\n samples.append(gen_samples)\n saver.save(sess, './checkpoints/generator_my.ckpt')\n _ = view_samples(-1, samples)\n\n# Save training generator samples\nwith open('train_samples_my.pkl', 'wb') as f:\n pkl.dump(samples, f)", "Training loss\nHere we'll check out the training losses for the generator and discriminator.", "%matplotlib inline\n\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator')\nplt.plot(losses.T[1], label='Generator')\nplt.title(\"Training Losses\")\nplt.legend()", "Generator samples from training\nHere we can view samples of images from the generator. First we'll look at images taken while training.", "# Load samples from generator taken while training\nwith open('train_samples.pkl', 'rb') as f:\n samples = pkl.load(f)", "These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.", "_ = view_samples(-1, samples)", "Below I'm showing the generated images as the network was training, every 10 epochs. 
With bonus optical illusion!", "rows, cols = 10, 6\nfig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)\n\nfor sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):\n for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):\n ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)", "It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.\nSampling from the generator\nWe can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!", "saver = tf.train.Saver(var_list=g_vars)\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n sample_z = np.random.uniform(-1, 1, size=(16, z_size))\n gen_samples = sess.run(\n generator(input_z, input_size, reuse=True),\n feed_dict={input_z: sample_z})\nview_samples(0, [gen_samples])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
zzsza/TIL
Tensorflow-Extended/TFDV(data validation) example.ipynb
mit
[ "Github\nPython2에서 진행\nPython3에서도 되긴 하는데, 몇 기능이 안될듯(Apache Beam이 아직 파이썬2만 지원)", "from __future__ import print_function\nimport sys, os\nimport tempfile, urllib, zipfile\n# Confirm that we're using Python 2\nassert sys.version_info.major is 2, 'Oops, not running Python 2'\n\n# Set up some globals for our file paths\nBASE_DIR = tempfile.mkdtemp()\nDATA_DIR = os.path.join(BASE_DIR, 'data')\nOUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output')\nTRAIN_DATA = os.path.join(DATA_DIR, 'train', 'data.csv')\nEVAL_DATA = os.path.join(DATA_DIR, 'eval', 'data.csv')\nSERVING_DATA = os.path.join(DATA_DIR, 'serving', 'data.csv')\n\n# Download the zip file from GCP and unzip it\nzip, headers = urllib.urlretrieve('https://storage.googleapis.com/tfx-colab-datasets/chicago_data.zip')\nzipfile.ZipFile(zip).extractall(BASE_DIR)\nzipfile.ZipFile(zip).close()\n\nprint(\"Here's what we downloaded:\")\n!ls -lR {os.path.join(BASE_DIR, 'data')}\n\n!pip2 install -q tensorflow_data_validation\nimport tensorflow_data_validation as tfdv\n\nprint('TFDV version: {}'.format(tfdv.version.__version__))", "Compute and visualize statistics\n\ntfdv.generate_statistics_from_csv로 데이터 분포 생성\n많은 데이터일 경우 내부적으로 Apache Beam을 사용해 병렬처리\nBeam의 PTransform과 결합 가능", "train_stats = tfdv.generate_statistics_from_csv(data_location=TRAIN_DATA)", "tfdv.visualize_statistics를 사용해 시각화, 내부적으론 Facets을 사용한다 함\nnumeric, categorical feature들을 나눔", "tfdv.visualize_statistics(train_stats)\n", "Infer a scahema\n\n데이터를 통해 스키마 추론\ntfdv.infer_schema\ntfdv.display_schema", "schema = tfdv.infer_schema(statistics=train_stats)\ntfdv.display_schema(schema=schema)", "평가 데이터 에러 체크\n\ntrain, validation에서 다른 데이터들이 있음\n캐글할 때 유용할듯", "# Compute stats for evaluation data\neval_stats = tfdv.generate_statistics_from_csv(data_location=EVAL_DATA)\n\n# Compare evaluation data with training data\ntfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,\n lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')", "Check for evaluation anomalies\n\ntrain 데이터엔 없었는데 validation에 생긴 데이터 있는지 확인", "# Check eval data for errors by validating the eval data stats using the previously inferred schema.\nanomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)\ntfdv.display_anomalies(anomalies)", "Fix evaluation anomalies in the schema\n\n수정", "# Relax the minimum fraction of values that must come from the domain for feature company.\ncompany = tfdv.get_feature(schema, 'company')\ncompany.distribution_constraints.min_domain_mass = 0.9\n\n# Add new value to the domain of feature payment_type.\npayment_type_domain = tfdv.get_domain(schema, 'payment_type')\npayment_type_domain.value.append('Prcard')\n\n# Validate eval stats after updating the schema \nupdated_anomalies = tfdv.validate_statistics(eval_stats, schema)\ntfdv.display_anomalies(updated_anomalies)", "Schema Environments\n\nserving할 때도 스키마 체크해야 함\nEnvironments can be used to express such requirements. 
In particular, features in schema can be associated with a set of environments using default_environment, in_environment and not_in_environment.", "serving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA)\nserving_anomalies = tfdv.validate_statistics(serving_stats, schema)\n\ntfdv.display_anomalies(serving_anomalies)", "There is an Int value => change it to Float", "options = tfdv.StatsOptions(schema=schema, infer_type_from_schema=True)\nserving_stats = tfdv.generate_statistics_from_csv(SERVING_DATA, stats_options=options)\nserving_anomalies = tfdv.validate_statistics(serving_stats, schema)\n\ntfdv.display_anomalies(serving_anomalies)\n\n# All features are by default in both TRAINING and SERVING environments.\nschema.default_environment.append('TRAINING')\nschema.default_environment.append('SERVING')\n\n# Specify that 'tips' feature is not in SERVING environment.\ntfdv.get_feature(schema, 'tips').not_in_environment.append('SERVING')\n\nserving_anomalies_with_env = tfdv.validate_statistics(\n    serving_stats, schema, environment='SERVING')\n\ntfdv.display_anomalies(serving_anomalies_with_env)", "Check for drift and skew\n\n\nDrift\n\nDrift detection is supported for categorical features and between consecutive spans of data (i.e., between span N and span N+1), such as between different days of training data. We express drift in terms of L-infinity distance, and you can set the threshold distance so that you receive warnings when the drift is higher than is acceptable. Setting the correct distance is typically an iterative process requiring domain knowledge and experimentation.\n\n\n\nSkew\n\nSchema Skew\nWhen training and serving data do not share the same schema\n\n\nFeature Skew\nWhen the feature generation logic changes\n\n\nDistribution Skew\nWhen the data distribution differs between training and serving", "# Add skew comparator for 'payment_type' feature.\npayment_type = tfdv.get_feature(schema, 'payment_type')\npayment_type.skew_comparator.infinity_norm.threshold = 0.01\n\n# Add drift comparator for 'company' feature.\ncompany=tfdv.get_feature(schema, 'company')\ncompany.drift_comparator.infinity_norm.threshold = 0.001\n\nskew_anomalies = tfdv.validate_statistics(train_stats, schema,\n                                          previous_statistics=eval_stats,\n                                          serving_statistics=serving_stats)\n\ntfdv.display_anomalies(skew_anomalies)", "Freeze the schema\n\nSave the schema to disk", "from tensorflow.python.lib.io import file_io\nfrom google.protobuf import text_format\n\nfile_io.recursive_create_dir(OUTPUT_DIR)\nschema_file = os.path.join(OUTPUT_DIR, 'schema.pbtxt')\ntfdv.write_schema_text(schema, schema_file)\n\n!cat {schema_file}", "When to use TFDV\n\nCheck whether unexpected features suddenly start arriving\nCheck, via the decision surface, whether the model trained well\nPrevent feature engineering mistakes" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
yw-fang/readingnotes
machine-learning/McKinney-pythonbook2013/chapter07-note.ipynb
apache-2.0
[ "阅读笔记\n 作者:方跃文 \n Email: fyuewen@gmail.com \n** 时间:始于2018年11月17日, 结束写作于 2018年\n第七章 数据规整化:清理、转换、合并、重塑\nPANDAS 的产生是以运用为导向的,因此它包含了许多实际工作中需要的数据清理方式。\n合并数据集\npandas对象可以通过一些内置的方法进行合并:\n\n\npandas.merge, 可以根据一个或者多个key将数据进行连接\n\n\npandas.concat , 可以沿着一条轴将多个数据堆叠在一起。\n\n\n实例方法中的 combine.fist 可以将重复的数据编排在一起,并且用一个对象中的值填缺另一个对象中的缺失值。\n\n\n数据库风格的DataFrame合并 (database-style DataFrame joins)\nPands 中的merge,允许我们根据一个或者多个keys来合并datasets,这种操作实现类似于基于SQL数据中的 join 方法。", "import pandas as pd\n\ndef special_sign(sign, times):\n # sign is string, times is integer\n str_list = sign*times\n new_str = ''.join([i for i in str_list])\n return(new_str)\n\ndf1 = pd.DataFrame({'key':list('bbacaab'),\n 'data1': range(7)})\ndf2 = pd.DataFrame({'key': list('abd'),\n 'data2': range(3)})\nprint(df1)\nprint(special_sign('#',15))\nprint(df2)\n\n## many-to-one join; without specifying which column to join on\npd.merge(df1, df2) \n\n## many-to-one join; wit specifying which column to join on\npd.merge(df1, df2, on = 'key')", "轴向连接\nDataFrame 中有很丰富的merge方法,此外还有一种数据合并运算被称作连接(concatenation)、binding、stacking。\n在Numpy中,也有concatenation函数。", "import numpy as np\n\narr1 = np.arange(12).reshape(3,4)\nprint(arr1)\n\nnp.concatenate([arr1, arr1], axis=1)", "对于pandas对象,带有标签的轴使我们能够进一步推广数组的连接运算。\npandas中的concate函数提供了一些功能,来操作这种合并运算\n下方这个例子中,有三个series,这三个series的索引没有重叠,我们来看看,concate是如何给出合并运算的。", "import pandas as pd\nseri1 = pd.Series([-1,2], index=list('ab'))\nseri2 = pd.Series([2,3,4], index=list('cde'))\nseri3 = pd.Series([5,6], index=list('fg'))\nprint(seri1)\nprint(seri2)\nprint(seri3)\n\nprint(seri1)\n\npd.concat([seri1,seri2,seri3])", "By default,concat是在axis=0上工作的,最终产生一个全新的Series。如果传入axis=1,那么结果就会成为一个 DataFrame (axis=1 是列)", "pd.concat([seri1, seri2, seri3],axis=1, sort=False)\n\npd.concat([seri1, seri2, seri3],axis=1, sort=False,join='inner') # 传入 inner,得到并集,该处并集为none\n\nseri4 = pd.concat([seri1*5, seri3])\nprint(seri4)\n\nseri4 = pd.concat([seri1*5, seri3],axis=1, join='inner')\nprint(seri4)", "Appendix during writing this note\ndefine a function which returns a string with repeated letters", "# Ref: https://stackoverflow.com/questions/38273353/how-to-repeat-individual-characters-in-strings-in-python\ndef special_sign(sign, times):\n # sign is string, times is integer\n str_list = sign*times\n new_str = ''.join([i for i in str_list])\n return(new_str)\n\nprint(special_sign('*',20))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cfjhallgren/shogun
doc/ipython-notebooks/structure/FGM.ipynb
gpl-3.0
[ "General Structured Output Models with Shogun Machine Learning Toolbox\nShell Hu (GitHub ID: hushell)\nThanks Patrick Pletscher and Fernando J. Iglesias García for taking time to help me finish the project! Shoguners = awesome! Me = grateful!\nIntroduction\nThis notebook illustrates the training of a <a href=\"http://en.wikipedia.org/wiki/Factor_graph\">factor graph</a> model using <a href=\"http://en.wikipedia.org/wiki/Structured_support_vector_machine\">structured SVM</a> in Shogun. We begin by giving a brief outline of factor graphs and <a href=\"http://en.wikipedia.org/wiki/Structured_prediction\">structured output learning</a> followed by the corresponding API in Shogun. Finally, we test the scalability by performing an experiment on a real <a href=\"http://en.wikipedia.org/wiki/Optical_character_recognition\">OCR</a> data set for <a href=\"http://en.wikipedia.org/wiki/Handwriting_recognition\">handwritten character recognition</a>.\nFactor Graph\nA factor graph explicitly represents the factorization of an undirected graphical model in terms of a set of factors (potentials), each of which is defined on a clique in the original graph [1]. For example, a MRF distribution can be factorized as \n$$\nP(\\mathbf{y}) = \\frac{1}{Z} \\prod_{F \\in \\mathcal{F}} \\theta_F(\\mathbf{y}_F),\n$$\nwhere $F$ is the factor index, $\\theta_F(\\mathbf{y}_F)$ is the energy with respect to assignment $\\mathbf{y}_F$. In this demo, we focus only on table representation of factors. Namely, each factor holds an energy table $\\theta_F$, which can be viewed as an unnormalized CPD. According to different factorizations, there are different types of factors. Usually we assume the Markovian property is held, that is, factors have the same parameterization if they belong to the same type, no matter how location or time changes. In addition, we have parameter free factor type, but nothing to learn for such kinds of types. More detailed implementation will be explained later.\nStructured Prediction\nStructured prediction typically involves an input $\\mathbf{x}$ (can be structured) and a structured output $\\mathbf{y}$. A joint feature map $\\Phi(\\mathbf{x},\\mathbf{y})$ is defined to incorporate structure information into the labels, such as chains, trees or general graphs. In general, the linear parameterization will be used to give the prediction rule. We leave the kernelized version for future work.\n$$\n\\hat{\\mathbf{y}} = \\underset{\\mathbf{y} \\in \\mathcal{Y}}{\\operatorname{argmax}} \\langle \\mathbf{w}, \\Phi(\\mathbf{x},\\mathbf{y}) \\rangle \n$$\nwhere $\\Phi(\\mathbf{x},\\mathbf{y})$ is the feature vector by mapping local factor features to corresponding locations in terms of $\\mathbf{y}$, and $\\mathbf{w}$ is the global parameter vector. In factor graph model, parameters are associated with a set of factor types. So $\\mathbf{w}$ is a collection of local parameters. \nThe parameters are learned by regularized risk minimization, where the risk defined by user provided loss function $\\Delta(\\mathbf{y},\\mathbf{\\hat{y}})$ is usually non-convex and non-differentiable, e.g. the Hamming loss. So the empirical risk is defined in terms of the surrogate hinge loss $H_i(\\mathbf{w}) = \\max_{\\mathbf{y} \\in \\mathcal{Y}} \\Delta(\\mathbf{y}_i,\\mathbf{y}) - \\langle \\mathbf{w}, \\Psi_i(\\mathbf{y}) \\rangle $, which is an upper bound of the user defined loss. Here $\\Psi_i(\\mathbf{y}) = \\Phi(\\mathbf{x}_i,\\mathbf{y}_i) - \\Phi(\\mathbf{x}_i,\\mathbf{y})$. 
The training objective is given by\n$$\n\\min_{\\mathbf{w}} \\frac{\\lambda}{2} ||\\mathbf{w}||^2 + \\frac{1}{N} \\sum_{i=1}^N H_i(\\mathbf{w}). \n$$\nIn Shogun's factor graph model, the corresponding implemented functions are:\n\n\n<a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStructuredModel.html#a15bd99e15bbf0daa8a727d03dbbf4bcd\">FactorGraphModel::get_joint_feature_vector()</a> $\\longleftrightarrow \\Phi(\\mathbf{x}_i,\\mathbf{y})$ \n\n\n<a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CFactorGraphModel.html#a36665cfdd7ea2dfcc9b3c590947fe67f\">FactorGraphModel::argmax()</a> $\\longleftrightarrow H_i(\\mathbf{w})$\n\n\n<a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CFactorGraphModel.html#a17dac99e933f447db92482a6dce8489b\">FactorGraphModel::delta_loss()</a> $\\longleftrightarrow \\Delta(\\mathbf{y}_i,\\mathbf{y})$\n\n\nExperiment: OCR\nShow Data\nFirst of all, we load the OCR data from a prepared mat file. The raw data can be downloaded from <a href=\"http://www.seas.upenn.edu/~taskar/ocr/\">http://www.seas.upenn.edu/~taskar/ocr/</a>. It has 6876 handwritten words with an average length of 8 letters from 150 different persons. Each letter is rasterized into a binary image of size 16 by 8 pixels. Thus, each $\\mathbf{y}$ is a chain, and each node has 26 possible states denoting ${a,\\cdots,z}$.", "%pylab inline\n%matplotlib inline\nimport os\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')\nimport numpy as np\nimport scipy.io\n\ndataset = scipy.io.loadmat(os.path.join(SHOGUN_DATA_DIR, 'ocr/ocr_taskar.mat'))\n# patterns for training\np_tr = dataset['patterns_train']\n# patterns for testing\np_ts = dataset['patterns_test']\n# labels for training\nl_tr = dataset['labels_train']\n# labels for testing\nl_ts = dataset['labels_test']\n\n# feature dimension\nn_dims = p_tr[0,0].shape[0]\n# number of states\nn_stats = 26\n# number of training samples\nn_tr_samples = p_tr.shape[1]\n# number of testing samples\nn_ts_samples = p_ts.shape[1]", "Few examples of the handwritten words are shown below. Note that the first capitalized letter has been removed.", "import matplotlib.pyplot as plt\n\ndef show_word(patterns, index):\n \"\"\"show a word with padding\"\"\"\n plt.rc('image', cmap='binary')\n letters = patterns[0,index][:128,:]\n n_letters = letters.shape[1]\n for l in xrange(n_letters):\n lett = np.transpose(np.reshape(letters[:,l], (8,16)))\n lett = np.hstack((np.zeros((16,1)), lett, np.zeros((16,1))))\n lett = np.vstack((np.zeros((1,10)), lett, np.zeros((1,10))))\n subplot(1,n_letters,l+1)\n imshow(lett)\n plt.xticks(())\n plt.yticks(())\n plt.tight_layout()\n\nshow_word(p_tr, 174)\n\nshow_word(p_tr, 471)\n\nshow_word(p_tr, 57)", "Define Factor Types and Build Factor Graphs\nLet's define 4 factor types, such that a word will be able to be modeled as a chain graph.\n\nThe unary factor type will be used to define unary potentials that capture the appearance likelihoods of each letter. In our case, each letter has $16 \\times 8$ pixels, thus there are $(16 \\times 8 + 1) \\times 26$ parameters. Here the additional bits in the parameter vector are bias terms. One for each state. \nThe pairwise factor type will be used to define pairwise potentials between each pair of letters. This type in fact gives the Potts potentials. There are $26 \\times 26$ parameters. \nThe bias factor type for the first letter is a compensation factor type, since the interaction is one-sided. 
So there are $26$ parameters to be learned.\nThe bias factor type for the last letter, which has the same intuition as the last item. There are also $26$ parameters.\n\nPutting all parameters together, the global parameter vector $\\mathbf{w}$ has length $4082$.", "from shogun import TableFactorType\n\n# unary, type_id = 0\ncards_u = np.array([n_stats], np.int32)\nw_gt_u = np.zeros(n_stats*n_dims)\nfac_type_u = TableFactorType(0, cards_u, w_gt_u)\n\n# pairwise, type_id = 1\ncards = np.array([n_stats,n_stats], np.int32)\nw_gt = np.zeros(n_stats*n_stats)\nfac_type = TableFactorType(1, cards, w_gt)\n\n# first bias, type_id = 2\ncards_s = np.array([n_stats], np.int32)\nw_gt_s = np.zeros(n_stats)\nfac_type_s = TableFactorType(2, cards_s, w_gt_s)\n\n# last bias, type_id = 3\ncards_t = np.array([n_stats], np.int32)\nw_gt_t = np.zeros(n_stats)\nfac_type_t = TableFactorType(3, cards_t, w_gt_t)\n\n# all initial parameters\nw_all = [w_gt_u,w_gt,w_gt_s,w_gt_t]\n\n# all factor types\nftype_all = [fac_type_u,fac_type,fac_type_s,fac_type_t]", "Next, we write a function to construct the factor graphs and prepare labels for training. For each factor graph instance, the structure is a chain but the number of nodes and edges depend on the number of letters, where unary factors will be added for each letter, pairwise factors will be added for each pair of neighboring letters. Besides, the first and last letter will get an additional bias factor respectively.", "def prepare_data(x, y, ftype, num_samples):\n \"\"\"prepare FactorGraphFeatures and FactorGraphLabels \"\"\"\n from shogun import Factor, TableFactorType, FactorGraph\n from shogun import FactorGraphObservation, FactorGraphLabels, FactorGraphFeatures\n\n samples = FactorGraphFeatures(num_samples)\n labels = FactorGraphLabels(num_samples)\n\n for i in xrange(num_samples):\n n_vars = x[0,i].shape[1]\n data = x[0,i].astype(np.float64)\n\n vc = np.array([n_stats]*n_vars, np.int32)\n fg = FactorGraph(vc)\n\n # add unary factors\n for v in xrange(n_vars):\n datau = data[:,v]\n vindu = np.array([v], np.int32)\n facu = Factor(ftype[0], vindu, datau)\n fg.add_factor(facu)\n\n # add pairwise factors\n for e in xrange(n_vars-1):\n datap = np.array([1.0])\n vindp = np.array([e,e+1], np.int32)\n facp = Factor(ftype[1], vindp, datap)\n fg.add_factor(facp)\n\n # add bias factor to first letter\n datas = np.array([1.0])\n vinds = np.array([0], np.int32)\n facs = Factor(ftype[2], vinds, datas)\n fg.add_factor(facs)\n\n # add bias factor to last letter\n datat = np.array([1.0])\n vindt = np.array([n_vars-1], np.int32)\n fact = Factor(ftype[3], vindt, datat)\n fg.add_factor(fact)\n\n # add factor graph\n samples.add_sample(fg)\n\n # add corresponding label\n states_gt = y[0,i].astype(np.int32)\n states_gt = states_gt[0,:]; # mat to vector\n loss_weights = np.array([1.0/n_vars]*n_vars)\n fg_obs = FactorGraphObservation(states_gt, loss_weights)\n labels.add_label(fg_obs)\n\n return samples, labels\n\n# prepare training pairs (factor graph, node states)\nn_tr_samples = 350 # choose a subset of training data to avoid time out on buildbot\nsamples, labels = prepare_data(p_tr, l_tr, ftype_all, n_tr_samples)", "An example of graph structure is visualized as below, from which you may have a better sense how a factor graph being built. 
Note that different colors are used to represent different factor types.", "try:\n import networkx as nx # pip install networkx\nexcept ImportError:\n import pip\n pip.main(['install', '--user', 'networkx'])\n import networkx as nx\n\nimport matplotlib.pyplot as plt\n\n# create a graph\nG = nx.Graph()\nnode_pos = {}\n\n# add variable nodes, assuming there are 3 letters\nG.add_nodes_from(['v0','v1','v2'])\nfor i in xrange(3):\n node_pos['v%d' % i] = (2*i,1)\n\n# add factor nodes\nG.add_nodes_from(['F0','F1','F2','F01','F12','Fs','Ft'])\nfor i in xrange(3):\n node_pos['F%d' % i] = (2*i,1.006)\n \nfor i in xrange(2):\n node_pos['F%d%d' % (i,i+1)] = (2*i+1,1)\n \nnode_pos['Fs'] = (-1,1)\nnode_pos['Ft'] = (5,1)\n\n# add edges to connect variable nodes and factor nodes\nG.add_edges_from([('v%d' % i,'F%d' % i) for i in xrange(3)])\nG.add_edges_from([('v%d' % i,'F%d%d' % (i,i+1)) for i in xrange(2)])\nG.add_edges_from([('v%d' % (i+1),'F%d%d' % (i,i+1)) for i in xrange(2)])\nG.add_edges_from([('v0','Fs'),('v2','Ft')])\n\n# draw graph\nfig, ax = plt.subplots(figsize=(6,2))\nnx.draw_networkx_nodes(G,node_pos,nodelist=['v0','v1','v2'],node_color='white',node_size=700,ax=ax)\nnx.draw_networkx_nodes(G,node_pos,nodelist=['F0','F1','F2'],node_color='yellow',node_shape='s',node_size=300,ax=ax)\nnx.draw_networkx_nodes(G,node_pos,nodelist=['F01','F12'],node_color='blue',node_shape='s',node_size=300,ax=ax)\nnx.draw_networkx_nodes(G,node_pos,nodelist=['Fs'],node_color='green',node_shape='s',node_size=300,ax=ax)\nnx.draw_networkx_nodes(G,node_pos,nodelist=['Ft'],node_color='purple',node_shape='s',node_size=300,ax=ax)\nnx.draw_networkx_edges(G,node_pos,alpha=0.7)\nplt.axis('off')\nplt.tight_layout()", "Training\nNow we can create the factor graph model and start training. We will use the tree max-product belief propagation to do MAP inference.", "from shogun import FactorGraphModel, TREE_MAX_PROD\n\n# create model and register factor types\nmodel = FactorGraphModel(samples, labels, TREE_MAX_PROD)\nmodel.add_factor_type(ftype_all[0])\nmodel.add_factor_type(ftype_all[1])\nmodel.add_factor_type(ftype_all[2])\nmodel.add_factor_type(ftype_all[3])", "In Shogun, we implemented several batch solvers and online solvers. Let's first try to train the model using a batch solver. We choose the dual bundle method solver (<a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDualLibQPBMSOSVM.html\">DualLibQPBMSOSVM</a>) [2], since in practice it is slightly faster than the primal n-slack cutting plane solver (<a a href=\"http://www.shogun-toolbox.org/doc/en/latest/PrimalMosekSOSVM_8h.html\">PrimalMosekSOSVM</a>) [3]. However, it still will take a while until convergence. Briefly, in each iteration, a gradually tighter piece-wise linear lower bound of the objective function will be constructed by adding more cutting planes (most violated constraints), then the approximate QP will be solved. Finding a cutting plane involves calling the max oracle $H_i(\\mathbf{w})$ and in average $N$ calls are required in an iteration. This is basically why the training is time consuming.", "from shogun import DualLibQPBMSOSVM\nfrom shogun import BmrmStatistics\nimport pickle\nimport time\n\n# create bundle method SOSVM, there are few variants can be chosen\n# BMRM, Proximal Point BMRM, Proximal Point P-BMRM, NCBM\n# usually the default one i.e. 
BMRM is good enough\n# lambda is set to 1e-2\nbmrm = DualLibQPBMSOSVM(model, labels, 0.01)\n\nbmrm.set_TolAbs(20.0)\nbmrm.set_verbose(True)\nbmrm.set_store_train_info(True)\n \n# train\nt0 = time.time()\nbmrm.train()\nt1 = time.time()\n\nw_bmrm = bmrm.get_w()\n\nprint \"BMRM took\", t1 - t0, \"seconds.\"", "Let's check the duality gap to see if the training has converged. We aim at minimizing the primal problem while maximizing the dual problem. By the weak duality theorem, the optimal value of the primal problem is always greater than or equal to dual problem. Thus, we could expect the duality gap will decrease during the time. A relative small and stable duality gap may indicate the convergence. In fact, the gap doesn't have to become zero, since we know it is not far away from the local minima.", "import matplotlib.pyplot as plt\nfig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12,4))\n\nprimal_bmrm = bmrm.get_helper().get_primal_values()\ndual_bmrm = bmrm.get_result().get_hist_Fd_vector()\n\nlen_iter = min(primal_bmrm.size, dual_bmrm.size)\nprimal_bmrm = primal_bmrm[1:len_iter]\ndual_bmrm = dual_bmrm[1:len_iter]\n\n# plot duality gaps\nxs = range(dual_bmrm.size)\naxes[0].plot(xs, (primal_bmrm-dual_bmrm), label='duality gap')\naxes[0].set_xlabel('iteration')\naxes[0].set_ylabel('duality gap')\naxes[0].legend(loc=1)\naxes[0].set_title('duality gaps');\naxes[0].grid(True)\n\n# plot primal and dual values\nxs = range(dual_bmrm.size-1)\naxes[1].plot(xs, primal_bmrm[1:], label='primal')\naxes[1].plot(xs, dual_bmrm[1:], label='dual')\naxes[1].set_xlabel('iteration')\naxes[1].set_ylabel('objective')\naxes[1].legend(loc=1)\naxes[1].set_title('primal vs dual');\naxes[1].grid(True)", "There are other statitics may also be helpful to check if the solution is good or not, such as the number of cutting planes, from which we may have a sense how tight the piece-wise lower bound is. In general, the number of cutting planes should be much less than the dimension of the parameter vector.", "# statistics\nbmrm_stats = bmrm.get_result()\nnCP = bmrm_stats.nCP\nnzA = bmrm_stats.nzA\n\nprint 'number of cutting planes: %d' % nCP\nprint 'number of active cutting planes: %d' % nzA", "In our case, we have 101 active cutting planes, which is much less than 4082, i.e. the number of parameters. We could expect a good model by looking at these statistics. Now come to the online solvers. Unlike the cutting plane algorithms re-optimizes over all the previously added dual variables, an online solver will update the solution based on a single point. This difference results in a faster convergence rate, i.e. less oracle calls, please refer to Table 1 in [4] for more detail. Here, we use the stochastic subgradient descent (<a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStochasticSOSVM.html\">StochasticSOSVM</a>) to compare with the BMRM algorithm shown before.", "from shogun import StochasticSOSVM\n\n# the 3rd parameter is do_weighted_averaging, by turning this on, \n# a possibly faster convergence rate may be achieved.\n# the 4th parameter controls outputs of verbose training information\nsgd = StochasticSOSVM(model, labels, True, True)\n\nsgd.set_num_iter(100)\nsgd.set_lambda(0.01)\n \n# train\nt0 = time.time()\nsgd.train()\nt1 = time.time()\n \nw_sgd = sgd.get_w()\n \nprint \"SGD took\", t1 - t0, \"seconds.\"", "We compare the SGD and BMRM in terms of the primal objectives versus effective passes. 
We first plot the training progress (until both algorithms converge) and then zoom in to check the first 100 passes. In order to make a fair comparison, we set the regularization constant to 1e-2 for both algorithms.", "fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12,4))\n\nprimal_sgd = sgd.get_helper().get_primal_values()\n\nxs = range(dual_bmrm.size-1)\naxes[0].plot(xs, primal_bmrm[1:], label='BMRM')\naxes[0].plot(range(99), primal_sgd[1:100], label='SGD')\naxes[0].set_xlabel('effecitve passes')\naxes[0].set_ylabel('primal objective')\naxes[0].set_title('whole training progress')\naxes[0].legend(loc=1)\naxes[0].grid(True)\n\naxes[1].plot(range(99), primal_bmrm[1:100], label='BMRM')\naxes[1].plot(range(99), primal_sgd[1:100], label='SGD')\naxes[1].set_xlabel('effecitve passes')\naxes[1].set_ylabel('primal objective')\naxes[1].set_title('first 100 effective passes')\naxes[1].legend(loc=1)\naxes[1].grid(True)", "As is shown above, the SGD solver uses less oracle calls to get to converge. Note that the timing is 2 times slower than they actually need, since there are additional computations of primal objective and training error in each pass. The training errors of both algorithms for each pass are shown in below.", "fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12,4))\n\nterr_bmrm = bmrm.get_helper().get_train_errors()\nterr_sgd = sgd.get_helper().get_train_errors()\n\nxs = range(terr_bmrm.size-1)\naxes[0].plot(xs, terr_bmrm[1:], label='BMRM')\naxes[0].plot(range(99), terr_sgd[1:100], label='SGD')\naxes[0].set_xlabel('effecitve passes')\naxes[0].set_ylabel('training error')\naxes[0].set_title('whole training progress')\naxes[0].legend(loc=1)\naxes[0].grid(True)\n\naxes[1].plot(range(99), terr_bmrm[1:100], label='BMRM')\naxes[1].plot(range(99), terr_sgd[1:100], label='SGD')\naxes[1].set_xlabel('effecitve passes')\naxes[1].set_ylabel('training error')\naxes[1].set_title('first 100 effective passes')\naxes[1].legend(loc=1)\naxes[1].grid(True)", "Interestingly, the training errors of SGD solver are lower than BMRM's in first 100 passes, but in the end the BMRM solver obtains a better training performance. A probable explanation is that BMRM uses very limited number of cutting planes at beginning, which form a poor approximation of the objective function. As the number of cutting planes increasing, we got a tighter piecewise lower bound, thus improve the performance. In addition, we would like to show the pairwise weights, which may learn important co-occurrances of letters. The hinton diagram is a wonderful tool for visualizing 2D data, in which positive and negative values are represented by white and black squares, respectively, and the size of each square represents the magnitude of each value. In our case, a smaller number i.e. 
a large black square indicates the two letters tend to coincide.", "def hinton(matrix, max_weight=None, ax=None):\n \"\"\"Draw Hinton diagram for visualizing a weight matrix.\"\"\"\n ax = ax if ax is not None else plt.gca()\n\n if not max_weight:\n max_weight = 2**np.ceil(np.log(np.abs(matrix).max())/np.log(2))\n\n ax.patch.set_facecolor('gray')\n ax.set_aspect('equal', 'box')\n ax.xaxis.set_major_locator(plt.NullLocator())\n ax.yaxis.set_major_locator(plt.NullLocator())\n\n for (x,y),w in np.ndenumerate(matrix):\n color = 'white' if w > 0 else 'black'\n size = np.sqrt(np.abs(w))\n rect = plt.Rectangle([x - size / 2, y - size / 2], size, size,\n facecolor=color, edgecolor=color)\n ax.add_patch(rect)\n\n ax.autoscale_view()\n ax.invert_yaxis()\n\n# get pairwise parameters, also accessible from\n# w[n_dims*n_stats:n_dims*n_stats+n_stats*n_stats]\nmodel.w_to_fparams(w_sgd) # update factor parameters\nw_p = ftype_all[1].get_w()\nw_p = np.reshape(w_p,(n_stats,n_stats))\nhinton(w_p)", "Inference\nNext, we show how to do inference with the learned model parameters for a given data point.", "# get testing data\nsamples_ts, labels_ts = prepare_data(p_ts, l_ts, ftype_all, n_ts_samples)\n\nfrom shogun import FactorGraphFeatures, FactorGraphObservation, TREE_MAX_PROD, MAPInference\n\n# get a factor graph instance from test data\nfg0 = samples_ts.get_sample(100)\nfg0.compute_energies()\nfg0.connect_components()\n\n# create a MAP inference using tree max-product\ninfer_met = MAPInference(fg0, TREE_MAX_PROD)\ninfer_met.inference()\n\n# get inference results\ny_pred = infer_met.get_structured_outputs()\ny_truth = FactorGraphObservation.obtain_from_generic(labels_ts.get_label(100))\nprint y_pred.get_data()\nprint y_truth.get_data()", "Evaluation\nIn the end, we check average training error and average testing error. The evaluation can be done by two methods. We can either use the apply() function in the structured output machine or use the <a href=\"http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSOSVMHelper.html\">SOSVMHelper</a>.", "from shogun import LabelsFactory, SOSVMHelper\n\n# training error of BMRM method\nbmrm.set_w(w_bmrm)\nmodel.w_to_fparams(w_bmrm)\nlbs_bmrm = bmrm.apply()\nacc_loss = 0.0\nave_loss = 0.0\nfor i in xrange(n_tr_samples):\n\ty_pred = lbs_bmrm.get_label(i)\n\ty_truth = labels.get_label(i)\n\tacc_loss = acc_loss + model.delta_loss(y_truth, y_pred)\n\nave_loss = acc_loss / n_tr_samples\nprint('BMRM: Average training error is %.4f' % ave_loss)\n\n# training error of stochastic method\nprint('SGD: Average training error is %.4f' % SOSVMHelper.average_loss(w_sgd, model))\n\n# testing error\nbmrm.set_features(samples_ts)\nbmrm.set_labels(labels_ts)\n\nlbs_bmrm_ts = bmrm.apply()\nacc_loss = 0.0\nave_loss_ts = 0.0\n\nfor i in xrange(n_ts_samples):\n\ty_pred = lbs_bmrm_ts.get_label(i)\n\ty_truth = labels_ts.get_label(i)\n\tacc_loss = acc_loss + model.delta_loss(y_truth, y_pred)\n\nave_loss_ts = acc_loss / n_ts_samples\nprint('BMRM: Average testing error is %.4f' % ave_loss_ts)\n\n# testing error of stochastic method\nprint('SGD: Average testing error is %.4f' % SOSVMHelper.average_loss(sgd.get_w(), model))", "References\n[1] Kschischang, F. R., B. J. Frey, and H.-A. Loeliger, Factor graphs and the sum-product algorithm, IEEE Transactions on Information Theory 2001.\n[2] Teo, C.H., Vishwanathan, S.V.N, Smola, A. 
and Le, Q.V., Bundle Methods for Regularized Risk Minimization, JMLR 2010.\n[3] Tsochantaridis, I., Hofmann, T., Joachims, T., Altun, Y., Support Vector Machine Learning for Interdependent and Structured Output Spaces, ICML 2004.\n[4] Lacoste-Julien, S., Jaggi, M., Schmidt, M., Pletscher, P., Block-Coordinate Frank-Wolfe Optimization for Structural SVMs, ICML 2013." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
acmiyaguchi/data-pipeline
reports/engagement_ratio/MauDau.ipynb
mpl-2.0
[ "Overall Firefox Engagement Ratio\nCompute the Engagement Ratio for the overall Firefox population as described in Bug 1240849. The resulting data is shown on the Firefox Dashboard, and the more granular MAU and DAU values can be viewed via the Diagnostic Data Viewer.\nThe actual Daily Active Users (DAU) and Monthly Active Users (MAU) computations are defined in standards.py in the python_moztelemetry repo.", "from pyspark.sql import SQLContext\nfrom pyspark.sql.types import *\nfrom datetime import datetime as _datetime, timedelta, date\nimport boto3\nimport botocore\nimport csv\nimport os.path\n\nbucket = \"telemetry-parquet\"\nprefix = \"main_summary/v3\"\n%time dataset = sqlContext.read.load(\"s3://{}/{}\".format(bucket, prefix), \"parquet\")", "How many cores are we running on?", "sc.defaultParallelism", "And what do the underlying records look like?", "dataset.printSchema()", "We want to incrementally update the data, re-computing any values that are missing or for which data is still arriving. Define that logic here.", "def fmt(the_date, date_format=\"%Y%m%d\"):\n return _datetime.strftime(the_date, date_format)\n\n# Our calculations look for activity date reported within\n# a certain time window. If that window has passed, we do\n# not need to re-compute data for that period.\ndef should_be_updated(record,\n target_col=\"day\",\n generated_col=\"generated_on\",\n date_format=\"%Y%m%d\"):\n target = _datetime.strptime(record[target_col], date_format)\n generated = _datetime.strptime(record[generated_col], date_format)\n \n # Don't regenerate data that was already updated today.\n today = fmt(_datetime.utcnow(), date_format)\n if record[generated_col] >= today:\n return False\n \n diff = generated - target\n return diff.days <= 10\n\n\nfrom moztelemetry.standards import filter_date_range, count_distinct_clientids\n\n# Similar to the version in standards.py, but uses subsession_start_date\n# instead of activityTimestamp\ndef dau(dataframe, target_day, future_days=10, date_format=\"%Y%m%d\"):\n \"\"\"Compute Daily Active Users (DAU) from the Executive Summary dataset.\n See https://bugzilla.mozilla.org/show_bug.cgi?id=1240849\n \"\"\"\n target_day_date = _datetime.strptime(target_day, date_format)\n min_activity = _datetime.strftime(target_day_date, \"%Y-%m-%d\")\n max_activity = _datetime.strftime(target_day_date + timedelta(1), \"%Y-%m-%d\")\n act_col = dataframe.subsession_start_date\n\n min_submission = target_day\n max_submission_date = target_day_date + timedelta(future_days)\n max_submission = _datetime.strftime(max_submission_date, date_format)\n sub_col = dataframe.submission_date_s3\n\n filtered = filter_date_range(dataframe, act_col, min_activity, max_activity,\n sub_col, min_submission, max_submission)\n return count_distinct_clientids(filtered)\n\n# Similar to the version in standards.py, but uses subsession_start_date\n# instead of activityTimestamp\ndef mau(dataframe, target_day, past_days=28, future_days=10, date_format=\"%Y%m%d\"):\n \"\"\"Compute Monthly Active Users (MAU) from the Executive Summary dataset.\n See https://bugzilla.mozilla.org/show_bug.cgi?id=1240849\n \"\"\"\n target_day_date = _datetime.strptime(target_day, date_format)\n\n # Compute activity over `past_days` days leading up to target_day\n min_activity_date = target_day_date - timedelta(past_days)\n min_activity = _datetime.strftime(min_activity_date, \"%Y-%m-%d\")\n max_activity = _datetime.strftime(target_day_date + timedelta(1), \"%Y-%m-%d\")\n act_col = dataframe.subsession_start_date\n\n 
min_submission = _datetime.strftime(min_activity_date, date_format)\n max_submission_date = target_day_date + timedelta(future_days)\n max_submission = _datetime.strftime(max_submission_date, date_format)\n sub_col = dataframe.submission_date_s3\n\n filtered = filter_date_range(dataframe, act_col, min_activity, max_activity,\n sub_col, min_submission, max_submission)\n return count_distinct_clientids(filtered)\n\n# Identify all missing days, or days that have not yet passed\n# the \"still reporting in\" threshold (as of 2016-03-17, that is\n# defined as 10 days).\ndef update_engagement_csv(dataset, old_filename, new_filename, \n cutoff_days=30, date_format=\"%Y%m%d\"):\n cutoff_date = _datetime.utcnow() - timedelta(cutoff_days)\n cutoff = fmt(cutoff_date, date_format)\n print \"Cutoff date: {}\".format(cutoff)\n\n fields = [\"day\", \"dau\", \"mau\", \"generated_on\"]\n\n should_write_header = True\n potential_updates = {}\n # Carry over rows we won't touch\n if os.path.exists(old_filename):\n with open(old_filename) as csv_old:\n reader = csv.DictReader(csv_old)\n with open(new_filename, \"w\") as csv_new:\n writer = csv.DictWriter(csv_new, fields)\n writer.writeheader()\n should_write_header = False\n for row in reader:\n if row['day'] < cutoff:\n writer.writerow(row)\n else:\n potential_updates[row['day']] = row\n\n with open(new_filename, \"a\") as csv_new:\n writer = csv.DictWriter(csv_new, fields)\n if should_write_header:\n writer.writeheader()\n\n for i in range(cutoff_days, 0, -1):\n target_day = fmt(_datetime.utcnow() - timedelta(i), date_format)\n if target_day in potential_updates and not should_be_updated(potential_updates[target_day]):\n # It's fine as-is.\n writer.writerow(potential_updates[target_day])\n else:\n # Update it.\n print \"We should update data for {}\".format(target_day)\n record = {\"day\": target_day, \"generated_on\": fmt(_datetime.utcnow(), date_format)}\n print \"Starting dau {} at {}\".format(target_day, _datetime.utcnow())\n record[\"dau\"] = dau(dataset, target_day)\n print \"Finished dau {} at {}\".format(target_day, _datetime.utcnow())\n print \"Starting mau {} at {}\".format(target_day, _datetime.utcnow())\n record[\"mau\"] = mau(dataset, target_day)\n print \"Finished mau {} at {}\".format(target_day, _datetime.utcnow())\n writer.writerow(record)", "Fetch existing data from S3\nAttempt to fetch an existing data file from S3. If found, update it incrementally. Otherwise, re-compute the entire dataset.", "from boto3.s3.transfer import S3Transfer\ndata_bucket = \"net-mozaws-prod-us-west-2-pipeline-analysis\"\nengagement_basename = \"engagement_ratio.csv\"\nnew_engagement_basename = \"engagement_ratio.{}.csv\".format(_datetime.strftime(_datetime.utcnow(), \"%Y%m%d\"))\ns3path = \"mreid/maudau\"\nengagement_key = \"{}/{}\".format(s3path, engagement_basename)\n\nclient = boto3.client('s3', 'us-west-2')\ntransfer = S3Transfer(client)\n\ntry:\n transfer.download_file(data_bucket, engagement_key, engagement_basename)\nexcept botocore.exceptions.ClientError as e:\n # If the file wasn't there, that's ok. 
Otherwise, abort!\n if e.response['Error']['Code'] != \"404\":\n raise e\n else:\n print \"Did not find an existing file at '{}'\".format(engagement_key)\n\n\n# reorganize dataset\ndataset = dataset.select(dataset.client_id.alias('clientId'), 'subsession_start_date', 'submission_date_s3')\n\nupdate_engagement_csv(dataset, engagement_basename, new_engagement_basename)", "Update data on S3\nNow we have an updated dataset on the local filesystem.\nSince it is so tiny, we keep a date-stamped backup of each dataset in addition to the \"latest\" file.\nUpload the updated file back to S3, as well as relaying it to the S3 bucket that automatically relays to the dashboard server. This final upload appears in the Firefox Dashboard data dir as engagement_ratio.csv.", "## Upload the updated csv file to S3\n\n# Update the day-specific file:\nnew_s3_name = \"{}/{}\".format(s3path, new_engagement_basename)\ntransfer.upload_file(new_engagement_basename, data_bucket, new_s3_name)\n\n# Update the \"main\" file\ntransfer.upload_file(new_engagement_basename, data_bucket, engagement_key)\n\n# Update the dashboard file\ndash_bucket = \"net-mozaws-prod-metrics-data\"\ndash_s3_name = \"firefox-dashboard/{}\".format(engagement_basename)\ntransfer.upload_file(new_engagement_basename, dash_bucket, dash_s3_name,\n extra_args={'ACL': 'bucket-owner-full-control'})" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yongtang/tensorflow
tensorflow/compiler/xla/g3doc/tutorials/autoclustering_xla.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Classifying CIFAR-10 with XLA\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/xla/tutorials/autoclustering_xla\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/tutorials/autoclustering_xla.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/tutorials/autoclustering_xla.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/compiler/xla/g3doc/tutorials/autoclustering_xla.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nThis tutorial trains a TensorFlow model to classify the CIFAR-10 dataset, and we compile it using XLA.\nLoad and normalize the dataset using the TensorFlow Datasets API:", "!pip install tensorflow_datasets\n\nimport tensorflow as tf\nimport tensorflow_datasets as tfds\n\n# Check that GPU is available: cf. 
https://colab.research.google.com/notebooks/gpu.ipynb\nassert(tf.test.gpu_device_name())\n\ntf.keras.backend.clear_session()\ntf.config.optimizer.set_jit(False) # Start with XLA disabled.\n\ndef load_data():\n result = tfds.load('cifar10', batch_size = -1)\n (x_train, y_train) = result['train']['image'],result['train']['label']\n (x_test, y_test) = result['test']['image'],result['test']['label']\n \n x_train = x_train.numpy().astype('float32') / 256\n x_test = x_test.numpy().astype('float32') / 256\n\n # Convert class vectors to binary class matrices.\n y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)\n y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)\n return ((x_train, y_train), (x_test, y_test))\n\n(x_train, y_train), (x_test, y_test) = load_data()", "We define the model, adapted from the Keras CIFAR-10 example:", "def generate_model():\n return tf.keras.models.Sequential([\n tf.keras.layers.Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]),\n tf.keras.layers.Activation('relu'),\n tf.keras.layers.Conv2D(32, (3, 3)),\n tf.keras.layers.Activation('relu'),\n tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),\n tf.keras.layers.Dropout(0.25),\n\n tf.keras.layers.Conv2D(64, (3, 3), padding='same'),\n tf.keras.layers.Activation('relu'),\n tf.keras.layers.Conv2D(64, (3, 3)),\n tf.keras.layers.Activation('relu'),\n tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),\n tf.keras.layers.Dropout(0.25),\n\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(512),\n tf.keras.layers.Activation('relu'),\n tf.keras.layers.Dropout(0.5),\n tf.keras.layers.Dense(10),\n tf.keras.layers.Activation('softmax')\n ])\n\nmodel = generate_model()", "We train the model using the\nRMSprop\noptimizer:", "def compile_model(model):\n opt = tf.keras.optimizers.RMSprop(learning_rate=0.0001, decay=1e-6)\n model.compile(loss='categorical_crossentropy',\n optimizer=opt,\n metrics=['accuracy'])\n return model\n\nmodel = compile_model(model)\n\ndef train_model(model, x_train, y_train, x_test, y_test, epochs=25):\n model.fit(x_train, y_train, batch_size=256, epochs=epochs, validation_data=(x_test, y_test), shuffle=True)\n\ndef warmup(model, x_train, y_train, x_test, y_test):\n # Warm up the JIT, we do not wish to measure the compilation time.\n initial_weights = model.get_weights()\n train_model(model, x_train, y_train, x_test, y_test, epochs=1)\n model.set_weights(initial_weights)\n\nwarmup(model, x_train, y_train, x_test, y_test)\n%time train_model(model, x_train, y_train, x_test, y_test)\n\nscores = model.evaluate(x_test, y_test, verbose=1)\nprint('Test loss:', scores[0])\nprint('Test accuracy:', scores[1])", "Now let's train the model again, using the XLA compiler.\nTo enable the compiler in the middle of the application, we need to reset the Keras session.", "# We need to clear the session to enable JIT in the middle of the program.\ntf.keras.backend.clear_session()\ntf.config.optimizer.set_jit(True) # Enable XLA.\nmodel = compile_model(generate_model())\n(x_train, y_train), (x_test, y_test) = load_data()\n\nwarmup(model, x_train, y_train, x_test, y_test)\n%time train_model(model, x_train, y_train, x_test, y_test)", "On a machine with a Titan V GPU and an Intel Xeon E5-2690 CPU the speed up is ~1.17x." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]