Column schema (column name: dtype, min to max):

hexsha: stringlengths, 40 to 40
size: int64, 6 to 14.9M
ext: stringclasses, 1 value
lang: stringclasses, 1 value
max_stars_repo_path: stringlengths, 6 to 260
max_stars_repo_name: stringlengths, 6 to 119
max_stars_repo_head_hexsha: stringlengths, 40 to 41
max_stars_repo_licenses: sequence
max_stars_count: int64, 1 to 191k
max_stars_repo_stars_event_min_datetime: stringlengths, 24 to 24
max_stars_repo_stars_event_max_datetime: stringlengths, 24 to 24
max_issues_repo_path: stringlengths, 6 to 260
max_issues_repo_name: stringlengths, 6 to 119
max_issues_repo_head_hexsha: stringlengths, 40 to 41
max_issues_repo_licenses: sequence
max_issues_count: int64, 1 to 67k
max_issues_repo_issues_event_min_datetime: stringlengths, 24 to 24
max_issues_repo_issues_event_max_datetime: stringlengths, 24 to 24
max_forks_repo_path: stringlengths, 6 to 260
max_forks_repo_name: stringlengths, 6 to 119
max_forks_repo_head_hexsha: stringlengths, 40 to 41
max_forks_repo_licenses: sequence
max_forks_count: int64, 1 to 105k
max_forks_repo_forks_event_min_datetime: stringlengths, 24 to 24
max_forks_repo_forks_event_max_datetime: stringlengths, 24 to 24
avg_line_length: float64, 2 to 1.04M
max_line_length: int64, 2 to 11.2M
alphanum_fraction: float64, 0 to 1
cells: sequence
cell_types: sequence
cell_type_groups: sequence
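Each row of the dump describes one notebook file: its Git blob hash, size, repo/star/issue/fork metadata, line-length statistics, and the notebook body split into the parallel `cells`, `cell_types`, and `cell_type_groups` sequences. As a minimal sketch of how such a dump might be loaded and filtered with pandas, assuming the rows are available as a JSON-lines file at the hypothetical path `notebooks.jsonl` (the path and file format are assumptions, not part of the dump shown here):

```python
import pandas as pd

# Hypothetical path; this dump does not name its source file.
df = pd.read_json("notebooks.jsonl", lines=True)

# Sanity check against the column summary above (hexsha is always 40 chars).
assert df["hexsha"].str.len().eq(40).all()

# Keep notebooks that are plausibly human-readable: small files,
# short maximum line length, and a moderate alphanumeric fraction.
readable = df[
    (df["size"] < 1_000_000)
    & (df["max_line_length"] < 10_000)
    & (df["alphanum_fraction"].between(0.3, 0.9))
]

# Count code vs. markdown cells per notebook from the cell_types sequence.
readable = readable.assign(
    n_code=readable["cell_types"].apply(lambda t: t.count("code")),
    n_markdown=readable["cell_types"].apply(lambda t: t.count("markdown")),
)
print(readable[["max_stars_repo_name", "n_code", "n_markdown"]].head())
```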
hexsha: d01f281dee2ec733e8695351325522c9bb885d13
size: 80,105
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
max_stars_repo_name: muthukumarbala07/watson-machine-learning-samples
max_stars_repo_head_hexsha: ecc66faf7a7c60ca168b9c7ef0bca3c766babb94
max_stars_repo_licenses: [ "Apache-2.0" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
max_issues_repo_name: muthukumarbala07/watson-machine-learning-samples
max_issues_repo_head_hexsha: ecc66faf7a7c60ca168b9c7ef0bca3c766babb94
max_issues_repo_licenses: [ "Apache-2.0" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
max_forks_repo_name: muthukumarbala07/watson-machine-learning-samples
max_forks_repo_head_hexsha: ecc66faf7a7c60ca168b9c7ef0bca3c766babb94
max_forks_repo_licenses: [ "Apache-2.0" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 57.670986
max_line_length: 20,480
alphanum_fraction: 0.728906
[ [ [ "This notebook demonstrates how to perform regression analysis using scikit-learn and the watson-machine-learning-client package.\n\nSome familiarity with Python is helpful. This notebook is compatible with Python 3.7.\n\nYou will use the sample data set, **sklearn.datasets.load_boston** which is available in scikit-learn, to predict house prices.\n\n## Learning goals\n\nIn this notebook, you will learn how to:\n\n- Load a sample data set from ``scikit-learn``\n- Explore data\n- Prepare data for training and evaluation\n- Create a scikit-learn pipeline\n- Train and evaluate a model\n- Store a model in the Watson Machine Learning (WML) repository\n- Deploy a model as Core ML\n\n\n## Contents\n\n1.\t[Set up the environment](#setup)\n2.\t[Load and explore data](#load)\n3.\t[Build a scikit-learn linear regression model](#model)\n4.\t[Set up the WML instance and save the model in the WML repository](#upload)\n5.\t[Deploy the model via Core ML](#deploy)\n6. [Clean up](#cleanup)\n7.\t[Summary and next steps](#summary)", "_____no_output_____" ], [ "<a id=\"setup\"></a>\n## 1. Set up the environment\n\nBefore you use the sample code in this notebook, you must perform the following setup tasks:\n\n- Contact with your Cloud Pack for Data administrator and ask him for your account credentials", "_____no_output_____" ], [ "### Connection to WML\n\nAuthenticate the Watson Machine Learning service on IBM Cloud Pack for Data. You need to provide platform `url`, your `username` and `password`.", "_____no_output_____" ] ], [ [ "username = 'PASTE YOUR USERNAME HERE'\npassword = 'PASTE YOUR PASSWORD HERE'\nurl = 'PASTE THE PLATFORM URL HERE'", "_____no_output_____" ], [ "wml_credentials = {\n \"username\": username,\n \"password\": password,\n \"url\": url,\n \"instance_id\": 'openshift',\n \"version\": '3.5'\n}", "_____no_output_____" ] ], [ [ "### Install and import the `ibm-watson-machine-learning` package\n**Note:** `ibm-watson-machine-learning` documentation can be found <a href=\"http://ibm-wml-api-pyclient.mybluemix.net/\" target=\"_blank\" rel=\"noopener no referrer\">here</a>.", "_____no_output_____" ] ], [ [ "!pip install -U ibm-watson-machine-learning", "_____no_output_____" ], [ "from ibm_watson_machine_learning import APIClient\n\nclient = APIClient(wml_credentials)", "2020-12-08 12:44:04,591 - root - WARNING - scikit-learn version 0.23.2 is not supported. Minimum required version: 0.17. Maximum required version: 0.19.2. Disabling scikit-learn conversion API.\n2020-12-08 12:44:04,653 - root - WARNING - Keras version 2.2.5 detected. Last version known to be fully compatible of Keras is 2.2.4 .\n" ] ], [ [ "### Working with spaces\n\nFirst of all, you need to create a space that will be used for your work. If you do not have space already created, you can use `{PLATFORM_URL}/ml-runtime/spaces?context=icp4data` to create one.\n\n- Click New Deployment Space\n- Create an empty space\n- Go to space `Settings` tab\n- Copy `space_id` and paste it below\n\n**Tip**: You can also use SDK to prepare the space for your work. 
More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd3.5/notebooks/python_sdk/instance-management/Space%20management.ipynb).\n\n**Action**: Assign space ID below", "_____no_output_____" ] ], [ [ "space_id = 'PASTE YOUR SPACE ID HERE'", "_____no_output_____" ] ], [ [ "You can use `list` method to print all existing spaces.", "_____no_output_____" ] ], [ [ "client.spaces.list(limit=10)", "_____no_output_____" ] ], [ [ "To be able to interact with all resources available in Watson Machine Learning, you need to set **space** which you will be using.", "_____no_output_____" ] ], [ [ "client.set.default_space(space_id)", "_____no_output_____" ] ], [ [ "<a id=\"load\"></a>\n## 2. Load and explore data", "_____no_output_____" ], [ "The sample data set contains boston house prices. The data set can be found <a href=\"https://archive.ics.uci.edu/ml/machine-learning-databases/housing/\" target=\"_blank\" rel=\"noopener no referrer\">here</a>.\n\nIn this section, you will learn how to:\n- [2.1 Explore Data](#dataset) \n- [2.2 Check the correlations between predictors and the target](#corr)", "_____no_output_____" ], [ "### 2.1 Explore data<a id=\"dataset\"></a>\n\nIn this subsection, you will perform exploratory data analysis of the boston house prices data set.", "_____no_output_____" ] ], [ [ "!pip install --upgrade scikit-learn==0.23.1 seaborn", "_____no_output_____" ], [ "import sklearn\nfrom sklearn import datasets\nimport pandas as pd\n\nboston_data = datasets.load_boston()", "_____no_output_____" ] ], [ [ "Let's check the names of the predictors.", "_____no_output_____" ] ], [ [ "print(boston_data.feature_names)", "['CRIM' 'ZN' 'INDUS' 'CHAS' 'NOX' 'RM' 'AGE' 'DIS' 'RAD' 'TAX' 'PTRATIO'\n 'B' 'LSTAT']\n" ] ], [ [ "**Tip:** Run `print(boston_data.DESCR)` to view a detailed description of the data set.", "_____no_output_____" ] ], [ [ "print(boston_data.DESCR)", ".. _boston_dataset:\n\nBoston house prices dataset\n---------------------------\n\n**Data Set Characteristics:** \n\n :Number of Instances: 506 \n\n :Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.\n\n :Attribute Information (in order):\n - CRIM per capita crime rate by town\n - ZN proportion of residential land zoned for lots over 25,000 sq.ft.\n - INDUS proportion of non-retail business acres per town\n - CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)\n - NOX nitric oxides concentration (parts per 10 million)\n - RM average number of rooms per dwelling\n - AGE proportion of owner-occupied units built prior to 1940\n - DIS weighted distances to five Boston employment centres\n - RAD index of accessibility to radial highways\n - TAX full-value property-tax rate per $10,000\n - PTRATIO pupil-teacher ratio by town\n - B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town\n - LSTAT % lower status of the population\n - MEDV Median value of owner-occupied homes in $1000's\n\n :Missing Attribute Values: None\n\n :Creator: Harrison, D. and Rubinfeld, D.L.\n\nThis is a copy of UCI ML housing dataset.\nhttps://archive.ics.uci.edu/ml/machine-learning-databases/housing/\n\n\nThis dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.\n\nThe Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic\nprices and the demand for clean air', J. Environ. Economics & Management,\nvol.5, 81-102, 1978. 
Used in Belsley, Kuh & Welsch, 'Regression diagnostics\n...', Wiley, 1980. N.B. Various transformations are used in the table on\npages 244-261 of the latter.\n\nThe Boston house-price data has been used in many machine learning papers that address regression\nproblems. \n \n.. topic:: References\n\n - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.\n - Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.\n\n" ] ], [ [ "Create a pandas DataFrame and display some descriptive statistics.", "_____no_output_____" ] ], [ [ "boston_pd = pd.DataFrame(boston_data.data)\nboston_pd.columns = boston_data.feature_names\nboston_pd['PRICE'] = boston_data.target", "_____no_output_____" ] ], [ [ "The describe method generates summary statistics of numerical predictors.", "_____no_output_____" ] ], [ [ "boston_pd.describe()", "_____no_output_____" ] ], [ [ "### 2.2 Check the correlations between predictors and the target<a id=\"corr\"></a>", "_____no_output_____" ] ], [ [ "import seaborn as sns\n%matplotlib inline\n\ncorr_coeffs = boston_pd.corr()\nsns.heatmap(corr_coeffs, xticklabels=corr_coeffs.columns, yticklabels=corr_coeffs.columns);", "_____no_output_____" ] ], [ [ "<a id=\"model\"></a>\n## 3. Build a scikit-learn linear regression model\n\nIn this section, you will learn how to:\n- [3.1 Split data](#prep)\n- [3.2 Create a scikit-learn pipeline](#pipe)\n- [3.3 Train the model](#train)", "_____no_output_____" ], [ "### 3.1 Split data<a id=\"prep\"></a>\n\nIn this subsection, you will split the data set into: \n- Train data set\n- Test data set", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split\n\nX = boston_pd.drop('PRICE', axis = 1)\ny = boston_pd['PRICE']\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.33, random_state = 5)\n\nprint('Number of training records: ' + str(X_train.shape[0]))\nprint('Number of test records: ' + str(X_test.shape[0]))", "Number of training records: 339\nNumber of test records: 167\n" ] ], [ [ "Your data has been successfully split into two data sets: \n\n- The train data set, which is the largest group, is used for training.\n- The test data set will be used for model evaluation and is used to test the model.", "_____no_output_____" ], [ "### 3.2 Create a scikit-learn pipeline<a id=\"pipe\"></a>", "_____no_output_____" ], [ "In this subsection, you will create a scikit-learn pipeline.", "_____no_output_____" ], [ "First, import the scikit-learn machine learning packages that are needed in the subsequent steps.", "_____no_output_____" ] ], [ [ "from sklearn.pipeline import Pipeline\nfrom sklearn import preprocessing\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.metrics import mean_squared_error", "_____no_output_____" ] ], [ [ "Standardize the features by removing the mean and by scaling to unit variance.", "_____no_output_____" ] ], [ [ "scaler = preprocessing.StandardScaler()", "_____no_output_____" ] ], [ [ "Next, define the regressor you want to use. This notebook uses the Linear Regression model.", "_____no_output_____" ] ], [ [ "lr = LinearRegression()", "_____no_output_____" ] ], [ [ "Build the pipeline. 
A pipeline consists of a transformer (Standard Scaler) and an estimator (Linear Regression model).", "_____no_output_____" ] ], [ [ "pipeline = Pipeline([('scaler', scaler), ('lr', lr)])", "_____no_output_____" ] ], [ [ "### 3.3 Train the model<a id=\"train\"></a>", "_____no_output_____" ], [ "Now, you can use the **pipeline** and **train data** you defined previously to train your SVM model.", "_____no_output_____" ] ], [ [ "model = pipeline.fit(X_train, y_train)", "_____no_output_____" ] ], [ [ "Check the model quality.", "_____no_output_____" ] ], [ [ "y_pred = model.predict(X_test)\nmse = sklearn.metrics.mean_squared_error(y_test, y_pred)\n\nprint('MSE: ' + str(mse))", "MSE: 28.530458765974625\n" ] ], [ [ "Plot the scatter plot of prices vs. predicted prices.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n\nplt.style.use('ggplot')\nplt.title('Predicted prices vs prices')\nplt.ylabel('Prices')\nplt.xlabel('Predicted prices')\nplot = plt.scatter(y_pred, y_test)", "_____no_output_____" ] ], [ [ "**Note:** You can tune your model to achieve better accuracy. To keep this example simple, the tuning section is omitted.", "_____no_output_____" ], [ "<a id=\"upload\"></a>\n## 4. Save the model in the WML repository", "_____no_output_____" ], [ "In this section, you will learn how to use the common Python client to manage your model in the WML repository.", "_____no_output_____" ] ], [ [ "sofware_spec_uid = client.software_specifications.get_id_by_name(\"default_py3.7\")\n\nmetadata = {\n client.repository.ModelMetaNames.NAME: 'Boston house price',\n client.repository.ModelMetaNames.TYPE: 'scikit-learn_0.23',\n client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sofware_spec_uid\n}\n\npublished_model = client.repository.store_model(\n model=model,\n meta_props=metadata,\n training_data=X_train, training_target=y_train)\n", "_____no_output_____" ], [ "model_uid = client.repository.get_model_uid(published_model)", "_____no_output_____" ] ], [ [ "#### Get information about all of the models in the WML repository.", "_____no_output_____" ] ], [ [ "models_details = client.repository.list_models()", "_____no_output_____" ] ], [ [ "<a id=\"deploy\"></a>\n## 5. Deploy the model via Core ML", "_____no_output_____" ], [ "In this section, you will learn how to use the WML client to create a **virtual** deployment via the `Core ML`. 
You will also learn how to use `download_url` to download a Core ML model for your <a href=\"https://developer.apple.com/xcode/\" target=\"_blank\" rel=\"noopener no referrer\">Xcode</a> project.\n\n- [5.1 Create a virtual deployment for the model](#create)\n- [5.2 Download the Core ML file from the deployment](#getdeploy)\n- [5.3 Test the CoreML model](#testcoreML)", "_____no_output_____" ], [ "### 5.1 Create a virtual deployment for the model<a id=\"create\"></a>", "_____no_output_____" ] ], [ [ "metadata = {\n client.deployments.ConfigurationMetaNames.NAME: \"Virtual deployment of Boston model\",\n client.deployments.ConfigurationMetaNames.VIRTUAL: {\"export_format\": \"coreml\"}\n}\n\ncreated_deployment = client.deployments.create(model_uid, meta_props=metadata)", "\n\n#######################################################################################\n\nSynchronous deployment creation for uid: '9b319604-4b55-4a86-8728-51572eeeb761' started\n\n#######################################################################################\n\n\ninitializing.........................\nready\n\n\n------------------------------------------------------------------------------------------------\nSuccessfully finished deployment creation, deployment_uid='29ebee31-849b-4ef1-b201-259dbddeb158'\n------------------------------------------------------------------------------------------------\n\n\n" ] ], [ [ "Now, you can define and print the download endpoint. You can use this endpoint to download the Core ML model.", "_____no_output_____" ], [ "### 5.2 Download the `Core ML` file from the deployment<a id=\"getdeploy\"></a>", "_____no_output_____" ] ], [ [ "client.deployments.list()", "_____no_output_____" ] ], [ [ "<a id=\"score\"></a>\n#### Download the virtual deployment content: Core ML model.", "_____no_output_____" ] ], [ [ "deployment_uid = client.deployments.get_uid(created_deployment)\n\ndeployment_content = client.deployments.download(deployment_uid)", "\n\n----------------------------------------------------------\nSuccessfully downloaded deployment file: mlartifact.tar.gz\n----------------------------------------------------------\n\n\n" ] ], [ [ "Use the code in the cell below to create the download link.", "_____no_output_____" ] ], [ [ "from ibm_watson_machine_learning.utils import create_download_link\n\ncreate_download_link(deployment_content)", "_____no_output_____" ] ], [ [ "**Note:** You can use <a href=\"https://developer.apple.com/xcode/\" target=\"_blank\" rel=\"noopener no referrer\">Xcode</a> to preview the model's metadata (after unzipping). 
", "_____no_output_____" ], [ "### 5.3 Test the `Core ML` model<a id=\"testcoreML\"></a>", "_____no_output_____" ], [ "Use the following steps to run a test against the downloaded Core ML model.", "_____no_output_____" ] ], [ [ "!pip install --upgrade coremltools", "_____no_output_____" ] ], [ [ "Use the ``coremltools`` to load the model and check some basic metadata.", "_____no_output_____" ], [ "First, extract the model.", "_____no_output_____" ] ], [ [ "from ibm_watson_machine_learning.utils import extract_mlmodel_from_archive\n\nextracted_model_path = extract_mlmodel_from_archive('mlartifact.tar.gz', model_uid)", "_____no_output_____" ] ], [ [ "Load the model and check the description.", "_____no_output_____" ] ], [ [ "import coremltools\n\nloaded_model = coremltools.models.MLModel(extracted_model_path)\nprint(loaded_model.get_spec())", "specificationVersion: 1\ndescription {\n input {\n name: \"input\"\n type {\n multiArrayType {\n shape: 13\n dataType: DOUBLE\n }\n }\n }\n output {\n name: \"prediction\"\n type {\n doubleType {\n }\n }\n }\n predictedFeatureName: \"prediction\"\n metadata {\n shortDescription: \"\\'description\\'\"\n userDefined {\n key: \"coremltoolsVersion\"\n value: \"3.4\"\n }\n }\n}\npipelineRegressor {\n pipeline {\n models {\n specificationVersion: 1\n description {\n input {\n name: \"input\"\n type {\n multiArrayType {\n shape: 13\n dataType: DOUBLE\n }\n }\n }\n output {\n name: \"__feature_vector__\"\n type {\n multiArrayType {\n shape: 13\n dataType: DOUBLE\n }\n }\n }\n metadata {\n userDefined {\n key: \"coremltoolsVersion\"\n value: \"3.4\"\n }\n }\n }\n scaler {\n shiftValue: -3.5107058407079643\n shiftValue: -11.233038348082596\n shiftValue: -10.946755162241887\n shiftValue: -0.061946902654867256\n shiftValue: -0.5524333333333333\n shiftValue: -6.2900589970501475\n shiftValue: -67.4339233038348\n shiftValue: -3.7929982300884952\n shiftValue: -9.587020648967552\n shiftValue: -404.9882005899705\n shiftValue: -18.456342182890854\n shiftValue: -359.3829498525074\n shiftValue: -12.5223598820059\n scaleValue: 0.11919939314854636\n scaleValue: 0.04472688940586527\n scaleValue: 0.14990467420150372\n scaleValue: 4.14836050491109\n scaleValue: 8.709158768395854\n scaleValue: 1.4339791968637936\n scaleValue: 0.035440297674478205\n scaleValue: 0.49360314158376223\n scaleValue: 0.11485019638452705\n scaleValue: 0.005946475916321702\n scaleValue: 0.4634385504439216\n scaleValue: 0.011393122149179365\n scaleValue: 0.14172437454317954\n }\n }\n models {\n specificationVersion: 1\n description {\n input {\n name: \"__feature_vector__\"\n type {\n multiArrayType {\n shape: 13\n dataType: DOUBLE\n }\n }\n }\n output {\n name: \"prediction\"\n type {\n doubleType {\n }\n }\n }\n predictedFeatureName: \"prediction\"\n metadata {\n userDefined {\n key: \"coremltoolsVersion\"\n value: \"3.4\"\n }\n }\n }\n glmRegressor {\n weights {\n value: -1.311930314912692\n value: 0.8618774463035517\n value: -0.16719286609046674\n value: 0.1895784329617395\n value: -1.4865858389370386\n value: 2.7913156462931568\n value: -0.3273770336805285\n value: -2.7720409347134205\n value: 2.9756754908489014\n value: -2.2727548977084533\n value: -2.133758688598611\n value: 1.058429930547136\n value: -3.3349540749442603\n }\n offset: 22.537168141592925\n }\n }\n }\n}\n\n" ] ], [ [ "The model looks good and can be used on your iPhone now.", "_____no_output_____" ], [ "<a id=\"cleanup\"></a>\n## 6. 
Clean up ", "_____no_output_____" ], [ "If you want to clean up all created assets:\n- experiments\n- trainings\n- pipelines\n- model definitions\n- models\n- functions\n- deployments\n\nplease follow up this sample [notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd3.5/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb).", "_____no_output_____" ], [ "<a id=\"summary\"></a>\n## 7. Summary and next steps ", "_____no_output_____" ], [ "You successfully completed this notebook! \n \nYou learned how to use scikit-learn to create a Core ML model.\n\nIf you are interested in sample swift application (for iOS), please visit <a href=\"https://github.com/IBM/watson-machine-learning-samples/tree/master/cloud/applications/go-digits\" target=\"_blank\" rel=\"noopener no referrer\">here</a>. \n\nCheck out our <a href=\"https://dataplatform.cloud.ibm.com/docs/content/analyze-data/wml-setup.html\" target=\"_blank\" rel=\"noopener noreferrer\">Online Documentation</a> for more samples, tutorials, documentation, how-tos, and blog posts. ", "_____no_output_____" ], [ "### Author\n\n**Lukasz Cmielowski**, Ph.D., is a Lead Data Scientist at IBM developing enterprise-level applications that substantially increases clients' ability to turn data into actionable knowledge. \n**Jihyoung Kim**, Ph.D., is a Data Scientist at IBM who strives to make data science easy for everyone through Watson Studio.", "_____no_output_____" ], [ "Copyright © 2020, 2021 IBM. This notebook and its source code are released under the terms of the MIT License.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
hexsha: d01f2c950531002079f0d3a9de2f6022bf6c72b9
size: 26,629
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: Packing/AMET_MPIESM_MPI.ipynb
max_stars_repo_name: geek-yang/JointAnalysis
max_stars_repo_head_hexsha: 88dd29d931614fe9dfb3314cb877a31f37333336
max_stars_repo_licenses: [ "Apache-2.0" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: Packing/AMET_MPIESM_MPI.ipynb
max_issues_repo_name: geek-yang/JointAnalysis
max_issues_repo_head_hexsha: 88dd29d931614fe9dfb3314cb877a31f37333336
max_issues_repo_licenses: [ "Apache-2.0" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: Packing/AMET_MPIESM_MPI.ipynb
max_forks_repo_name: geek-yang/JointAnalysis
max_forks_repo_head_hexsha: 88dd29d931614fe9dfb3314cb877a31f37333336
max_forks_repo_licenses: [ "Apache-2.0" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 54.014199
max_line_length: 268
alphanum_fraction: 0.578167
[ [ [ "# Copyright Netherlands eScience Center <br>\n** Function : Computing AMET with Surface & TOA flux** <br>\n** Author : Yang Liu ** <br>\n** First Built : 2019.08.09 ** <br>\n** Last Update : 2019.09.09 ** <br>\nDescription : This notebook aims to compute AMET with TOA/surface flux fields from NorESM model. The NorESM model is launched by NERSC in Blue Action Work Package 3 as coordinated experiments for joint analysis. It contributes to the Deliverable 3.1. <br>\nReturn Values : netCDF4 <br>\nCaveat : The fields used here are post-processed monthly mean fields. Hence there is no accumulation that need to be taken into account.<br>\n\nThe **positive sign** for each variable varies:<br>\n* Latent heat flux (LHF) - downward <br>\n* Sensible heat flux (SHF) - downward <br>\n* Net solar radiation flux at TOA (NTopSol & UTopSol) - downward <br>\n* Net solar radiation flux at surface (NSurfSol) - downward <br>\n* Net longwave radiation flux at surface (NSurfTherm) - downward <br>\n* Net longwave radiation flux at TOA (OLR) - downward <br>", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport numpy as np\nimport sys\nsys.path.append(\"/home/ESLT0068/NLeSC/Computation_Modeling/Bjerknes/Scripts/META\")\nimport scipy as sp\nimport pygrib\nimport time as tttt\nfrom netCDF4 import Dataset,num2date\nimport os\nimport meta.statistics\nimport meta.visualizer", "_____no_output_____" ], [ "# constants\nconstant = {'g' : 9.80616, # gravititional acceleration [m / s2]\n 'R' : 6371009, # radius of the earth [m]\n 'cp': 1004.64, # heat capacity of air [J/(Kg*K)]\n 'Lv': 2264670, # Latent heat of vaporization [J/Kg]\n 'R_dry' : 286.9, # gas constant of dry air [J/(kg*K)]\n 'R_vap' : 461.5, # gas constant for water vapour [J/(kg*K)]\n }", "_____no_output_____" ], [ "################################ Input zone ######################################\n# specify starting and ending time\nstart_year = 1979\nend_year = 2013\n# specify data path\ndatapath = '/home/ESLT0068/WorkFlow/Core_Database_BlueAction_WP3/MPIESM_MPI'\n# specify output path for figures\noutput_path = '/home/ESLT0068/WorkFlow/Core_Database_BlueAction_WP3/AMET_netCDF'\n# ensemble number\nensemble = 10\n# experiment number\nexp = 4\n# example file\n#datapath_example = os.path.join(datapath, 'SHF', 'Amon2d_amip_bac_rg_1_SHF_1979-2013.grb')\n#datapath_example = os.path.join(datapath, 'LHF', 'Amon2d_amip_bac_rg_1_LHF_1979-2013.grb')\n#datapath_example = os.path.join(datapath, 'NSurfSol', 'Amon2d_amip_bac_rg_1_NSurfSol_1979-2014.grb')\n#datapath_example = os.path.join(datapath, 'NTopSol', 'Amon2d_amip_bac_rg_1_DTopSol_1979-2014.grb')\n#datapath_example = os.path.join(datapath, 'UTopSol', 'Amon2d_amip_bac_rg_1_UTopSol_1979-2014.grb')\n#datapath_example = os.path.join(datapath, 'NSurfTherm', 'Amon2d_amip_bac_rg_1_NSurfTherm_1979-2014.grb')\ndatapath_example = os.path.join(datapath, 'OLR', 'Amon2d_amip_bac_rg_1_OLR_1979-2014.grb')\n####################################################################################", "_____no_output_____" ], [ "def var_key_retrieve(datapath, exp_num, ensemble_num):\n # get the path to each datasets\n print (\"Start retrieving datasets of experiment {} ensemble number {}\".format(exp_num+1, ensemble_num))\n # get data path\n if exp_num == 0 : # exp 1\n datapath_slhf = os.path.join(datapath, 'LHF', 'Amon2d_amip_bac_rg_{}_LHF_1979-2013.grb'.format(ensemble_num))\n datapath_sshf = os.path.join(datapath, 'SHF', 'Amon2d_amip_bac_rg_{}_SHF_1979-2013.grb'.format(ensemble_num))\n datapath_ssr = os.path.join(datapath, 
'NSurfSol', 'Amon2d_amip_bac_rg_{}_NSurfSol_1979-2014.grb'.format(ensemble_num))\n datapath_str = os.path.join(datapath, 'NSurfTherm', 'Amon2d_amip_bac_rg_{}_NSurfTherm_1979-2014.grb'.format(ensemble_num))\n datapath_tsr_in = os.path.join(datapath, 'NTopSol', 'Amon2d_amip_bac_rg_{}_DTopSol_1979-2014.grb'.format(ensemble_num))\n datapath_tsr_out = os.path.join(datapath, 'UTopSol', 'Amon2d_amip_bac_rg_{}_UTopSol_1979-2014.grb'.format(ensemble_num))\n datapath_ttr = os.path.join(datapath, 'OLR', 'Amon2d_amip_bac_rg_{}_OLR_1979-2014.grb'.format(ensemble_num))\n elif exp_num == 1:\n datapath_slhf = os.path.join(datapath, 'LHF', 'Amon2d_amip_bac_exp{}_rg_{}_LHF_1979-2013.grb'.format(exp_num+1, ensemble_num))\n datapath_sshf = os.path.join(datapath, 'SHF', 'Amon2d_amip_bac_exp{}_rg_{}_SHF_1979-2013.grb'.format(exp_num+1, ensemble_num))\n datapath_ssr = os.path.join(datapath, 'NSurfSol', 'Amon2d_amip_bac_exp{}_rg_{}_NSurfSol_1979-2014.grb'.format(exp_num+1, ensemble_num))\n datapath_str = os.path.join(datapath, 'NSurfTherm', 'Amon2d_amip_bac_exp{}_rg_{}_NSurfTherm_1979-2014.grb'.format(exp_num+1, ensemble_num))\n datapath_tsr_in = os.path.join(datapath, 'NTopSol', 'Amon2d_amip_bac_exp{}_rg_{}_DTopSol_1979-2014.grb'.format(exp_num+1, ensemble_num))\n datapath_tsr_out = os.path.join(datapath, 'UTopSol', 'Amon2d_amip_bac_exp{}_rg_{}_UTopSol_1979-2014.grb'.format(exp_num+1, ensemble_num))\n datapath_ttr = os.path.join(datapath, 'OLR', 'Amon2d_amip_bac_exp{}_rg_{}_OLR_1979-2014.grb'.format(exp_num+1, ensemble_num))\n else:\n datapath_slhf = os.path.join(datapath, 'LHF', 'Amon2d_amip_bac_exp{}_rg_{}_LHF_1979-2013.grb'.format(exp_num+1, ensemble_num))\n datapath_sshf = os.path.join(datapath, 'SHF', 'Amon2d_amip_bac_exp{}_rg_{}_SHF_1979-2013.grb'.format(exp_num+1, ensemble_num))\n datapath_ssr = os.path.join(datapath, 'NSurfSol', 'Amon2d_amip_bac_exp{}_rg_{}_NSurfSol_1979-2013.grb'.format(exp_num+1, ensemble_num))\n datapath_str = os.path.join(datapath, 'NSurfTherm', 'Amon2d_amip_bac_exp{}_rg_{}_NSurfTherm_1979-2013.grb'.format(exp_num+1, ensemble_num))\n datapath_tsr_in = os.path.join(datapath, 'NTopSol', 'Amon2d_amip_bac_exp{}_rg_{}_DTopSol_1979-2013.grb'.format(exp_num+1, ensemble_num))\n datapath_tsr_out = os.path.join(datapath, 'UTopSol', 'Amon2d_amip_bac_exp{}_rg_{}_UTopSol_1979-2013.grb'.format(exp_num+1, ensemble_num))\n datapath_ttr = os.path.join(datapath, 'OLR', 'Amon2d_amip_bac_exp{}_rg_{}_OLR_1979-2013.grb'.format(exp_num+1, ensemble_num))\n \n # get the variable keys \n grbs_slhf = pygrib.open(datapath_slhf)\n grbs_sshf = pygrib.open(datapath_sshf)\n grbs_ssr = pygrib.open(datapath_ssr)\n grbs_str = pygrib.open(datapath_str)\n grbs_tsr_in = pygrib.open(datapath_tsr_in)\n grbs_tsr_out = pygrib.open(datapath_tsr_out)\n grbs_ttr = pygrib.open(datapath_ttr)\n\n print (\"Retrieving datasets successfully and return the variable key!\")\n return grbs_slhf, grbs_sshf, grbs_ssr, grbs_str, grbs_tsr_in, grbs_tsr_out, grbs_ttr", "_____no_output_____" ], [ "def amet(grbs_slhf, grbs_sshf, grbs_ssr, grbs_str, grbs_tsr_in,\n grbs_tsr_out, grbs_ttr, period_1979_2013, lat, lon):\n # get all the varialbes\n # make sure we know the sign of all the input variables!!!\n # ascending lat\n var_slhf = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) # surface latent heat flux W/m2\n var_sshf = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) # surface sensible heat flux W/m2 \n var_ssr = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float)\n var_str = 
np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float)\n var_tsr_in = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float)\n var_tsr_out = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float)\n var_ttr = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float)\n # load data\n counter = 1\n for i in np.arange(len(period_1979_2013)*12):\n key_slhf = grbs_slhf.message(counter)\n key_sshf = grbs_sshf.message(counter)\n key_ssr = grbs_ssr.message(counter)\n key_str = grbs_str.message(counter)\n key_tsr_in = grbs_tsr_in.message(counter)\n key_tsr_out = grbs_tsr_out.message(counter)\n key_ttr = grbs_ttr.message(counter)\n \n var_slhf[i,:,:] = key_slhf.values\n var_sshf[i,:,:] = key_sshf.values\n var_ssr[i,:,:] = key_ssr.values\n var_str[i,:,:] = key_str.values\n var_tsr_in[i,:,:] = key_tsr_in.values\n var_tsr_out[i,:,:] = key_tsr_out.values\n var_ttr[i,:,:] = key_ttr.values\n \n # counter update\n counter +=1\n \n #size of the grid box\n dx = 2 * np.pi * constant['R'] * np.cos(2 * np.pi * lat /\n 360) / len(lon) \n dy = np.pi * constant['R'] / len(lat)\n # calculate total net energy flux at TOA/surface\n net_flux_surf = var_slhf + var_sshf + var_ssr + var_str\n net_flux_toa = var_tsr_in + var_tsr_out + var_ttr\n net_flux_surf_area = np.zeros(net_flux_surf.shape, dtype=float) # unit W\n net_flux_toa_area = np.zeros(net_flux_toa.shape, dtype=float)\n\n grbs_slhf.close()\n grbs_sshf.close()\n grbs_ssr.close()\n grbs_str.close()\n grbs_tsr_in.close()\n grbs_tsr_out.close()\n grbs_ttr.close()\n \n for i in np.arange(len(lat)):\n # change the unit to terawatt\n net_flux_surf_area[:,i,:] = net_flux_surf[:,i,:]* dx[i] * dy / 1E+12\n net_flux_toa_area[:,i,:] = net_flux_toa[:,i,:]* dx[i] * dy / 1E+12\n \n # take the zonal integral of flux\n net_flux_surf_int = np.sum(net_flux_surf_area,2) / 1000 # PW\n net_flux_toa_int = np.sum(net_flux_toa_area,2) / 1000\n # AMET as the residual of net flux at TOA & surface\n AMET_res_ERAI = np.zeros(net_flux_surf_int.shape)\n for i in np.arange(len(lat)):\n AMET_res_ERAI[:,i] = -(np.sum(net_flux_toa_int[:,0:i+1],1) -\n np.sum(net_flux_surf_int[:,0:i+1],1))\n AMET_res_ERAI = AMET_res_ERAI.reshape(-1,12,len(lat))\n return AMET_res_ERAI", "_____no_output_____" ], [ "def create_netcdf_point (pool_amet, lat, output_path, exp):\n print ('*******************************************************************')\n print ('*********************** create netcdf file*************************')\n print ('*******************************************************************')\n #logging.info(\"Start creating netcdf file for the 2D fields of ERAI at each grid point.\")\n # get the basic dimensions\n ens, year, month, _ = pool_amet.shape\n # wrap the datasets into netcdf file\n # 'NETCDF3_CLASSIC', 'NETCDF3_64BIT', 'NETCDF4_CLASSIC', and 'NETCDF4'\n data_wrap = Dataset(os.path.join(output_path, 'amet_MPIESM_MPI_exp{}.nc'.format(exp+1)),'w',format = 'NETCDF4')\n # create dimensions for netcdf data\n ens_wrap_dim = data_wrap.createDimension('ensemble', ens)\n year_wrap_dim = data_wrap.createDimension('year', year)\n month_wrap_dim = data_wrap.createDimension('month', month)\n lat_wrap_dim = data_wrap.createDimension('latitude', len(lat))\n # create coordinate variable\n ens_wrap_var = data_wrap.createVariable('ensemble',np.int32,('ensemble',))\n year_wrap_var = data_wrap.createVariable('year',np.int32,('year',))\n month_wrap_var = data_wrap.createVariable('month',np.int32,('month',))\n lat_wrap_var = 
data_wrap.createVariable('latitude',np.float32,('latitude',))\n # create the actual 4d variable\n amet_wrap_var = data_wrap.createVariable('amet',np.float64,('ensemble','year','month','latitude'),zlib=True) \n # global attributes\n data_wrap.description = 'Monthly mean atmospheric meridional energy transport'\n # variable attributes\n lat_wrap_var.units = 'degree_north'\n amet_wrap_var.units = 'PW'\n amet_wrap_var.long_name = 'atmospheric meridional energy transport'\n # writing data\n ens_wrap_var[:] = np.arange(ens)\n month_wrap_var[:] = np.arange(month)+1\n year_wrap_var[:] = np.arange(year)+1979\n lat_wrap_var[:] = lat\n\n amet_wrap_var[:] = pool_amet\n\n # close the file\n data_wrap.close()\n print (\"The generation of netcdf files is complete!!\")", "_____no_output_____" ], [ "if __name__==\"__main__\":\n ####################################################################\n ###### Create time namelist matrix for variable extraction #######\n ####################################################################\n # date and time arrangement\n # namelist of month and days for file manipulation\n namelist_month = ['01','02','03','04','05','06','07','08','09','10','11','12']\n ensemble_list = ['01','02','03','04','05','06','07','08','09','10',\n '11','12','13','14','15','16','17','18','19','20',\n '21','22','23','24','25','26','27','28','29','30',]\n # index of months\n period_1979_2013 = np.arange(start_year,end_year+1,1)\n index_month = np.arange(1,13,1)\n ####################################################################\n ###### Extract invariant and calculate constants #######\n ####################################################################\n # get basic dimensions from sample file\n grbs_example = pygrib.open(datapath_example)\n key_example = grbs_example.message(1)\n lats, lons = key_example.latlons()\n lat = lats[:,0]\n lon = lons[0,:]\n grbs_example.close()\n # get invariant from benchmark file\n Dim_year_1979_2013 = len(period_1979_2013)\n Dim_month = len(index_month)\n Dim_latitude = len(lat)\n Dim_longitude = len(lon)\n #############################################\n ##### Create space for stroing data #####\n #############################################\n # loop for calculation \n for i in range(exp):\n pool_amet = np.zeros((ensemble,Dim_year_1979_2013,Dim_month,Dim_latitude),dtype = float)\n for j in range(ensemble):\n # get variable keys\n grbs_slhf, grbs_sshf, grbs_ssr, grbs_str, grbs_tsr_in,\\\n grbs_tsr_out, grbs_ttr = var_key_retrieve(datapath, i, j)\n # compute amet\n pool_amet[j,:,:,:] = amet(grbs_slhf, grbs_sshf, grbs_ssr, grbs_str, grbs_tsr_in,\\\n grbs_tsr_out, grbs_ttr, period_1979_2013, lat, lon) \n ####################################################################\n ###### Data Wrapping (NetCDF) #######\n ####################################################################\n # save netcdf\n create_netcdf_point(pool_amet, lat, output_path, i)\n print ('Packing AMET is complete!!!')\n print ('The output is in sleep, safe and sound!!!')", "Start retrieving datasets of experiment 1 ensemble number 0\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 1 ensemble number 1\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 1 ensemble number 2\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 1 ensemble number 3\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets 
of experiment 1 ensemble number 4\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 1 ensemble number 5\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 1 ensemble number 6\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 1 ensemble number 7\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 1 ensemble number 8\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 1 ensemble number 9\nRetrieving datasets successfully and return the variable key!\n*******************************************************************\n*********************** create netcdf file*************************\n*******************************************************************\nThe generation of netcdf files is complete!!\nPacking AMET is complete!!!\nThe output is in sleep, safe and sound!!!\nStart retrieving datasets of experiment 2 ensemble number 0\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 2 ensemble number 1\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 2 ensemble number 2\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 2 ensemble number 3\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 2 ensemble number 4\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 2 ensemble number 5\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 2 ensemble number 6\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 2 ensemble number 7\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 2 ensemble number 8\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 2 ensemble number 9\nRetrieving datasets successfully and return the variable key!\n*******************************************************************\n*********************** create netcdf file*************************\n*******************************************************************\nThe generation of netcdf files is complete!!\nPacking AMET is complete!!!\nThe output is in sleep, safe and sound!!!\nStart retrieving datasets of experiment 3 ensemble number 0\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 3 ensemble number 1\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 3 ensemble number 2\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 3 ensemble number 3\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 3 ensemble number 4\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 3 ensemble number 5\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 3 ensemble number 6\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 3 ensemble number 7\nRetrieving 
datasets successfully and return the variable key!\nStart retrieving datasets of experiment 3 ensemble number 8\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 3 ensemble number 9\nRetrieving datasets successfully and return the variable key!\n*******************************************************************\n*********************** create netcdf file*************************\n*******************************************************************\nThe generation of netcdf files is complete!!\nPacking AMET is complete!!!\nThe output is in sleep, safe and sound!!!\nStart retrieving datasets of experiment 4 ensemble number 0\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 4 ensemble number 1\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 4 ensemble number 2\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 4 ensemble number 3\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 4 ensemble number 4\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 4 ensemble number 5\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 4 ensemble number 6\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 4 ensemble number 7\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 4 ensemble number 8\nRetrieving datasets successfully and return the variable key!\nStart retrieving datasets of experiment 4 ensemble number 9\nRetrieving datasets successfully and return the variable key!\n*******************************************************************\n*********************** create netcdf file*************************\n*******************************************************************\nThe generation of netcdf files is complete!!\nPacking AMET is complete!!!\nThe output is in sleep, safe and sound!!!\n" ], [ "############################################################################\n############################################################################\n# first check\ngrbs_example = pygrib.open(datapath_example)\nkey_example = grbs_example.message(1)\nlats, lons = key_example.latlons()\nlat = lats[:,0]\nlon = lons[0,:]\nprint(lat)\nprint(lon)\n#k = key_example.values\n#print(k[30:40,330:340])\n#print(key_example.unit)\n# print all the credentials\n#for i in grbs_example:\n# print(i)\ngrbs_example.close()\n\n# index of months\nperiod_1979_2013 = np.arange(start_year,end_year+1,1)\n\nvalues = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float)\ncounter = 1\n\ngrbs_example = pygrib.open(datapath_example)\n\nfor i in np.arange(len(period_1979_2013)*12):\n key = grbs_example.message(counter)\n values[i,:,:] = key.values\n counter +=1\n \nvalue_max = np.amax(values)\nvalue_min = np.amin(values)\n\nprint(value_max)\nprint(value_min)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
hexsha: d01f357e56cef7b5ea39eb8e0d763f9966a51066
size: 4,808
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: ipynb/Romania.ipynb
max_stars_repo_name: oscovida/oscovida.github.io
max_stars_repo_head_hexsha: c74d6da79feda1b5ccce107ad3acd48cf0e74c1c
max_stars_repo_licenses: [ "CC-BY-4.0" ]
max_stars_count: 2
max_stars_repo_stars_event_min_datetime: 2020-06-19T09:16:14.000Z
max_stars_repo_stars_event_max_datetime: 2021-01-24T17:47:56.000Z
max_issues_repo_path: ipynb/Romania.ipynb
max_issues_repo_name: oscovida/oscovida.github.io
max_issues_repo_head_hexsha: c74d6da79feda1b5ccce107ad3acd48cf0e74c1c
max_issues_repo_licenses: [ "CC-BY-4.0" ]
max_issues_count: 8
max_issues_repo_issues_event_min_datetime: 2020-04-20T16:49:49.000Z
max_issues_repo_issues_event_max_datetime: 2021-12-25T16:54:19.000Z
max_forks_repo_path: ipynb/Romania.ipynb
max_forks_repo_name: oscovida/oscovida.github.io
max_forks_repo_head_hexsha: c74d6da79feda1b5ccce107ad3acd48cf0e74c1c
max_forks_repo_licenses: [ "CC-BY-4.0" ]
max_forks_count: 4
max_forks_repo_forks_event_min_datetime: 2020-04-20T13:24:45.000Z
max_forks_repo_forks_event_max_datetime: 2021-01-29T11:12:12.000Z
avg_line_length: 28.790419
max_line_length: 161
alphanum_fraction: 0.510399
[ [ [ "# Romania\n\n* Homepage of project: https://oscovida.github.io\n* Plots are explained at http://oscovida.github.io/plots.html\n* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Romania.ipynb)", "_____no_output_____" ] ], [ [ "import datetime\nimport time\n\nstart = datetime.datetime.now()\nprint(f\"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}\")", "_____no_output_____" ], [ "%config InlineBackend.figure_formats = ['svg']\nfrom oscovida import *", "_____no_output_____" ], [ "overview(\"Romania\", weeks=5);", "_____no_output_____" ], [ "overview(\"Romania\");", "_____no_output_____" ], [ "compare_plot(\"Romania\", normalise=True);\n", "_____no_output_____" ], [ "# load the data\ncases, deaths = get_country_data(\"Romania\")\n\n# get population of the region for future normalisation:\ninhabitants = population(\"Romania\")\nprint(f'Population of \"Romania\": {inhabitants} people')\n\n# compose into one table\ntable = compose_dataframe_summary(cases, deaths)\n\n# show tables with up to 1000 rows\npd.set_option(\"max_rows\", 1000)\n\n# display the table\ntable", "_____no_output_____" ] ], [ [ "# Explore the data in your web browser\n\n- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Romania.ipynb)\n- and wait (~1 to 2 minutes)\n- Then press SHIFT+RETURN to advance code cell to code cell\n- See http://jupyter.org for more details on how to use Jupyter Notebook", "_____no_output_____" ], [ "# Acknowledgements:\n\n- Johns Hopkins University provides data for countries\n- Robert Koch Institute provides data for within Germany\n- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)\n- Open source and scientific computing community for the data tools\n- Github for hosting repository and html files\n- Project Jupyter for the Notebook and binder service\n- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))\n\n--------------------", "_____no_output_____" ] ], [ [ "print(f\"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and \"\n f\"deaths at {fetch_deaths_last_execution()}.\")", "_____no_output_____" ], [ "# to force a fresh download of data, run \"clear_cache()\"", "_____no_output_____" ], [ "print(f\"Notebook execution took: {datetime.datetime.now()-start}\")\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ] ]
hexsha: d01f3e74f9141a60e17ac6ef2b9782818c5c9555
size: 248,147
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: Fall2020/Demos/Linear Regression Housing Demo.ipynb
max_stars_repo_name: yul091/MLClass
max_stars_repo_head_hexsha: 6df8ccdbedfb01e448211cbe948147876a2a3057
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: 23
max_stars_repo_stars_event_min_datetime: 2020-08-16T00:00:04.000Z
max_stars_repo_stars_event_max_datetime: 2022-02-09T17:45:47.000Z
max_issues_repo_path: Fall2020/Demos/Linear Regression Housing Demo.ipynb
max_issues_repo_name: yul091/MLClass
max_issues_repo_head_hexsha: 6df8ccdbedfb01e448211cbe948147876a2a3057
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: Fall2020/Demos/Linear Regression Housing Demo.ipynb
max_forks_repo_name: yul091/MLClass
max_forks_repo_head_hexsha: 6df8ccdbedfb01e448211cbe948147876a2a3057
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: 13
max_forks_repo_forks_event_min_datetime: 2020-08-17T23:34:49.000Z
max_forks_repo_forks_event_max_datetime: 2021-12-20T04:42:45.000Z
avg_line_length: 478.125241
max_line_length: 129,784
alphanum_fraction: 0.933443
[ [ [ "import numpy as np\nimport sklearn\nimport pandas as pd\nimport seaborn as sns \nimport matplotlib.pyplot as plt \n\n%matplotlib inline", "_____no_output_____" ], [ "# Load the Boston Housing Dataset from sklearn\nfrom sklearn.datasets import load_boston\nboston_dataset = load_boston()\nprint(boston_dataset.keys())\nprint(boston_dataset.DESCR)", "dict_keys(['data', 'target', 'feature_names', 'DESCR', 'filename'])\n.. _boston_dataset:\n\nBoston house prices dataset\n---------------------------\n\n**Data Set Characteristics:** \n\n :Number of Instances: 506 \n\n :Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.\n\n :Attribute Information (in order):\n - CRIM per capita crime rate by town\n - ZN proportion of residential land zoned for lots over 25,000 sq.ft.\n - INDUS proportion of non-retail business acres per town\n - CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)\n - NOX nitric oxides concentration (parts per 10 million)\n - RM average number of rooms per dwelling\n - AGE proportion of owner-occupied units built prior to 1940\n - DIS weighted distances to five Boston employment centres\n - RAD index of accessibility to radial highways\n - TAX full-value property-tax rate per $10,000\n - PTRATIO pupil-teacher ratio by town\n - B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town\n - LSTAT % lower status of the population\n - MEDV Median value of owner-occupied homes in $1000's\n\n :Missing Attribute Values: None\n\n :Creator: Harrison, D. and Rubinfeld, D.L.\n\nThis is a copy of UCI ML housing dataset.\nhttps://archive.ics.uci.edu/ml/machine-learning-databases/housing/\n\n\nThis dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.\n\nThe Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic\nprices and the demand for clean air', J. Environ. Economics & Management,\nvol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics\n...', Wiley, 1980. N.B. Various transformations are used in the table on\npages 244-261 of the latter.\n\nThe Boston house-price data has been used in many machine learning papers that address regression\nproblems. \n \n.. topic:: References\n\n - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.\n - Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. 
Morgan Kaufmann.\n\n" ], [ "# Create the dataset\nboston = pd.DataFrame(boston_dataset.data, columns=boston_dataset.feature_names)\nboston['MEDV'] = boston_dataset.target\nboston.head()", "_____no_output_____" ], [ "# Introductory Data Analysis\n# First, let us make sure there are no missing values or NANs in the dataset\nprint(boston.isnull().sum())\n\n# Next, let us plot the target vaqriable MEDV\n\nsns.set(rc={'figure.figsize':(11.7,8.27)})\nsns.distplot(boston['MEDV'], bins=30)\nplt.show()\n\n# Finally, let us get the correlation matrix\ncorrelation_matrix = boston.corr().round(2)\n# annot = True to print the values inside the square\nsns.heatmap(data=correlation_matrix, annot=True)", "CRIM 0\nZN 0\nINDUS 0\nCHAS 0\nNOX 0\nRM 0\nAGE 0\nDIS 0\nRAD 0\nTAX 0\nPTRATIO 0\nB 0\nLSTAT 0\nMEDV 0\ndtype: int64\n" ], [ "# Let us take few of the features and see how they relate to the target in a 1D plot\nplt.figure(figsize=(20, 5))\n\nfeatures = ['LSTAT', 'RM','CHAS','NOX','AGE','DIS']\ntarget = boston['MEDV']\n\nfor i, col in enumerate(features):\n plt.subplot(1, len(features) , i+1)\n x = boston[col]\n y = target\n plt.scatter(x, y, marker='o')\n plt.title(col)\n plt.xlabel(col)\n plt.ylabel('MEDV')", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\nX = boston.to_numpy()\nX = np.delete(X, 13, 1)\ny = boston['MEDV'].to_numpy()\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=5)\nprint(X_train.shape)\nprint(X_test.shape)\nprint(y_train.shape)\nprint(y_test.shape)", "(404, 13)\n(102, 13)\n(404,)\n(102,)\n" ], [ "# Lets now train the model\nfrom sklearn.linear_model import LinearRegression\nlin_model = LinearRegression()\nlin_model.fit(X_train, y_train)", "_____no_output_____" ], [ "# Model Evaluation\n# Lets first evaluate on training set\nfrom sklearn.metrics import r2_score\n\ndef rmse(predictions, targets):\n return np.sqrt(((predictions - targets) ** 2).mean())\n\ny_pred_train = lin_model.predict(X_train)\nrmse_train = rmse(y_pred_train, y_train)\nr2_train = r2_score(y_train, y_pred_train)\nprint(\"Training RMSE = \" + str(rmse_train))\nprint(\"Training R2 = \" + str(r2_train))\n\n# Let us now evaluate on the test set\ny_pred_test = lin_model.predict(X_test)\nrmse_test = rmse(y_pred_test, y_test)\nr2_test = r2_score(y_test, y_pred_test)\nprint(\"Test RMSE = \" + str(rmse_test))\nprint(\"Test R2 = \" + str(r2_test))\n\n# Finally, let us see the learnt weights!\nnp.set_printoptions(precision=3)\nprint(lin_model.coef_)", "Training RMSE = 4.741000992236516\nTraining R2 = 0.738339392059052\nTest RMSE = 4.568292042303218\nTest R2 = 0.7334492147453064\n[-1.308e-01 4.940e-02 1.095e-03 2.705e+00 -1.596e+01 3.414e+00\n 1.119e-03 -1.493e+00 3.644e-01 -1.317e-02 -9.524e-01 1.175e-02\n -5.941e-01]\n" ], [ "# Now, what if we use lesser number of features?\n# For example, suppose we choose two of the highly correlated features 'LSTAT' and 'RM'\nX = pd.DataFrame(np.c_[boston['LSTAT'], boston['RM']], columns = ['LSTAT','RM'])\ny = boston['MEDV']\nX = np.array(X)\ny = np.array(y)\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=5)\n# Training Phase\nlin_model = LinearRegression()\nlin_model.fit(X_train, y_train)\n\n# Evaluation Phase\ny_pred_train = lin_model.predict(X_train)\nrmse_train = rmse(y_pred_train, y_train)\nr2_train = r2_score(y_train, y_pred_train)\nprint(\"Training RMSE = \" + str(rmse_train))\nprint(\"Training R2 = \" + str(r2_train))\n\n# Let us now evaluate on the test 
set\ny_pred_test = lin_model.predict(X_test)\nrmse_test = rmse(y_pred_test, y_test)\nr2_test = r2_score(y_test, y_pred_test)\nprint(\"Test RMSE = \" + str(rmse_test))\nprint(\"Test R2 = \" + str(r2_test))", "Training RMSE = 5.6371293350711955\nTraining R2 = 0.6300745149331701\nTest RMSE = 5.137400784702911\nTest R2 = 0.6628996975186952\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
hexsha: d01f4bb18a8cd39d3e64c36128366f2c2375e46f
size: 26,408
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: misc/improving.ipynb
max_stars_repo_name: IGARDS/ranklib
max_stars_repo_head_hexsha: 1acd8c0bd4d4045b55e6c5bd6cbb2fbe080c7479
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: 1
max_stars_repo_stars_event_min_datetime: 2018-04-30T16:40:07.000Z
max_stars_repo_stars_event_max_datetime: 2018-04-30T16:40:07.000Z
max_issues_repo_path: misc/improving.ipynb
max_issues_repo_name: IGARDS/ranklib
max_issues_repo_head_hexsha: 1acd8c0bd4d4045b55e6c5bd6cbb2fbe080c7479
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: 3
max_issues_repo_issues_event_min_datetime: 2018-04-30T18:50:15.000Z
max_issues_repo_issues_event_max_datetime: 2018-04-30T18:51:52.000Z
max_forks_repo_path: misc/improving.ipynb
max_forks_repo_name: IGARDS/ranklib
max_forks_repo_head_hexsha: 1acd8c0bd4d4045b55e6c5bd6cbb2fbe080c7479
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 27.944974
max_line_length: 404
alphanum_fraction: 0.422486
[ [ [ "from IPython.core.display import display, HTML\nimport pandas as pd\nimport numpy as np\nimport copy\nimport os\n%load_ext autoreload\n%autoreload 2", "_____no_output_____" ], [ "import sys\nsys.path.insert(0,\"/local/rankability_toolbox\")\n\nPATH_TO_RANKLIB='/local/ranklib'\n\nfrom numpy import ix_\nimport numpy as np\n\nD = np.loadtxt(PATH_TO_RANKLIB+\"/problem_instances/instances/graphs/NFL-2007-D_Matrix.txt\",delimiter=\",\")\n\nDsmall = D[ix_(np.arange(8),np.arange(8))]\nDsmall", "_____no_output_____" ], [ "import pyrankability", "_____no_output_____" ], [ "(6*6-6)/2.-9", "_____no_output_____" ], [ "import itertools\nimport random \nfrom collections import Counter\nimport math\n\nD = np.zeros((6,6),dtype=int)\nfor i in range(D.shape[0]):\n for j in range(i+1,D.shape[0]):\n D[i,j] = 1\n \nDtest = np.zeros((6,6),dtype=int)\nDtest[0,5] = 1\nDtest[0,4] = 1\nDtest[0,1] = 1\nDtest[1,2] = 1\nDtest[1,3] = 1\nDtest[2,1] = 1\nDtest[3,0] = 1\nDtest[3,5] = 1\nDtest[5,1] = 1\nDtest[5,2] = 1\nDtest[5,4] = 1\nD = Dtest\n\ntarget_k = 9\ntarget_p = 12\nmatch_k = []\nmatch_p = []\nmatch_both = []\nmax_count = 100000\nfor num_ones in [1]:#[target_k]:\n possible_inxs = list(set(list(range(D.shape[0]*D.shape[0]))) - set([0,6+1,6+6+2,6+6+6+3,6+6+6+6+4,6+6+6+6+6+5]))\n n = len(possible_inxs)\n r = num_ones\n total = math.factorial(n) / math.factorial(r) / math.factorial(n-r)\n print(total)\n count = 0\n for one_inxs in itertools.combinations(possible_inxs,num_ones):\n count += 1\n if count > max_count:\n print(\"reached max\")\n break\n if count % 100 == 0:\n print(count/total)\n remaining_inxs = list(set(possible_inxs) - set(one_inxs))\n Dcopy = copy.copy(D)\n for ix in one_inxs:\n if Dcopy.flat[ix] == 1:\n Dcopy.flat[ix] = 0\n else:\n Dcopy.flat[ix] = 1\n k,P = pyrankability.exact.find_P_simple(Dcopy)\n if len(P) != target_p:\n continue\n P = np.array(P)+1\n d = dict(Counter(P[:,0]))\n t1 = len(d.values())\n vs = list(d.values())\n vs.sort()\n d2 = dict(Counter(P[:,1]))\n t2 = len(d2.values())\n vs2 = list(d2.values())\n vs2.sort()\n if tuple(vs) == (2,10) and tuple(vs2) == (2,2,2,6):\n print(Dcopy)\n match_p.append(Dcopy) \nprint('finished')", "30.0\n[[0 1 0 0 1 1]\n [0 0 1 1 0 0]\n [1 1 0 0 0 0]\n [1 0 0 0 0 1]\n [0 0 0 0 0 0]\n [0 1 1 0 1 0]]\nfinished\n" ], [ "Dcopy", "_____no_output_____" ], [ "k,P = pyrankability.exact.find_P_simple(match_p[-1])\nprint(k)\nnp.array(P).transpose()", "9\n" ], [ " match_p", "_____no_output_____" ], [ "Dtest = np.zeros((6,6),dtype=int)\nDtest[0,5] = 1\nDtest[0,4] = 1\nDtest[0,1] = 1\nDtest[1,2] = 1\nDtest[1,3] = 1\nDtest[2,1] = 1\n#Dtest[3,0] = 1\nDtest[3,5] = 1\nDtest[5,1] = 1\nDtest[5,2] = 1\nDtest[5,4] = 1\nk,P = pyrankability.exact.find_P_simple(Dtest)\nk,P", "_____no_output_____" ], [ "from collections import Counter\nfor Dcopy in [match_p[-1]]:\n k,P = pyrankability.exact.find_P_simple(Dcopy)\n P = np.array(P)+1\n\n #t1 = len(dict(Counter(P[:,0])).values())\n print(\"k\",k)\n print(P.transpose())\n for i in range(6):\n d = dict(Counter(P[:,i]))\n t = list(d.values())\n t.sort()\n print(t)", "k 9\n[[3 3 4 4 4 4 4 4 4 4 4 4]\n [4 4 1 1 1 1 1 1 3 3 6 6]\n [1 1 6 6 6 6 6 6 1 1 3 3]\n [6 6 2 2 3 3 5 5 6 6 1 1]\n [2 5 3 5 2 5 2 3 2 5 2 5]\n [5 2 5 3 5 2 3 2 5 2 5 2]]\n[2, 10]\n[2, 2, 2, 6]\n[2, 4, 6]\n[2, 2, 2, 2, 4]\n[2, 5, 5]\n[2, 5, 5]\n" ], [ "perm = np.array([1,2,5,4,3,6])-1\nDnew = pyrankability.common.permute_D(match_p[-1],perm)\nrows,cols = np.where(Dnew == 0)\ninxs = []\nfor i in range(len(rows)):\n if rows[i] == cols[i]:\n continue\n 
inxs.append((rows[i],cols[i]))\nsaved = []\nfor choice in itertools.combinations(inxs,2):\n Dcopy = copy.copy(Dnew)\n for item in choice:\n Dcopy[item[0],item[1]] = 1\n k,P = pyrankability.exact.find_P_simple(Dcopy)\n P = np.array(P)+1\n if len(P) == 2 and k == 7:\n saved.append((Dcopy,choice))", "_____no_output_____" ], [ "from collections import Counter\ni = 1\nfor Dcopy,choice in saved:\n print(\"Option\",i)\n k,P = pyrankability.exact.find_P_simple(Dcopy)\n P = np.array(P)+1\n\n #t1 = len(dict(Counter(P[:,0])).values())\n print(Dcopy)\n print(np.array(choice)+1)\n print(\"k\",k)\n print(P.transpose())\n i+=1", "Option 1\n[[0 1 1 0 0 1]\n [0 0 1 1 1 0]\n [0 0 0 0 0 0]\n [1 0 0 0 0 1]\n [1 1 0 0 0 1]\n [0 1 1 0 1 0]]\n[[2 3]\n [5 6]]\nk 7\n[[4 5]\n [5 4]\n [1 1]\n [6 6]\n [2 2]\n [3 3]]\nOption 2\n[[0 1 1 0 0 1]\n [0 0 0 1 1 0]\n [0 1 0 0 1 0]\n [1 0 0 0 0 1]\n [1 1 0 0 0 0]\n [0 1 1 0 1 0]]\n[[3 2]\n [3 5]]\nk 7\n[[4 4]\n [1 1]\n [6 6]\n [3 3]\n [2 5]\n [5 2]]\nOption 3\n[[0 1 1 0 0 1]\n [0 0 0 1 1 0]\n [0 1 0 0 0 0]\n [1 0 0 0 0 1]\n [1 1 0 0 0 1]\n [0 1 1 0 1 0]]\n[[3 2]\n [5 6]]\nk 7\n[[4 5]\n [5 4]\n [1 1]\n [6 6]\n [3 3]\n [2 2]]\nOption 4\n[[0 1 1 0 0 1]\n [0 0 0 1 1 0]\n [0 0 0 0 0 0]\n [1 1 0 0 0 1]\n [1 1 0 1 0 0]\n [0 1 1 0 1 0]]\n[[4 2]\n [5 4]]\nk 7\n[[5 5]\n [4 4]\n [1 1]\n [6 6]\n [2 3]\n [3 2]]\nOption 5\n[[0 1 1 0 0 1]\n [0 0 0 1 1 0]\n [0 0 0 0 0 0]\n [1 1 0 0 0 1]\n [1 1 0 0 0 0]\n [1 1 1 0 1 0]]\n[[4 2]\n [6 1]]\nk 7\n[[4 4]\n [6 6]\n [5 5]\n [1 1]\n [2 3]\n [3 2]]\nOption 6\n[[0 1 1 0 0 1]\n [0 0 0 1 1 0]\n [0 0 0 0 0 0]\n [1 0 1 0 0 1]\n [1 1 0 1 0 0]\n [0 1 1 0 1 0]]\n[[4 3]\n [5 4]]\nk 7\n[[5 5]\n [4 4]\n [1 1]\n [6 6]\n [2 3]\n [3 2]]\nOption 7\n[[0 1 1 0 0 1]\n [0 0 0 1 1 0]\n [0 0 0 0 0 0]\n [1 0 1 0 0 1]\n [1 1 0 0 0 0]\n [1 1 1 0 1 0]]\n[[4 3]\n [6 1]]\nk 7\n[[4 4]\n [6 6]\n [5 5]\n [1 1]\n [2 3]\n [3 2]]\nOption 8\n[[0 1 1 0 0 1]\n [0 0 0 1 1 0]\n [0 0 0 0 0 0]\n [1 0 0 0 1 1]\n [1 1 0 0 0 1]\n [0 1 1 0 1 0]]\n[[4 5]\n [5 6]]\nk 7\n[[4 4]\n [5 5]\n [1 1]\n [6 6]\n [2 3]\n [3 2]]\nOption 9\n[[0 1 1 0 0 1]\n [0 0 0 1 1 0]\n [0 0 0 0 0 0]\n [1 0 0 0 1 1]\n [1 1 0 0 0 0]\n [1 1 1 0 1 0]]\n[[4 5]\n [6 1]]\nk 7\n[[4 4]\n [6 6]\n [5 5]\n [1 1]\n [2 3]\n [3 2]]\nOption 10\n[[0 1 1 0 0 1]\n [0 0 0 1 1 0]\n [0 0 0 0 0 0]\n [1 0 0 0 0 1]\n [1 1 1 1 0 0]\n [0 1 1 0 1 0]]\n[[5 3]\n [5 4]]\nk 7\n[[5 5]\n [4 4]\n [1 1]\n [6 6]\n [2 3]\n [3 2]]\nOption 11\n[[0 1 1 0 0 1]\n [0 0 0 1 1 0]\n [0 0 0 0 0 0]\n [1 0 0 0 0 1]\n [1 1 1 0 0 0]\n [1 1 1 0 1 0]]\n[[5 3]\n [6 1]]\nk 7\n[[4 4]\n [6 6]\n [5 5]\n [1 1]\n [2 3]\n [3 2]]\nOption 12\n[[0 1 1 0 0 1]\n [0 0 0 1 1 0]\n [0 0 0 0 0 0]\n [1 0 0 0 0 1]\n [1 1 0 1 0 1]\n [0 1 1 0 1 0]]\n[[5 4]\n [5 6]]\nk 7\n[[5 5]\n [4 4]\n [1 1]\n [6 6]\n [2 3]\n [3 2]]\n" ], [ "P_target = [[5,4,1,6,3,2],\n [5,4,1,6,2,3],\n [4,5,1,6,2,3],\n [4,5,1,6,3,2],\n [4,1,6,3,5,2],\n [4,1,6,3,2,5],\n [4,1,6,2,5,3],\n [4,1,6,2,3,5],\n [4,1,6,5,2,3],\n [4,1,6,5,3,2],\n [4,6,5,1,3,2],\n [4,6,5,1,2,3]\n ]\nfor i in range(len(P_target)):\n P_target[i] = tuple(P_target[i])\nfor perm in P:\n if tuple(perm) in P_target:\n print('here')\n else:\n print('not')", "here\nhere\nhere\nhere\nhere\nhere\nhere\nhere\nhere\nhere\nhere\nhere\n" ], [ "P_target = [[5,4,1,6,3,2],\n [5,4,1,6,2,3],\n [4,5,1,6,2,3],\n [4,5,1,6,3,2],\n [4,1,6,3,5,2],\n [4,1,6,3,2,5],\n [4,1,6,2,5,3],\n [4,1,6,2,3,5],\n [4,1,6,5,2,3],\n [4,1,6,5,3,2],\n [4,6,5,1,3,2],\n [4,6,5,1,2,3]\n ]\n\nP_determined = [[4 1 6 2 3 5]\n [4 1 6 2 5 3]\n [4 1 6 3 2 5]\n [4 1 6 3 5 2]\n [4 1 6 5 2 3]\n [4 1 6 5 3 2]\n 
[4 5 1 6 2 3]\n [4 5 1 6 3 2]\n [4 6 5 1 2 3]\n [4 6 5 1 3 2]\n [5 4 1 6 2 3]\n [5 4 1 6 3 2]]\n\nP_target = np.array(P_target)\nprint(P_target.transpose())\nfor i in range(6):\n d = dict(Counter(P_target[:,i]))\n t = list(d.values())\n t.sort()\n print(t)", "_____no_output_____" ], [ "[2, 5]\n[2, 5, 6]\n[4, 5, 6]\n[3, 4, 5, 7]\n[3, 5, 7]\n[3, 5, 7]", "_____no_output_____" ], [ "Dtilde, changes, output = pyrankability.improve.greedy(D,1,verbose=False)", "_____no_output_____" ], [ "Dchanges", "_____no_output_____" ], [ "if D.shape[0] <= 8: # Only solve small problems\n search = pyrankability.exact.ExhaustiveSearch(Dsmall)\n search.find_P()\n\n print(pyrankability.common.as_json(search.k,search.P,{}))\n\n p = len(search.P)\n k = search.k", "{\"k\": 17, \"p\": 3, \"P\": [[1, 3, 7, 4, 5, 2, 6, 8], [1, 7, 3, 4, 5, 2, 6, 8], [1, 7, 4, 3, 5, 2, 6, 8]], \"other\": {}}\n" ], [ "def greedy(D,l):\n D = np.copy(D) # Leave the original untouched\n for niter in range(l):\n n=D.shape[0]\n \n k,P,X,Y,k2 = pyrankability.lp.lp(D)\n\n mult = 100\n X = np.round(X*mult)/mult\n Y = np.round(Y*mult)/mult\n\n T0 = np.zeros((n,n))\n T1 = np.zeros((n,n))\n inxs = np.where(D + D.transpose() == 0)\n T0[inxs] = 1\n inxs = np.where(D + D.transpose() == 2)\n T1[inxs] = 1\n T0[np.arange(n),np.arange(n)]= 0\n T1[np.arange(n),np.arange(n)] = 0\n\n DOM = D + X - Y\n\n Madd=T0*DOM # note: DOM = P_> in paper\n M1 = Madd # Copy Madd into M, % Madd identifies values >0 in P_> that have 0-tied values in D\n M1[Madd<=0] = np.nan # Set anything <= 0 to NaN\n min_inx = np.nanargmin(M1) # Find min value and index\n bestlinktoadd_i, bestlinktoadd_j = np.unravel_index(min_inx,M1.shape) # adding (i,j) link associated with\n # smallest nonzero value in Madd is likely to produce greatest improvement in rankability\n minMadd = M1[bestlinktoadd_i, bestlinktoadd_j]\n\n Mdelete=T1*DOM # note: DOM = P_> in paper\n Mdelete=Mdelete*(Mdelete<1) # Mdelete identifies values <1 in P_> that have 1-tied values in D\n bestlinktodelete_i, bestlinktodelete_j=np.unravel_index(np.nanargmax(Mdelete), Mdelete.shape) # deleting (i,j) link associated with\n # largest non-unit (less than 1) value in Mdelete is likely to produce greatest improvement in rankability\n maxMdelete = Mdelete[bestlinktodelete_i, bestlinktodelete_j]\n\n # This next section modifies D to create Dtilde\n Dtilde = np.copy(D) # initialize Dtilde\n # choose whether to add or remove a link depending on which will have the biggest\n # impact on reducing the size of the set P\n # PAUL: Or if we only want to do link addition, you don't need to form\n # Mdelete and find the largest non-unit value in it. And vice versa, if\n # only link removal is desired, don't form Madd.\n if (1-minMadd)>maxMdelete and p>=2:\n formatSpec = 'The best one-link way to improve rankability is by adding a link from %d to %d.\\nThis one modification removes about %.10f percent of the rankings in P.'%(bestlinktoadd_i,bestlinktoadd_j,(1-minMadd)*100)\n print(formatSpec)\n Dtilde[bestlinktoadd_i,bestlinktoadd_j]=1 # adds this link, creating one-mod Dtilde\n elif 1-minMadd<maxMdelete and p>=2:\n formatSpec = 'The best one-link way to improve rankability is by deleting the link from %d to %d.\\nThis one modification removes about %.10f percent of the rankings in P.' 
% (bestlinktodelete_i,bestlinktodelete_j,maxMdelete*100)\n print(formatSpec)\n Dtilde[bestlinktodelete_i,bestlinktodelete_j] = 0 # removes this link, creating one-mod Dtilde\n \n D = Dtilde", "_____no_output_____" ], [ "Dtilde = greedy(D,1)", "The best one-link way to improve rankability is by adding a link from 28 to 14.\nThis one modification removes about 91.0000000000 percent of the rankings in P.\n" ], [ "search = pyrankability.exact.ExhaustiveSearch(Dtilde)\nsearch.find_P()\n\nprint(pyrankability.common.as_json(search.k,search.P,{}))", "{\"k\": 16, \"p\": 1, \"P\": [[1, 7, 4, 3, 5, 2, 6, 8]], \"other\": {}}\n" ], [ "bestlinktoadd_i, bestlinktoadd_j", "_____no_output_____" ], [ "\n\n % Form modification matrices Madd (M_+) and Mdelete (M_-), which are used\n % to determine which link modification most improves rankability\n\n\n Mdelete=T1.*DOM; % note: DOM = P_> in paper\n Mdelete=Mdelete.*(Mdelete<1); % Mdelete identifies values <1 in P_> that have 1-tied values in D\n maxMdelete=max(max(Mdelete));\n [bestlinktodelete_i bestlinktodelete_j]=find(Mdelete==maxMdelete); % deleting (i,j) link associated with\n % largest non-unit (less than 1) value in Mdelete is likely to produce greatest improvement in rankability\n\n\n % This next section modifies D to create Dtilde\n Dtilde=D; % initialize Dtilde\n % choose whether to add or remove a link depending on which will have the biggest\n % impact on reducing the size of the set P\n % PAUL: Or if we only want to do link addition, you don't need to form\n % Mdelete and find the largest non-unit value in it. And vice versa, if\n % only link removal is desired, don't form Madd.\n if 1-minMadd>maxMdelete & p>=2\n formatSpec = 'The best one-link way to improve rankability is by adding a link from %4.f to %4.f.\\nThis one modification removes about %2.f percent of the rankings in P.';\n fprintf(formatSpec,bestlinktoadd_i(1),bestlinktoadd_j(1),(1-minMadd)*100)\n Dtilde(bestlinktoadd_i(1),bestlinktoadd_j(1))=1; % adds this link, creating one-mod Dtilde\n elseif 1-minMadd<maxMdelete & p>=2\n formatSpec = 'The best one-link way to improve rankability is by deleting the link from %4.f to %4.f.\\nThis one modification removes about %2.f percent of the rankings in P.';\n fprintf(formatSpec,bestlinktodelete_i(1),bestlinktodelete_j(1),maxMdelete*100)\n Dtilde(bestlinktodelete_i(1),bestlinktodelete_j(1))=0; % removes this link, creating one-mod Dtilde\n end\n\n\n % set D=Dtilde and repeat until l link modifications have been made or\n % p=1\n D=Dtilde;", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d01f4dd1a9aeb0a9a6800e04ef2b2097a5fda0f8
734,101
ipynb
Jupyter Notebook
factory/gmvrfit_reduce_to_gmvpfit_example.ipynb
llondon6/koalas
bc778ba492027b3d40f9c92ef44da5949d0e43c7
[ "MIT" ]
1
2019-03-29T03:59:12.000Z
2019-03-29T03:59:12.000Z
factory/gmvrfit_reduce_to_gmvpfit_example.ipynb
llondon6/koalas
bc778ba492027b3d40f9c92ef44da5949d0e43c7
[ "MIT" ]
13
2018-08-23T11:37:20.000Z
2021-12-13T15:24:32.000Z
factory/gmvrfit_reduce_to_gmvpfit_example.ipynb
llondon6/koalas
bc778ba492027b3d40f9c92ef44da5949d0e43c7
[ "MIT" ]
3
2018-08-29T06:36:58.000Z
2021-11-21T16:06:22.000Z
2,050.561453
336,950
0.949079
[ [ [ "# Dempnstration that GMVRFIT reduces to GMVPFIT (or equivalent) for polynomial cases\n<center>Development for a fitting function (greedy+linear based on mvpolyfit and gmvpfit) that handles rational fucntions</center>", "_____no_output_____" ] ], [ [ "# Low-level import \nfrom numpy import *\nfrom numpy.linalg import pinv,lstsq\n# Setup ipython environment\n%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n# Setup plotting backend\nimport matplotlib as mpl\nmpl.rcParams['lines.linewidth'] = 0.8\nmpl.rcParams['font.family'] = 'serif'\nmpl.rcParams['font.size'] = 12\nmpl.rcParams['axes.labelsize'] = 20\nmpl.rcParams['axes.titlesize'] = 20\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib.pyplot import *\n#\nfrom positive import *", "_____no_output_____" ] ], [ [ "## Package Development (positive/learning.py)", "_____no_output_____" ], [ "### Setup test data", "_____no_output_____" ] ], [ [ "################################################################################\nh = 3\nQ = 25\nx = h*linspace(-1,1,Q) \ny = h*linspace(-1,1,Q) \nX,Y = meshgrid(x,y)\n# X += np.random.random( X.shape )-0.5\n# Y += np.random.random( X.shape )-0.5\n\nzfun = lambda xx,yy: 50 + (1.0 + 0.5*xx*yy + xx**2 + yy**2 ) \nnumerator_symbols, denominator_symbols = ['01','00','11'],[]\n\nnp.random.seed(42)\nns = 0.1*(np.random.random( X.shape )-0.5)\nZ = zfun(X,Y) + ns\ndomain,scalar_range = ndflatten( [X,Y], Z )\n################################################################################", "_____no_output_____" ] ], [ [ "### Initiate class object for fitting", "_____no_output_____" ] ], [ [ "foo = mvrfit( domain, scalar_range, numerator_symbols, denominator_symbols, verbose=True )", "_____no_output_____" ] ], [ [ "### Plot using class method", "_____no_output_____" ] ], [ [ "foo.plot()", "_____no_output_____" ] ], [ [ "### Generate python string for fit model", "_____no_output_____" ] ], [ [ "print foo.__str_python__(precision=8)", "f = lambda x0,x1: 5.74999156e+01 + 4.41113329e+00 * ( 2.26778466e-01*(x0*x0) + 1.13422851e-01*(x0*x1) + 2.26544539e-01*(x1*x1) + -1.47329976e+00 ) \n" ] ], [ [ "### Use greedy algorithm", "_____no_output_____" ] ], [ [ "star = gmvrfit( domain, scalar_range, verbose=True )", "(\u001b[0;36mgmvrfit\u001b[0m)>> Now working deg = 1\n&& The estimator has changed by -inf\n&& Degree tempering will continue.\nFalse\n&& The current boundary is [('1', True), ('1', False)]\n&& The current estimator value is 0.988648\n\n(\u001b[0;36mgmvrfit\u001b[0m)>> Now working deg = 2\n&& The estimator has changed by -0.981918\n&& Degree tempering will continue.\nFalse\n&& The current boundary is [('00', True), ('11', True), ('01', True)]\n&& The current estimator value is 0.006730\n\n(\u001b[0;36mgmvrfit\u001b[0m)>> Now working deg = 3\n&& The estimator has changed by 0.000000\n&& Degree tempering will continue.\nFalse\n&& The current boundary is [('00', True), ('11', True), ('01', True)]\n&& The current estimator value is 0.006730\n\n(\u001b[0;36mgmvrfit\u001b[0m)>> Now working deg = 4\n&& The estimator has changed by -0.000024\n&& Degree tempering has completed becuase the estimator has changed by |-0.000024| < 0.010000. 
The results of the last iteration will be kept.\nTrue\n&& The Final boundary is [('00', True), ('11', True), ('01', True)]\n&& The Final estimator value is 0.006730\n\n\n========================================\n# Degree Tempered Positive Greedy Solution:\n========================================\nf = lambda x0,x1: 5.74999156e+01 + 4.41113329e+00 * ( 2.26778466e-01*(x0*x0) + 1.13422851e-01*(x0*x1) + 2.26544539e-01*(x1*x1) + -1.47329976e+00 ) \n\n############################################\n# Applying a Negative Greedy Algorithm\n############################################\n\n\nIteration #1 (Negative Greedy)\n------------------------------------\n>> min_estimator = 3.6869e-01\n>> The current boundary = [('00', True), ('11', True), ('01', True)]\n>> Exiting because |min_est-initial_estimator_value| = |0.368686-0.006730| = |0.361955| > 0.358336.\n>> NOTE that the result of the previous iteration will be kept.\n\n========================================\n# Negative Greedy Solution:\n========================================\nf = lambda x0,x1: 5.74999156e+01 + 4.41113329e+00 * ( 2.26778466e-01*(x0*x0) + 1.13422851e-01*(x0*x1) + 2.26544539e-01*(x1*x1) + -1.47329976e+00 ) \n\nFit Information:\n----------------------------------------\nf = lambda x0,x1: 5.74999156e+01 + 4.41113329e+00 * ( 2.26778466e-01*(x0*x0) + 1.13422851e-01*(x0*x1) + 2.26544539e-01*(x1*x1) + -1.47329976e+00 ) \n" ], [ "star.plot()\nstar.bin['pgreedy_result'].plot()\nstar.bin['ngreedy_result'].plot()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
d01f54e45316959314a5732f6c110253b0b101f8
13,969
ipynb
Jupyter Notebook
notebooks/JSON database.ipynb
ecoinvent/brightway2
78303c1efaf40dd94089929a8b0c3a9b59736733
[ "BSD-3-Clause" ]
25
2020-04-20T22:22:08.000Z
2022-01-05T10:36:01.000Z
notebooks/JSON database.ipynb
ecoinvent/brightway2
78303c1efaf40dd94089929a8b0c3a9b59736733
[ "BSD-3-Clause" ]
15
2021-02-02T22:46:46.000Z
2022-03-31T11:51:56.000Z
notebooks/JSON database.ipynb
brightway-lca/brightway25
1a8784cf78bf1bea074ec4c0e3dd7253a532e9e2
[ "BSD-3-Clause" ]
16
2020-04-25T14:37:27.000Z
2022-03-27T12:13:37.000Z
29.848291
1,043
0.528885
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
d01f5e05f1656b11a2f4ad6676891fe7d52f4840
331,974
ipynb
Jupyter Notebook
.ipynb_checkpoints/Annulus_Simple_Matplotlib-checkpoint.ipynb
brettavedisian/Liquid-Crystals
c7c6eaec594e0de8966408264ca7ee06c2fdb5d3
[ "MIT" ]
null
null
null
.ipynb_checkpoints/Annulus_Simple_Matplotlib-checkpoint.ipynb
brettavedisian/Liquid-Crystals
c7c6eaec594e0de8966408264ca7ee06c2fdb5d3
[ "MIT" ]
null
null
null
.ipynb_checkpoints/Annulus_Simple_Matplotlib-checkpoint.ipynb
brettavedisian/Liquid-Crystals
c7c6eaec594e0de8966408264ca7ee06c2fdb5d3
[ "MIT" ]
null
null
null
846.872449
134,646
0.943053
[ [ [ "# Annulus_Simple_Matplotlib\n# mjm June 20, 2016\n#\n# solve Poisson eqn with Vin = V0 and Vout = 0 for an annulus\n# with inner radius r1, outer radius r2\n# Vin = 10, Vout =0\n#\nfrom dolfin import *\nfrom mshr import * # need for Circle object to make annulus \nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.tri as tri\nfrom mpl_toolkits.mplot3d import Axes3D\n#parameters[\"plotting_backend\"] = \"matplotlib\"\nimport logging\nlogging.getLogger(\"FFC\").setLevel(logging.WARNING)\n#from matplotlib import cm\n%matplotlib inline", "_____no_output_____" ] ], [ [ "# Commands for plotting\nThese are used so the the usual \"plot\" will use matplotlib.", "_____no_output_____" ] ], [ [ "# commands for plotting, \"plot\" works with matplotlib \ndef mesh2triang(mesh):\n xy = mesh.coordinates()\n return tri.Triangulation(xy[:, 0], xy[:, 1], mesh.cells())\n\ndef mplot_cellfunction(cellfn):\n C = cellfn.array()\n tri = mesh2triang(cellfn.mesh())\n return plt.tripcolor(tri, facecolors=C)\n\ndef mplot_function(f):\n mesh = f.function_space().mesh()\n if (mesh.geometry().dim() != 2):\n raise AttributeError('Mesh must be 2D')\n # DG0 cellwise function\n if f.vector().size() == mesh.num_cells():\n C = f.vector().array()\n return plt.tripcolor(mesh2triang(mesh), C)\n # Scalar function, interpolated to vertices\n elif f.value_rank() == 0:\n C = f.compute_vertex_values(mesh)\n return plt.tripcolor(mesh2triang(mesh), C, shading='gouraud')\n # Vector function, interpolated to vertices\n elif f.value_rank() == 1:\n w0 = f.compute_vertex_values(mesh)\n if (len(w0) != 2*mesh.num_vertices()):\n raise AttributeError('Vector field must be 2D')\n X = mesh.coordinates()[:, 0]\n Y = mesh.coordinates()[:, 1]\n U = w0[:mesh.num_vertices()]\n V = w0[mesh.num_vertices():]\n return plt.quiver(X,Y,U,V)\n\n# Plot a generic dolfin object (if supported)\ndef plot(obj):\n plt.gca().set_aspect('equal')\n if isinstance(obj, Function):\n return mplot_function(obj)\n elif isinstance(obj, CellFunctionSizet):\n return mplot_cellfunction(obj)\n elif isinstance(obj, CellFunctionDouble):\n return mplot_cellfunction(obj)\n elif isinstance(obj, CellFunctionInt):\n return mplot_cellfunction(obj)\n elif isinstance(obj, Mesh):\n if (obj.geometry().dim() != 2):\n raise AttributeError('Mesh must be 2D')\n return plt.triplot(mesh2triang(obj), color='#808080')\n\n raise AttributeError('Failed to plot %s'%type(obj))\n# end of commands for plotting", "_____no_output_____" ] ], [ [ "# Annulus \nThis is the field in an annulus. 
We specify boundary conditions and solve the problem.", "_____no_output_____" ] ], [ [ "r1 = 1 # inner circle radius\nr2 = 10 # outer circle radius\n\n# shapes of inner/outer boundaries are circles\nc1 = Circle(Point(0.0, 0.0), r1)\nc2 = Circle(Point(0.0, 0.0), r2)\n\ndomain = c2 - c1 # solve between circles\nres = 20\nmesh = generate_mesh(domain, res)\n\nclass outer_boundary(SubDomain):\n\tdef inside(self, x, on_boundary):\n\t\ttol = 1e-2\n\t\treturn on_boundary and (abs(sqrt(x[0]*x[0] + x[1]*x[1])) - r2) < tol\n\nclass inner_boundary(SubDomain):\n\tdef inside(self, x, on_boundary):\n\t\ttol = 1e-2\n\t\treturn on_boundary and (abs(sqrt(x[0]*x[0] + x[1]*x[1])) - r1) < tol\n\nouterradius = outer_boundary()\ninnerradius = inner_boundary()\n\nboundaries = FacetFunction(\"size_t\", mesh)\n\nboundaries.set_all(0)\nouterradius.mark(boundaries,2)\ninnerradius.mark(boundaries,1)\n\nV = FunctionSpace(mesh,'Lagrange',1)\n\nn = Constant(10.0) \n\nbcs = [DirichletBC(V, 0, boundaries, 2),\n\t DirichletBC(V, n, boundaries, 1)]\n#\t DirichletBC(V, nx, boundaries, 1)]\n\nu = TrialFunction(V)\n\nv = TestFunction(V)\nf = Constant(0.0)\na = inner(nabla_grad(u), nabla_grad(v))*dx\nL = f*v*dx\n\nu = Function(V)\nsolve(a == L, u, bcs)\n", "_____no_output_____" ] ], [ [ "# Plotting with matplotlib\nNow the usual \"plot\" commands will work for plotting the mesh and the function.", "_____no_output_____" ] ], [ [ "plot(mesh) # usual Fenics command, will use matplotlib", "_____no_output_____" ], [ "plot(u) # usual Fenics command, will use matplotlib", "_____no_output_____" ] ], [ [ "If you want to do usual \"matplotlib\" stuff then you still need \"plt.\" prefix on commands.", "_____no_output_____" ] ], [ [ "plt.figure()\nplt.subplot(1,2,1)\nplot(mesh)\nplt.xlabel('x')\nplt.ylabel('y')\nplt.subplot(1,2,2)\nplot(u) \nplt.title('annulus solution')", "_____no_output_____" ] ], [ [ "# Plotting along a line\nIt turns out the the solution \"u\" is a function that can be evaluated at a point. So in the next cell we loop through a line and make a vector of points for plotting. You just need to give it coordinates $u(x,y)$.", "_____no_output_____" ] ], [ [ "y = np.linspace(r1,r2*0.99,100)\nuu = []\nnp.array(uu)\nfor i in range(len(y)):\n yy = y[i]\n uu.append(u(0.0,yy)) #evaluate u along y axis\nplt.figure()\nplt.plot(y,uu)\nplt.grid(True)\nplt.xlabel('y')\nplt.ylabel('V')", "_____no_output_____" ], [ "u", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
d01f69beda578642dfbf5071389bb928ec455970
64,179
ipynb
Jupyter Notebook
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
631145b7dee6c57e6288249cdc4d5d14daa3b789
[ "MIT" ]
null
null
null
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
631145b7dee6c57e6288249cdc4d5d14daa3b789
[ "MIT" ]
null
null
null
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
631145b7dee6c57e6288249cdc4d5d14daa3b789
[ "MIT" ]
null
null
null
222.072664
39,354
0.75693
[ [ [ "# Handwritten Digits Recognition 02 - TensorFlow\n\nFrom the table below, we see that MNIST database is way larger than scikit-learn database, which we modelled in the previous notebook. Both number of samples and size of each sample are significantly higher. The good new is that, with TensorFlow and Keras, we can build neural networks, which are powerful enough to handle MNIST database! In this notebook, we are going to use Convolutional Neural Network (CNN) to perform image recognition.\n\n| | Scikit-learn database | MNIST database |\n|-----------|-----------------------|----------------|\n| Samples | 1797 | 70,000 |\n| Dimensions | 64 (8x8) | 784 (28x28) |\n\n1. More information about Scikit-learn Database: https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html\n2. More information about MNIST Database: https://en.wikipedia.org/wiki/MNIST_database", "_____no_output_____" ], [ "## Loading MNIST database\n\nWe are going to load MNIST database using utilities provided by TensorFlow. When importing TensorFlow, I always first check if it is using the GPU. ", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nfrom tensorflow import keras\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nprint(\"TensorFlow Version\", tf.__version__)\n\nif tf.test.is_gpu_available:\n print(\"Device:\" ,tf.test.gpu_device_name())", "TensorFlow Version 2.1.0\nDevice: /device:GPU:0\n" ] ], [ [ "Now, load the MNIST database using TensorFlow. From the output, we can see that the images are 28x28. The database contains 60,000 training and 10,000 testing images. There is no missing entries.", "_____no_output_____" ] ], [ [ "(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()\n\nprint(X_train.shape, y_train.shape)\nprint(X_test.shape, y_test.shape)", "(60000, 28, 28) (60000,)\n(10000, 28, 28) (10000,)\n" ] ], [ [ "Before we get our hands dirty with all the hardwork, let's just take a moment and look at some digits in the dataset. The digits displayed are the first eight digits in the set. We can see that the image quality is quite high, significantly better than the ones in scikit-learn digits set.", "_____no_output_____" ] ], [ [ "fig, axes = plt.subplots(2, 4)\nfor i, ax in zip(range(8), axes.flatten()):\n ax.imshow(X_train[i], cmap=plt.cm.gray_r, interpolation='nearest')\n ax.set_title(\"Number %d\" % y_train[i])\n ax.set_axis_off()\n\nfig.suptitle(\"Image of Digits in MNIST Database\")\nplt.show()", "_____no_output_____" ] ], [ [ "## Training a convolutional neural network with TensorFlow\n\nEach pixel in the images is stored as integers ranging from 0 to 255. CNN requires us to normalize the numbers to be between 0 and 1. We also increased a dimension so that the images can be fed into the CNN. Also, convert the labels (*y_train, y_test*) to one-hot encoding since we are categorizing images.", "_____no_output_____" ] ], [ [ "# Normalize and flatten the images\nx_train = X_train.reshape((60000, 28, 28, 1)).astype('float32') / 255\nx_test = X_test.reshape((10000, 28, 28, 1)).astype('float32') / 255", "_____no_output_____" ], [ "# Convert to one-hot encoding\nfrom keras.utils import np_utils\ny_train = np_utils.to_categorical(y_train)\ny_test = np_utils.to_categorical(y_test)", "Using TensorFlow backend.\n" ] ], [ [ "This is the structure of the convolutional neural network. We have two convolution layers to extract features, along with two pooling layers to reduce the dimension of the features. 
The dropout layer disgards 20% of the data to prevent overfitting. The multi-dimensional data is then flattened in to vectors. The two dence layers with 128 neurons are trained to do the classification. Lastly, the dense layer with 10 neurons output the results.", "_____no_output_____" ] ], [ [ "model = keras.Sequential([\n \n keras.layers.Conv2D(32, (5,5), activation = 'relu'),\n keras.layers.MaxPool2D(pool_size = (2,2)),\n \n keras.layers.Conv2D(32, (5,5), activation = 'relu'),\n keras.layers.MaxPool2D(pool_size = (2,2)),\n \n keras.layers.Dropout(rate = 0.2),\n keras.layers.Flatten(),\n \n keras.layers.Dense(units = 128, activation = 'relu'),\n keras.layers.Dense(units = 128, activation = 'relu'),\n \n keras.layers.Dense(units = 10, activation = 'softmax')\n])\n\nmodel.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n\nmodel.fit(x_train, y_train, epochs=10)", "Train on 60000 samples\nEpoch 1/10\n60000/60000 [==============================] - 12s 199us/sample - loss: 0.1625 - accuracy: 0.9496\nEpoch 2/10\n60000/60000 [==============================] - 7s 120us/sample - loss: 0.0563 - accuracy: 0.9822\nEpoch 3/10\n60000/60000 [==============================] - 7s 117us/sample - loss: 0.0423 - accuracy: 0.9865\nEpoch 4/10\n60000/60000 [==============================] - 7s 121us/sample - loss: 0.0351 - accuracy: 0.9888\nEpoch 5/10\n60000/60000 [==============================] - 7s 123us/sample - loss: 0.0290 - accuracy: 0.9908\nEpoch 6/10\n60000/60000 [==============================] - 7s 121us/sample - loss: 0.0255 - accuracy: 0.9921\nEpoch 7/10\n60000/60000 [==============================] - 7s 121us/sample - loss: 0.0218 - accuracy: 0.9929\nEpoch 8/10\n60000/60000 [==============================] - 7s 122us/sample - loss: 0.0199 - accuracy: 0.9937\nEpoch 9/10\n60000/60000 [==============================] - 7s 124us/sample - loss: 0.0182 - accuracy: 0.9942\nEpoch 10/10\n60000/60000 [==============================] - 7s 122us/sample - loss: 0.0162 - accuracy: 0.9946\n" ], [ "# Test the accuracy of the model on the testing set\ntest_loss, test_acc = model.evaluate(x_test, y_test, verbose = 2)\n\nprint()\nprint('Test accuracy:', test_acc)", "10000/10000 - 1s - loss: 0.0290 - accuracy: 0.9921\n\nTest accuracy: 0.9921\n" ] ], [ [ "The accuracy of the CNN is 99.46% and its performance on the testing set is 99.21%. No overfitting. We have a robust model!", "_____no_output_____" ], [ "## Saving the trained model\n\nBelow is the summary of the model. It is amazing that we have trained 109,930 parameters! 
Now, save this model so we don't have to train it again in the future.", "_____no_output_____" ] ], [ [ "# Show the model architecture\nmodel.summary()", "Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d (Conv2D) multiple 832 \n_________________________________________________________________\nmax_pooling2d (MaxPooling2D) multiple 0 \n_________________________________________________________________\nconv2d_1 (Conv2D) multiple 25632 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 multiple 0 \n_________________________________________________________________\ndropout (Dropout) multiple 0 \n_________________________________________________________________\nflatten (Flatten) multiple 0 \n_________________________________________________________________\ndense (Dense) multiple 65664 \n_________________________________________________________________\ndense_1 (Dense) multiple 16512 \n_________________________________________________________________\ndense_2 (Dense) multiple 1290 \n=================================================================\nTotal params: 109,930\nTrainable params: 109,930\nNon-trainable params: 0\n_________________________________________________________________\n" ] ], [ [ "Just like in the previous notebook, we can save this model as well.", "_____no_output_____" ] ], [ [ "model.save(\"CNN_model.h5\")", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d01fb33a077706b84ce5e40b190916400e3af244
39,037
ipynb
Jupyter Notebook
uci/uci.ipynb
DPautoGAN/DPAutoGAN
40b72fd59cb7cbca7b544ada30c9e78731465692
[ "MIT" ]
13
2019-12-19T14:32:06.000Z
2022-03-30T15:47:35.000Z
uci/uci.ipynb
DPautoGAN/DPAutoGAN
40b72fd59cb7cbca7b544ada30c9e78731465692
[ "MIT" ]
null
null
null
uci/uci.ipynb
DPautoGAN/DPAutoGAN
40b72fd59cb7cbca7b544ada30c9e78731465692
[ "MIT" ]
9
2020-01-21T06:30:06.000Z
2022-01-07T20:06:11.000Z
36.011993
218
0.477496
[ [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn.ensemble import RandomForestRegressor, RandomForestClassifier\nfrom sklearn.metrics import mean_squared_error, accuracy_score, f1_score, r2_score, explained_variance_score, roc_auc_score\nfrom sklearn.preprocessing import MinMaxScaler, OneHotEncoder, LabelBinarizer\nfrom sklearn.neural_network import MLPClassifier, MLPRegressor\nfrom sklearn.linear_model import Lasso\n\nimport torch\nfrom torch import nn\nimport torch.nn.functional as F\n\nfrom dp_wgan import Generator, Discriminator\nfrom dp_autoencoder import Autoencoder\nfrom evaluation import *\nimport dp_optimizer, sampling, analysis, evaluation\n\ntorch.manual_seed(0)\nnp.random.seed(0)", "_____no_output_____" ], [ "names = ['age', 'workclass', 'fnlwgt', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'salary']\ntrain = pd.read_csv('adult.data', names=names)\ntest = pd.read_csv('adult.test', names=names)\n\ndf = pd.concat([train, test])\n\ndf", "_____no_output_____" ], [ "class Processor:\n def __init__(self, datatypes):\n self.datatypes = datatypes\n \n def fit(self, matrix):\n preprocessors, cutoffs = [], []\n for i, (column, datatype) in enumerate(self.datatypes):\n preprocessed_col = matrix[:,i].reshape(-1, 1)\n\n if 'categorical' in datatype:\n preprocessor = LabelBinarizer()\n else:\n preprocessor = MinMaxScaler()\n\n preprocessed_col = preprocessor.fit_transform(preprocessed_col)\n cutoffs.append(preprocessed_col.shape[1])\n preprocessors.append(preprocessor)\n \n self.cutoffs = cutoffs\n self.preprocessors = preprocessors\n \n def transform(self, matrix):\n preprocessed_cols = []\n \n for i, (column, datatype) in enumerate(self.datatypes):\n preprocessed_col = matrix[:,i].reshape(-1, 1)\n preprocessed_col = self.preprocessors[i].transform(preprocessed_col)\n preprocessed_cols.append(preprocessed_col)\n\n return np.concatenate(preprocessed_cols, axis=1)\n\n \n def fit_transform(self, matrix):\n self.fit(matrix)\n return self.transform(matrix)\n \n def inverse_transform(self, matrix):\n postprocessed_cols = []\n\n j = 0\n for i, (column, datatype) in enumerate(self.datatypes):\n postprocessed_col = self.preprocessors[i].inverse_transform(matrix[:,j:j+self.cutoffs[i]])\n\n if 'categorical' in datatype:\n postprocessed_col = postprocessed_col.reshape(-1, 1)\n else:\n if 'positive' in datatype:\n postprocessed_col = postprocessed_col.clip(min=0)\n\n if 'int' in datatype:\n postprocessed_col = postprocessed_col.round()\n\n postprocessed_cols.append(postprocessed_col)\n \n j += self.cutoffs[i]\n \n return np.concatenate(postprocessed_cols, axis=1)\n\n\ndatatypes = [\n ('age', 'positive int'),\n ('workclass', 'categorical'),\n ('education-num', 'categorical'),\n ('marital-status', 'categorical'),\n ('occupation', 'categorical'),\n ('relationship', 'categorical'),\n ('race', 'categorical'),\n ('sex', 'categorical binary'),\n ('capital-gain', 'positive float'),\n ('capital-loss', 'positive float'),\n ('hours-per-week', 'positive int'),\n ('native-country', 'categorical'),\n ('salary', 'categorical binary'),\n]", "_____no_output_____" ], [ "np.random.seed(0)\n\nprocessor = Processor(datatypes)\n\nrelevant_df = df.drop(columns=['education', 'fnlwgt'])\nfor column, datatype in datatypes:\n if 'categorical' in datatype:\n relevant_df[column] = relevant_df[column].astype('category').cat.codes\n \ntrain_df = 
relevant_df.head(32562)\n\nX_real = torch.tensor(relevant_df.values.astype('float32'))\nX_encoded = torch.tensor(processor.fit_transform(X_real).astype('float32'))\n\ntrain_cutoff = 32562\n\nX_train_real = X_real[:train_cutoff]\nX_test_real = X_real[:train_cutoff]\n\nX_train_encoded = X_encoded[:train_cutoff]\nX_test_encoded = X_encoded[train_cutoff:]\n\nX_encoded.shape\n\nprint(X_train_encoded)\nprint(X_test_encoded)", "tensor([[0.3014, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n [0.4521, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n [0.2877, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n ...,\n [0.0685, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n [0.4795, 0.0000, 0.0000, ..., 0.0000, 0.0000, 1.0000],\n [0.1096, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]])\ntensor([[0.2877, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n [0.1507, 0.0000, 0.0000, ..., 0.0000, 0.0000, 1.0000],\n [0.3699, 0.0000, 0.0000, ..., 0.0000, 0.0000, 1.0000],\n ...,\n [0.2877, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n [0.3699, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],\n [0.2466, 0.0000, 0.0000, ..., 0.0000, 0.0000, 1.0000]])\n" ], [ "ae_params = {\n 'b1': 0.9,\n 'b2': 0.999,\n 'binary': False,\n 'compress_dim': 15,\n 'delta': 1e-5,\n 'device': 'cuda',\n 'iterations': 20000,\n 'lr': 0.005,\n 'l2_penalty': 0.,\n 'l2_norm_clip': 0.012,\n 'minibatch_size': 64,\n 'microbatch_size': 1,\n 'noise_multiplier': 2.5,\n 'nonprivate': True,\n}", "_____no_output_____" ], [ "autoencoder = Autoencoder(\n example_dim=len(X_train_encoded[0]),\n compression_dim=ae_params['compress_dim'],\n binary=ae_params['binary'],\n device=ae_params['device'],\n)\n\ndecoder_optimizer = dp_optimizer.DPAdam(\n l2_norm_clip=ae_params['l2_norm_clip'],\n noise_multiplier=ae_params['noise_multiplier'],\n minibatch_size=ae_params['minibatch_size'],\n microbatch_size=ae_params['microbatch_size'],\n nonprivate=ae_params['nonprivate'],\n params=autoencoder.get_decoder().parameters(),\n lr=ae_params['lr'],\n betas=(ae_params['b1'], ae_params['b2']),\n weight_decay=ae_params['l2_penalty'],\n)\n\nencoder_optimizer = torch.optim.Adam(\n params=autoencoder.get_encoder().parameters(),\n lr=ae_params['lr'] * ae_params['microbatch_size'] / ae_params['minibatch_size'],\n betas=(ae_params['b1'], ae_params['b2']),\n weight_decay=ae_params['l2_penalty'],\n)\n\nweights, ds = [], []\nfor name, datatype in datatypes:\n if 'categorical' in datatype:\n num_values = len(np.unique(relevant_df[name]))\n if num_values == 2:\n weights.append(1.)\n ds.append((datatype, 1))\n else:\n for i in range(num_values):\n weights.append(1. 
/ num_values)\n ds.append((datatype, num_values))\n else:\n weights.append(1.)\n ds.append((datatype, 1))\nweights = torch.tensor(weights).to(ae_params['device'])\n\n#autoencoder_loss = (lambda input, target: torch.mul(weights, torch.pow(input-target, 2)).sum(dim=1).mean(dim=0))\n#autoencoder_loss = lambda input, target: torch.mul(weights, F.binary_cross_entropy(input, target, reduction='none')).sum(dim=1).mean(dim=0)\nautoencoder_loss = nn.BCELoss()\n#autoencoder_loss = nn.MSELoss()\n\nprint(autoencoder)\n\nprint('Achieves ({}, {})-DP'.format(\n analysis.epsilon(\n len(X_train_encoded),\n ae_params['minibatch_size'],\n ae_params['noise_multiplier'],\n ae_params['iterations'],\n ae_params['delta']\n ),\n ae_params['delta'],\n))\n\nminibatch_loader, microbatch_loader = sampling.get_data_loaders(\n minibatch_size=ae_params['minibatch_size'],\n microbatch_size=ae_params['microbatch_size'],\n iterations=ae_params['iterations'],\n nonprivate=ae_params['nonprivate'],\n)\n\ntrain_losses, validation_losses = [], []\n\nX_train_encoded = X_train_encoded.to(ae_params['device'])\nX_test_encoded = X_test_encoded.to(ae_params['device'])\n\nfor iteration, X_minibatch in enumerate(minibatch_loader(X_train_encoded)):\n \n encoder_optimizer.zero_grad()\n decoder_optimizer.zero_grad()\n \n for X_microbatch in microbatch_loader(X_minibatch):\n\n decoder_optimizer.zero_microbatch_grad()\n output = autoencoder(X_microbatch)\n loss = autoencoder_loss(output, X_microbatch)\n loss.backward()\n decoder_optimizer.microbatch_step()\n \n validation_loss = autoencoder_loss(autoencoder(X_test_encoded).detach(), X_test_encoded)\n \n encoder_optimizer.step()\n decoder_optimizer.step()\n\n train_losses.append(loss.item())\n validation_losses.append(validation_loss.item())\n\n if iteration % 1000 == 0:\n print ('[Iteration %d/%d] [Loss: %f] [Validation Loss: %f]' % (\n iteration, ae_params['iterations'], loss.item(), validation_loss.item())\n )\n\npd.DataFrame(data={'train': train_losses, 'validation': validation_losses}).plot()", "_____no_output_____" ], [ "with open('ae_eps_inf.dat', 'wb') as f:\n torch.save(autoencoder, f)", "_____no_output_____" ], [ "gan_params = {\n 'alpha': 0.99,\n 'binary': False,\n 'clip_value': 0.01,\n 'd_updates': 15,\n 'delta': 1e-5,\n 'device': 'cuda',\n 'iterations': 15000,\n 'latent_dim': 64,\n 'lr': 0.005,\n 'l2_penalty': 0.,\n 'l2_norm_clip': 0.022,\n 'minibatch_size': 128,\n 'microbatch_size': 1,\n 'noise_multiplier': 3.5,\n 'nonprivate': False,\n}", "_____no_output_____" ], [ "with open('ae_eps_inf.dat', 'rb') as f:\n autoencoder = torch.load(f)\ndecoder = autoencoder.get_decoder()\n \ngenerator = Generator(\n input_dim=gan_params['latent_dim'],\n output_dim=autoencoder.get_compression_dim(),\n binary=gan_params['binary'],\n device=gan_params['device'],\n)\n\ng_optimizer = torch.optim.RMSprop(\n params=generator.parameters(),\n lr=gan_params['lr'],\n alpha=gan_params['alpha'],\n weight_decay=gan_params['l2_penalty'],\n)\n\ndiscriminator = Discriminator(\n input_dim=len(X_train_encoded[0]),\n device=gan_params['device'],\n)\n\nd_optimizer = dp_optimizer.DPRMSprop(\n l2_norm_clip=gan_params['l2_norm_clip'],\n noise_multiplier=gan_params['noise_multiplier'],\n minibatch_size=gan_params['minibatch_size'],\n microbatch_size=gan_params['microbatch_size'],\n nonprivate=gan_params['nonprivate'],\n params=discriminator.parameters(),\n lr=gan_params['lr'],\n alpha=gan_params['alpha'],\n weight_decay=gan_params['l2_penalty'],\n)\n\nprint(generator)\nprint(discriminator)\n\nprint('Achieves ({}, 
{})-DP'.format(\n analysis.epsilon(\n len(X_train_encoded),\n gan_params['minibatch_size'],\n gan_params['noise_multiplier'],\n gan_params['iterations'],\n gan_params['delta']\n ),\n gan_params['delta'],\n))\n\nminibatch_loader, microbatch_loader = sampling.get_data_loaders(\n minibatch_size=gan_params['minibatch_size'],\n microbatch_size=gan_params['microbatch_size'],\n iterations=gan_params['iterations'],\n nonprivate=gan_params['nonprivate'],\n)\n\nX_train_encoded = X_train_encoded.to(gan_params['device'])\nX_test_encoded = X_test_encoded.to(ae_params['device'])\n\nfor iteration, X_minibatch in enumerate(minibatch_loader(X_train_encoded)):\n \n d_optimizer.zero_grad()\n \n for real in microbatch_loader(X_minibatch):\n z = torch.randn(real.size(0), gan_params['latent_dim'], device=gan_params['device'])\n fake = decoder(generator(z)).detach()\n \n d_optimizer.zero_microbatch_grad()\n d_loss = -torch.mean(discriminator(real)) + torch.mean(discriminator(fake))\n d_loss.backward()\n d_optimizer.microbatch_step()\n \n d_optimizer.step()\n\n for parameter in discriminator.parameters():\n parameter.data.clamp_(-gan_params['clip_value'], gan_params['clip_value'])\n\n if iteration % gan_params['d_updates'] == 0:\n z = torch.randn(X_minibatch.size(0), gan_params['latent_dim'], device=gan_params['device'])\n fake = decoder(generator(z))\n\n g_optimizer.zero_grad()\n g_loss = -torch.mean(discriminator(fake))\n g_loss.backward()\n g_optimizer.step()\n\n if iteration % 1000 == 0:\n print('[Iteration %d/%d] [D loss: %f] [G loss: %f]' % (\n iteration, gan_params['iterations'], d_loss.item(), g_loss.item()\n ))\n \n z = torch.randn(len(X_train_real), gan_params['latent_dim'], device=gan_params['device'])\n X_synthetic_encoded = decoder(generator(z)).cpu().detach().numpy()\n X_synthetic_real = processor.inverse_transform(X_synthetic_encoded)\n X_synthetic_encoded = processor.transform(X_synthetic_real)\n synthetic_data = pd.DataFrame(X_synthetic_real, columns=relevant_df.columns)\n\n i = 0\n columns = relevant_df.columns\n relevant_df[columns[i]].hist()\n synthetic_data[columns[i]].hist()\n plt.show()\n \n #pca_evaluation(pd.DataFrame(X_train_real), pd.DataFrame(X_synthetic_real))\n #plt.show()\n", "_____no_output_____" ], [ "with open('gen_eps_inf.dat', 'wb') as f:\n torch.save(generator, f)", "_____no_output_____" ], [ "X_train_encoded = X_train_encoded.cpu()\nX_test_encoded = X_test_encoded.cpu()\n\nclf = RandomForestClassifier(n_estimators=100)\nclf.fit(X_train_encoded[:,:-1], X_train_encoded[:,-1])\nprediction = clf.predict(X_test_encoded[:,:-1])\n\nprint(accuracy_score(X_test_encoded[:,-1], prediction))\nprint(f1_score(X_test_encoded[:,-1], prediction))", "0.8449017199017199\n0.650905571685331\n" ], [ "with open('gen_eps_inf.dat', 'rb') as f:\n generator = torch.load(f)\n \nwith open('ae_eps_inf.dat', 'rb') as f:\n autoencoder = torch.load(f)\ndecoder = autoencoder.get_decoder()\n\nz = torch.randn(len(X_train_real), gan_params['latent_dim'], device=gan_params['device'])\nX_synthetic_encoded = decoder(generator(z)).cpu().detach().numpy()\nX_synthetic_real = processor.inverse_transform(X_synthetic_encoded)\nX_synthetic_encoded = processor.transform(X_synthetic_real)\n\n#pd.DataFrame(X_encoded.numpy()).to_csv('real.csv')\npd.DataFrame(X_synthetic_encoded).to_csv('synthetic.csv')", "_____no_output_____" ], [ "with open('gen_eps_inf.dat', 'rb') as f:\n generator = torch.load(f)\n \nwith open('ae_eps_inf.dat', 'rb') as f:\n autoencoder = torch.load(f)\ndecoder = autoencoder.get_decoder()\n 
\nX_test_encoded = X_test_encoded.cpu()\n\nz = torch.randn(len(X_train_real), gan_params['latent_dim'], device=gan_params['device'])\nX_synthetic_encoded = decoder(generator(z)).cpu().detach().numpy()\n\nX_synthetic_real = processor.inverse_transform(X_synthetic_encoded)\nX_synthetic_encoded = processor.transform(X_synthetic_real)\n\nclf = RandomForestClassifier(n_estimators=100)\nclf.fit(X_synthetic_encoded[:,:-1], X_synthetic_encoded[:,-1])\nprediction = clf.predict(X_test_encoded[:,:-1])\n\nprint(accuracy_score(X_test_encoded[:,-1], prediction))\nprint(f1_score(X_test_encoded[:,-1], prediction))", "0.7896191646191646\n0.28481937774065563\n" ], [ "with open('gen_eps_inf.dat', 'rb') as f:\n generator = torch.load(f)\n \nwith open('ae_eps_inf.dat', 'rb') as f:\n autoencoder = torch.load(f)\ndecoder = autoencoder.get_decoder()\n\nz = torch.randn(len(X_train_real), gan_params['latent_dim'], device=gan_params['device'])\nX_synthetic_encoded = decoder(generator(z)).cpu().detach().numpy()\nX_synthetic_real = processor.inverse_transform(X_synthetic_encoded)\nsynthetic_data = pd.DataFrame(X_synthetic_real, columns=relevant_df.columns)\n\ncolumn = 'age'\nfig = plt.figure()\nax = fig.add_subplot()\nax.hist(train_df[column].values,)# bins=)\nax.hist(synthetic_data[column].values, color='red', alpha=0.35,)# bins10)", "_____no_output_____" ], [ "with open('gen_eps_inf.dat', 'rb') as f:\n generator = torch.load(f)\n \nwith open('ae_eps_inf.dat', 'rb') as f:\n autoencoder = torch.load(f)\ndecoder = autoencoder.get_decoder()\n\nz = torch.randn(len(X_train_real), gan_params['latent_dim'], device=gan_params['device'])\nX_synthetic_encoded = decoder(generator(z)).cpu().detach().numpy()\nX_synthetic_real = processor.inverse_transform(X_synthetic_encoded)\nsynthetic_data = pd.DataFrame(X_synthetic_real, columns=relevant_df.columns)\n\nregression_real = []\nclassification_real = []\nregression_synthetic = []\nclassification_synthetic = []\ntarget_real = []\ntarget_synthetic = []\n\nfor column, datatype in datatypes:\n p = Processor([datatype for datatype in datatypes if datatype[0] != column])\n \n train_cutoff = 32562\n \n p.fit(relevant_df.drop(columns=[column]).values)\n\n X_enc = p.transform(relevant_df.drop(columns=[column]).values)\n y_enc = relevant_df[column]\n\n X_enc_train = X_enc[:train_cutoff]\n X_enc_test = X_enc[train_cutoff:]\n \n y_enc_train = y_enc[:train_cutoff]\n y_enc_test = y_enc[train_cutoff:]\n\n X_enc_syn = p.transform(synthetic_data.drop(columns=[column]).values)\n y_enc_syn = synthetic_data[column]\n \n if 'binary' in datatype:\n model = lambda: RandomForestClassifier(n_estimators=10)\n score = lambda true, pred: f1_score(true, pred)\n elif 'categorical' in datatype:\n model = lambda: RandomForestClassifier(n_estimators=10)\n score = lambda true, pred: f1_score(true, pred, average='micro')\n else:\n model = lambda: Lasso()\n explained_var = lambda true, pred: explained_variance_score(true, pred)\n score = r2_score\n \n real, synthetic = model(), model()\n \n real.fit(X_enc_train, y_enc_train)\n synthetic.fit(X_enc_syn, y_enc_syn)\n \n real_preds = real.predict(X_enc_test)\n synthetic_preds = synthetic.predict(X_enc_test)\n \n print(column, datatype)\n if column == 'salary':\n target_real.append(score(y_enc_test, real_preds))\n target_synthetic.append(score(y_enc_test, synthetic_preds))\n elif 'categorical' in datatype:\n classification_real.append(score(y_enc_test, real_preds))\n classification_synthetic.append(score(y_enc_test, synthetic_preds))\n else:\n 
regression_real.append(score(y_enc_test, real_preds))\n regression_synthetic.append(score(y_enc_test, synthetic_preds))\n\n print(score.__name__)\n print('Real: {}'.format(score(y_enc_test, real_preds)))\n print('Synthetic: {}'.format(score(y_enc_test, synthetic_preds)))\n print('')\n \nplt.scatter(classification_real, classification_synthetic, c='blue')\nplt.scatter(regression_real, regression_synthetic, c='red')\nplt.scatter(target_real, target_synthetic, c='green')\nplt.xlabel('Real Data')\nplt.ylabel('Synthetic Data')\nplt.axis((0., 1., 0., 1.))\nplt.plot((0, 1), (0, 1))\nplt.show()", "age positive int\nr2_score\nReal: 0.2818276514546748\nSynthetic: 0.2934080317762404\n\nworkclass categorical\n<lambda>\nReal: 0.7126535626535625\nSynthetic: 0.5954545454545455\n\neducation-num categorical\n<lambda>\nReal: 0.3444103194103194\nSynthetic: 0.20356265356265357\n\nmarital-status categorical\n<lambda>\nReal: 0.8179975429975429\nSynthetic: 0.6943488943488944\n\noccupation categorical\n<lambda>\nReal: 0.31713759213759213\nSynthetic: 0.10988943488943491\n\nrelationship categorical\n<lambda>\nReal: 0.7573095823095823\nSynthetic: 0.6036240786240786\n\nrace categorical\n<lambda>\nReal: 0.8418918918918918\nSynthetic: 0.6242014742014742\n\nsex categorical binary\n<lambda>\nReal: 0.8650217706821479\nSynthetic: 0.7528768254295888\n\ncapital-gain positive float\nr2_score\nReal: 0.0842600819364091\nSynthetic: -0.45992054506116253\n\ncapital-loss positive float\nr2_score\nReal: 0.025128599655994677\nSynthetic: -0.0388926609428184\n\nhours-per-week positive int\nr2_score\nReal: 0.03869403234808988\nSynthetic: 0.05981951744890357\n\nnative-country categorical\n<lambda>\nReal: 0.8848280098280098\nSynthetic: 0.8514742014742015\n\nsalary categorical binary\n<lambda>\nReal: 0.6341257560838373\nSynthetic: 0.23112078346028292\n\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d01fbe9e1f4f84ee5abd12bb72f3b79d00f39853
18,142
ipynb
Jupyter Notebook
average_radiosonde.ipynb
franzihe/Haukeliseter_16_17
91a50a34ce6084dfaae455df8fea336ace877873
[ "MIT" ]
null
null
null
average_radiosonde.ipynb
franzihe/Haukeliseter_16_17
91a50a34ce6084dfaae455df8fea336ace877873
[ "MIT" ]
null
null
null
average_radiosonde.ipynb
franzihe/Haukeliseter_16_17
91a50a34ce6084dfaae455df8fea336ace877873
[ "MIT" ]
null
null
null
44.465686
257
0.507386
[ [ [ "import pandas as pd\nimport numpy as np\nnp.warnings.filterwarnings('ignore')\nimport xarray as xr\nfrom metpy.units import units\nfrom metpy.plots import SkewT\nimport metpy.calc as mpcalc\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport sys\nsys.path.append('/home/franzihe/Documents/Python/Thesis/')\nimport createFolder as cF\n", "_____no_output_____" ], [ "# plot cosmetics\nsns.set_context('paper', font_scale=1.6)\n\nsns.set(font = 'Serif', font_scale = 1.6, )\nsns.set_style('ticks', \n {'font.family':'serif', #'font.serif':'Helvetica'\n 'grid.linestyle': '--',\n 'axes.grid': True,\n }, \n )\n# Set the palette to the \"pastel\" default palette:\nsns.set_palette(\"colorblind\")", "_____no_output_____" ], [ "savefig = 1\nif savefig == 1:\n figdir = '/home/franzihe/Documents/Figures/Weathermast_MEPS_Retrieval/Haukeliseter/MEPS_CTRL_ICET/' \n cF.createFolder('%s/' %figdir)\n form = 'png'", "_____no_output_____" ], [ "\n\nhour = '12'\n\nm = ['12','01', '02']\nh = ['00', '12']\nmeps_run = [ 'CTRL', 'ICE-T', ]", "_____no_output_____" ], [ "# Select col_names to be importet for the sounding plot\ncol_names = ['PRES', 'HGHT', 'TEMP', 'DWPT', 'MIXR', 'DRCT', 'SKNT', 'THTA']\nheader = np.arange(0,6)", "_____no_output_____" ], [ "def concat_profile_all_days(df, Date, observation, _pres, _temp, _dwpt, _xwind, _ywind):\n _lev = np.arange(1000,-25, -25)\n _averaged = pd.DataFrame()\n\n for i in _lev:\n filter1 = np.logical_and(df.PRES > i-25,\n df.PRES <= i+25 ) \n \n _averaged = pd.concat([_averaged, df.where(filter1).mean()], axis = 1)\n _averaged = _averaged.rename(columns = {0:i})\n\n _averaged = _averaged.T\n \n # concat the pressure, height, temperature, dewpoint, mixing ration, wind direction, wind speed, \n # potential temperature of all dates \n _pres = pd.concat([_pres, _averaged.PRES], axis = 1).rename(columns = {'PRES':Date})\n# _hght = pd.concat([_hght, _averaged.HGHT], axis = 1).rename(columns = {'HGHT':Date})\n _temp = pd.concat([_temp, _averaged.TEMP], axis = 1).rename(columns = {'TEMP':Date})\n _dwpt = pd.concat([_dwpt, _averaged.DWPT], axis = 1).rename(columns = {'DWPT':Date})\n # _mixr = pd.concat([_mixr, _averaged.MIXR], axis = 1).rename(columns = {'MIXR':Date})\n # _drct = pd.concat([_drct, _averaged.DRCT], axis = 1).rename(columns = {'DRCT':Date})\n # _sknt = pd.concat([_sknt, _averaged.SKNT], axis = 1).rename(columns = {'SKNT':Date})\n # _thta = pd.concat([_thta, _averaged.THTA], axis = 1).rename(columns = {'THTA':Date})\n _xwind = pd.concat([_xwind, _averaged.x_wind], axis = 1).rename(columns = {'x_wind':Date})\n _ywind = pd.concat([_ywind, _averaged.y_wind], axis = 1).rename(columns = {'y_wind':Date})\n \n return(_pres, _temp, _dwpt, _xwind, _ywind)", "_____no_output_____" ], [ "p = dict()\nT = dict()\nTd = dict()\nu = dict()\nv = dict()\np_meps = dict()\nT_meps = dict()\nTd_meps = dict()\nu_meps = dict()\nv_meps = dict()", "_____no_output_____" ], [ "for hour in h:\n \n _temp = pd.DataFrame()\n _pres = pd.DataFrame()\n _hght = pd.DataFrame()\n _temp = pd.DataFrame()\n _dwpt = pd.DataFrame()\n _mixr = pd.DataFrame()\n _drct = pd.DataFrame()\n _sknt = pd.DataFrame()\n _thta = pd.DataFrame()\n _xwind = pd.DataFrame()\n _ywind = pd.DataFrame()\n\n _pres_meps = pd.DataFrame()\n _temp_meps = pd.DataFrame()\n _dwpt_meps = pd.DataFrame()\n _xwind_meps = pd.DataFrame()\n _ywind_meps = pd.DataFrame()\n for month in m:\n if month == '12':\n t = np.array([8, 9, 10, 12, 15, 20, 21, 22, 23, 24, 25, 26, 29, 31])\n if month == '01':\n t = np.array([2, 3, 5, 6, 8, 9, 
10, 11, 12, 28])\n if month == '02':\n t = np.array([2, 3, 4]) \n if month == '12':\n year = '2016'\n if month == '01' or month == '02':\n year = '2017'\n for day in t:\n if day < 10:\n day = '0%s' %day\n Date = year+month+str(day)\n\n stn = '01415' #1415 is ID for Stavanger\n Sounding_filename = '/home/franzihe/Documents/Data/Sounding/{}/{}{}{}_{}.txt'.format(stn,year,month,str(day),hour)\n\n\n df = pd.read_table(Sounding_filename, delim_whitespace=True, skiprows = header, \\\n usecols=[0, 1, 2, 3, 5, 6, 7, 8], names=col_names)\n\n ### the footer changes depending on how high the sound measured --> lines change from Radiosonde to Radiosonde\n # 1. find idx of first value matching the name 'Station'\n lines = df.index[df['PRES'].str.match('Station')]\n if len(lines) == 0:\n print('no file found: %s%s%s_%s' %(year,month,day,hour))\n # continue\n else:\n # read in the Sounding files\n idx = lines[0]\n footer = np.arange((idx+header.size),220)\n skiprow = np.append(header,footer)\n df = pd.read_table(Sounding_filename, delim_whitespace=True, skiprows = skiprow, \\\n usecols=[0, 1, 2, 3, 5, 6, 7, 8], names=col_names)\n df['x_wind'], df['y_wind'] = mpcalc.wind_components(df.SKNT.values *units.knots, df.DRCT.values*units.degrees)\n\n\n\n # _lev = np.arange(1000,-25, -25)\n # _averaged = pd.DataFrame()\n\n # for i in _lev:\n # filter1 = np.logical_and(df.PRES > i-25,\n # df.PRES <= i+25 ) \n # \n # _averaged = pd.concat([_averaged, df.where(filter1).mean()], axis = 1)\n # _averaged = _averaged.rename(columns = {0:i})\n #\n # _averaged = _averaged.T\n\n # concat the pressure, height, temperature, dewpoint, mixing ration, wind direction, wind speed, \n # potential temperature of all dates \n # _pres = pd.concat([_pres, _averaged.PRES], axis = 1).rename(columns = {'PRES':Date})\n # _hght = pd.concat([_hght, _averaged.HGHT], axis = 1).rename(columns = {'HGHT':Date})\n # _temp = pd.concat([_temp, _averaged.TEMP], axis = 1).rename(columns = {'TEMP':Date})\n # _dwpt = pd.concat([_dwpt, _averaged.DWPT], axis = 1).rename(columns = {'DWPT':Date})\n # _mixr = pd.concat([_mixr, _averaged.MIXR], axis = 1).rename(columns = {'MIXR':Date})\n # _drct = pd.concat([_drct, _averaged.DRCT], axis = 1).rename(columns = {'DRCT':Date})\n # _sknt = pd.concat([_sknt, _averaged.SKNT], axis = 1).rename(columns = {'SKNT':Date})\n # _thta = pd.concat([_thta, _averaged.THTA], axis = 1).rename(columns = {'THTA':Date})\n # _xwind = pd.concat([_xwind, _averaged.x_wind], axis = 1).rename(columns = {'x_wind':Date})\n # _ywind = pd.concat([_ywind, _averaged.y_wind], axis = 1).rename(columns = {'y_wind':Date})\n _pres, _temp, _dwpt, _xwind, _ywind = concat_profile_all_days(df, Date, 'RS', _pres, _temp, _dwpt, _xwind, _ywind)\n\n\n\n # read in the MEPS runs\n # for meps in meps_run:\n meps = 'CTRL'\n stn = 'Stavanger'\n meps_dirnc = '/home/franzihe/Documents/Data/MEPS/%s/%s/%s_00.nc' %(stn,meps,Date)\n meps_f = xr.open_dataset(meps_dirnc, drop_variables ={'air_temperature_0m','liquid_water_content_of_surface_snow','rainfall_amount', 'snowfall_amount', 'graupelfall_amount', 'surface_air_pressure', 'surface_geopotential',\n 'precipitation_amount_acc', 'integral_of_snowfall_amount_wrt_time', 'integral_of_rainfall_amount_wrt_time',\n 'integral_of_graupelfall_amount_wrt_time', 'surface_snow_sublimation_amount_acc', 'air_temperature_2m','relative_humidity_2m',\n 'specific_humidity_2m', 'x_wind_10m', 'y_wind_10m', 'air_pressure_at_sea_level', \n 'atmosphere_cloud_condensed_water_content_ml', 'atmosphere_cloud_ice_content_ml', 
'atmosphere_cloud_snow_content_ml','atmosphere_cloud_rain_content_ml', 'atmosphere_cloud_graupel_content_ml',\n 'pressure_departure', 'layer_thickness', 'geop_layer_thickness'},\n ).reset_index(dims_or_levels = ['height0', 'height1', 'height3', 'height_above_msl', ], drop=True).sortby('hybrid', ascending = False)\n # pressuer into hPa\n meps_f['pressure_ml'] = meps_f.pressure_ml/100\n # air temperature has to be flipped, something was wrong when reading the data from Stavanger\n meps_f['air_temperature_ml'] = (('time', 'hybrid',),meps_f.air_temperature_ml.values[:,::-1] - 273.15)\n meps_f['specific_humidity_ml'] = (('time', 'hybrid',),meps_f.specific_humidity_ml.values[:,::-1])\n meps_f['x_wind_ml'] = (('time', 'hybrid',), meps_f.x_wind_ml.values[:,::-1])\n meps_f['y_wind_ml'] = (('time', 'hybrid',), meps_f.y_wind_ml.values[:,::-1])\n\n # calculate the dewpoint by first calculating the relative humidity from the specific humidity\n meps_f['relative_humidity'] = (('time', 'hybrid', ), mpcalc.relative_humidity_from_specific_humidity(meps_f.pressure_ml.values * units.hPa, \n meps_f.air_temperature_ml.values * units.degC, \n meps_f.specific_humidity_ml.values * units('kg/kg')))\n\n meps_f['DWPT'] = (('time', 'hybrid',), mpcalc.dewpoint_from_relative_humidity(meps_f.air_temperature_ml.values * units.degC, meps_f.relative_humidity))\n\n if hour == '12':\n meps_f = meps_f.isel(time = 11).to_dataframe()\n elif hour == '00':\n meps_f = meps_f.isel(time = 23).to_dataframe()\n\n meps_f = meps_f.rename(columns = {'x_wind_ml':'x_wind', 'y_wind_ml':'y_wind', 'pressure_ml':'PRES', 'air_temperature_ml':'TEMP'})\n\n _pres_meps, _temp_meps, _dwpt_meps, _xwind_meps, _ywind_meps = concat_profile_all_days(meps_f, Date, 'MEPS',\n _pres_meps, _temp_meps, _dwpt_meps, _xwind_meps, _ywind_meps)\n\n\n\n\n ## average pressure, height, temperature, dewpoint, mixing ration, wind direction, wind speed, \n # potential temperature over time to get seasonal mean and assign units.\n p[hour] = _pres.mean(axis = 1, skipna=True).values * units.hPa\n #z = _hght.mean(axis = 1, skipna=True).values * units.meter\n T[hour] = _temp.mean(axis = 1, skipna=True).values * units.degC\n Td[hour] = _dwpt.mean(axis = 1, skipna=True).values * units.degC\n #qv = _mixr.mean(axis = 1, skipna=True).values * units('g/kg')\n #WD = _drct.mean(axis = 1, skipna=True).values * units.degrees\n #WS = _sknt.mean(axis = 1, skipna=True).values * units.knots\n #th = _thta.mean(axis = 1, skipna=True).values * units.kelvin\n u[hour] = _xwind.mean(axis = 1, skipna = True)\n v[hour] = _ywind.mean(axis = 1, skipna = True)\n\n p_meps[hour] = _pres_meps.mean(axis = 1, skipna=True).values * units.hPa\n T_meps[hour] = _temp_meps.mean(axis = 1, skipna=True).values * units.degC\n Td_meps[hour] = _dwpt_meps.mean(axis = 1, skipna=True).values * units.degC\n u_meps[hour] = _xwind_meps.mean(axis = 1, skipna = True)\n v_meps[hour] = _ywind_meps.mean(axis = 1, skipna = True)\n\n #u, v = mpcalc.wind_components(WS, WD)", "no file found: 20161224_00\nno file found: 20170102_00\nno file found: 20170103_00\nno file found: 20170112_00\nno file found: 20170128_00\nno file found: 20170202_00\nno file found: 20170203_00\nno file found: 20161223_12\nno file found: 20161224_12\nno file found: 20170102_12\nno file found: 20170103_12\nno file found: 20170128_12\nno file found: 20170202_12\n" ], [ "def plt_skewT(skew, p, T, p_meps, T_meps, Td, Td_meps, u, v, profile_time):\n # Plot the data using normal plotting functions, in this case using\n # log scaling in Y, as dictated by the 
typical meteorological plot\n skew.plot(p, T, )\n skew.plot(p_meps, T_meps, )\n\n skew.plot(p, Td, )\n skew.plot(p_meps, Td_meps, )\n\n skew.plot_barbs(p, u, v)\n skew.ax.set_ylim(1000, 100)\n\n # Add the relevant special lines\n skew.plot_dry_adiabats()\n skew.plot_moist_adiabats()\n skew.plot_mixing_lines()\n\n # Good bounds for aspect ratio\n skew.ax.set_xlim(-30, 40)\n skew.ax.text(0.05, 1, 'Vertical profile mean - Stavanger: {} UTC'.format(profile_time), transform=skew.ax.transAxes,\n fontsize=14, verticalalignment='bottom',)# bbox='fancy')", "_____no_output_____" ], [ "fig = plt.figure(figsize=(18, 9))\n\n#plot skewT for 00UTC \nskew = SkewT(fig, rotation=45,subplot=121)\nplt_skewT(skew, p['00'], T['00'], p_meps['00'], T_meps['00'], Td['00'], Td_meps['00'], u['00'], v['00'], '00')\n\nskew = SkewT(fig, rotation=45,subplot=122)\nplt_skewT(skew, p['12'], T['12'], p_meps['12'], T_meps['12'], Td['12'], Td_meps['12'], u['12'], v['12'], '12')\n\nif savefig == 1:\n cF.createFolder('%s/' %(figdir))\n fig_name = 'winter_16_17_vertical_profile.'+form\n plt.savefig('%s/%s' %(figdir, fig_name), format = form, bbox_inches='tight', transparent=True)\n print('plot saved: %s/%s' %(figdir, fig_name))\n plt.close()", "plot saved: /home/franzihe/Documents/Figures/Weathermast_MEPS_Retrieval/Haukeliseter/MEPS_CTRL_ICET//winter_16_17_vertical_profile.png\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d01fd76698305d30500dfd5d5d4ceb03669694df
129,068
ipynb
Jupyter Notebook
Transparent Model Interpretability.ipynb
iamollas/Informatics-Cafe-XAI-IML-Tutorial
56e6320b65bf207b90308b5f21214600e7f65fb6
[ "MIT" ]
null
null
null
Transparent Model Interpretability.ipynb
iamollas/Informatics-Cafe-XAI-IML-Tutorial
56e6320b65bf207b90308b5f21214600e7f65fb6
[ "MIT" ]
null
null
null
Transparent Model Interpretability.ipynb
iamollas/Informatics-Cafe-XAI-IML-Tutorial
56e6320b65bf207b90308b5f21214600e7f65fb6
[ "MIT" ]
null
null
null
594.78341
82,660
0.946331
[ [ [ "# Lets play with a funny fake dataset. \nThis dataset contains few features and it has an dependent variable which says if we are going ever to graduate or not", "_____no_output_____" ], [ "Importing few libraries", "_____no_output_____" ] ], [ [ "from sklearn import datasets,model_selection\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\nimport numpy as np\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.linear_model import LogisticRegression\nfrom ipywidgets import interactive\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn import model_selection", "_____no_output_____" ] ], [ [ "Then, we will load our fake dataset, and we will split our dataset in two parts, one for training and one for testing", "_____no_output_____" ] ], [ [ "student = pd.read_csv('LionForests-Bot/students2.csv')\nfeature_names = list(student.columns)[:-1]\nclass_names=[\"Won't graduate\",'Will graduate (eventually)']\nX = student.iloc[:, 0:-1].values\ny = student.iloc[:, -1].values\nx_train, x_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.3,random_state=0)", "_____no_output_____" ], [ "fig, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(1, 5, figsize=(20,4), dpi=200)\nax1.hist(X[:,0:1], bins='auto')\nax1.set(xlabel='Years in school')\nax2.hist(X[:,1:2], bins='auto')\nax2.set(xlabel='# of courses completed')\nax3.hist(X[:,2:3], bins='auto')\nax3.set(xlabel='Attending class per week')\nax4.hist(X[:,3:4], bins='auto')\nax4.set(xlabel='Owns car')\nax5.hist(X[:,4:], bins='auto')\nax5.set(xlabel='# of roomates')\nplt.show()", "_____no_output_____" ] ], [ [ "We are also scaling our data in the range [0,1] in order later the interpretations to be comparable", "_____no_output_____" ] ], [ [ "scaler = MinMaxScaler()\nscaler.fit(x_train)\nx_train = scaler.transform(x_train)\nx_test = scaler.transform(x_test)", "_____no_output_____" ] ], [ [ "Now, we will train a linear model, called logistic regression with our dataset. 
And we will evaluate its performance", "_____no_output_____" ] ], [ [ "#lin_model = LogisticRegression(solver=\"newton-cg\",penalty='l2',max_iter=1000,C=100,random_state=0)\nlin_model = LogisticRegression(solver=\"liblinear\",penalty='l1',max_iter=1000,C=10,random_state=0)\nlin_model.fit(x_train, y_train)\npredicted_train = lin_model.predict(x_train)\npredicted_test = lin_model.predict(x_test)\npredicted_proba_test = lin_model.predict_proba(x_test)\nprint(\"Logistic Regression Model Performance:\")\nprint(\"Accuracy in Train Set\",accuracy_score(y_train, predicted_train))\nprint(\"Accuracy in Test Set\",accuracy_score(y_test, predicted_test))", "Logistic Regression Model Performance:\nAccuracy in Train Set 0.8414285714285714\nAccuracy in Test Set 0.85\n" ] ], [ [ "To globally interpret this model, we will plot the weights of each variable/feature", "_____no_output_____" ] ], [ [ "weights = lin_model.coef_\nmodel_weights = pd.DataFrame({ 'features': list(feature_names),'weights': list(weights[0])})\n#model_weights = model_weights.sort_values(by='weights', ascending=False) #Normal sort\nmodel_weights = model_weights.reindex(model_weights['weights'].abs().sort_values(ascending=False).index) #Sort by absolute value\nmodel_weights = model_weights[(model_weights[\"weights\"] != 0)] \nprint(\"Number of features:\",len(model_weights.values))\nplt.figure(num=None, figsize=(8, 6), dpi=100, facecolor='w', edgecolor='k')\nsns.barplot(x=\"weights\", y=\"features\", data=model_weights)\nplt.title(\"Intercept (Bias): \"+str(lin_model.intercept_[0]),loc='right')\nplt.xticks(rotation=90)\nplt.show()", "Number of features: 5\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d01fddbe6591231dd76fa146cd9a89e28f52e9cc
14,154
ipynb
Jupyter Notebook
fashion.ipynb
rajeevak40/Course_AWS_Certified_Machine_Learning
4debbf43d7a0a358395060cf2289393607554a1f
[ "MIT" ]
null
null
null
fashion.ipynb
rajeevak40/Course_AWS_Certified_Machine_Learning
4debbf43d7a0a358395060cf2289393607554a1f
[ "MIT" ]
null
null
null
fashion.ipynb
rajeevak40/Course_AWS_Certified_Machine_Learning
4debbf43d7a0a358395060cf2289393607554a1f
[ "MIT" ]
null
null
null
37.149606
254
0.414441
[ [ [ "<a href=\"https://colab.research.google.com/github/rajeevak40/Course_AWS_Certified_Machine_Learning/blob/master/fashion.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nimport matplotlib.pyplot as plt\n\n", "_____no_output_____" ], [ "fashion = tf.keras.datasets.fashion_mnist\n(train_data, train_lable), (test_data, test_lable)= fashion.load_data()", "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz\n32768/29515 [=================================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz\n26427392/26421880 [==============================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz\n8192/5148 [===============================================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz\n4423680/4422102 [==============================] - 0s 0us/step\n" ], [ "train_data= train_data/255\ntest_data=test_data/255", "_____no_output_____" ], [ "model = tf.keras.Sequential([ \n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, input_shape=(28,28), activation='relu'),\n tf.keras.layers.Dense(64,activation='relu' ),\n tf.keras.layers.Dense(64,activation='relu' ),\n tf.keras.layers.Dense(10, activation='softmax')\n])", "_____no_output_____" ], [ "model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss='sparse_categorical_crossentropy', metrics=['accuracy'])\nhistory=model.fit(train_data, train_lable, epochs=20, verbose=1)", "Epoch 1/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1813 - accuracy: 0.9311\nEpoch 2/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1736 - accuracy: 0.9334\nEpoch 3/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1706 - accuracy: 0.9340\nEpoch 4/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1667 - accuracy: 0.9362\nEpoch 5/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1642 - accuracy: 0.9370\nEpoch 6/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1602 - accuracy: 0.9380\nEpoch 7/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1586 - accuracy: 0.9389\nEpoch 8/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1534 - accuracy: 0.9401\nEpoch 9/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1514 - accuracy: 0.9419\nEpoch 10/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1498 - accuracy: 0.9428\nEpoch 11/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1456 - accuracy: 0.9437\nEpoch 12/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1432 - accuracy: 0.9456\nEpoch 13/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1405 - accuracy: 0.9460\nEpoch 14/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1394 - accuracy: 0.9454\nEpoch 15/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1363 - accuracy: 0.9476\nEpoch 16/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1350 - accuracy: 0.9484\nEpoch 17/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 
0.1323 - accuracy: 0.9480\nEpoch 18/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1309 - accuracy: 0.9497\nEpoch 19/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1290 - accuracy: 0.9511\nEpoch 20/20\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.1229 - accuracy: 0.9528\n" ], [ "model.evaluate(test_data, test_lable)", "313/313 [==============================] - 1s 2ms/step - loss: 0.3598 - accuracy: 0.8886\n" ] ], [ [ "# Using CNN\n\n", "_____no_output_____" ] ], [ [ "(train_data1, train_lable1), (test_data1, test_lable1)= fashion.load_data()", "_____no_output_____" ], [ "train_data1=train_data1.reshape(60000, 28,28,1)\ntest_data1= test_data1.reshape(10000,28,28,1)\n\ntrain_data1= train_data1/255\ntest_data1=test_data1/255", "_____no_output_____" ], [ "model = tf.keras.Sequential([ \n tf.keras.layers.Conv2D(128,(3,3), activation='relu', input_shape=(28,28,1)),\n tf.keras.layers.MaxPool2D(2,2),\n tf.keras.layers.Conv2D(128,(3,3), activation='relu'),\n tf.keras.layers.MaxPool2D(2,2),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dense(64,activation='relu' ),\n tf.keras.layers.Dense(10, activation='softmax')\n])\n", "_____no_output_____" ], [ "model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\nmodel.fit(train_data1, train_lable, epochs=20)", "Epoch 1/20\n1875/1875 [==============================] - 35s 4ms/step - loss: 0.4404 - accuracy: 0.8395\nEpoch 2/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.2853 - accuracy: 0.8952\nEpoch 3/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.2400 - accuracy: 0.9103\nEpoch 4/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.2068 - accuracy: 0.9223\nEpoch 5/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.1792 - accuracy: 0.9330\nEpoch 6/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.1559 - accuracy: 0.9418\nEpoch 7/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.1356 - accuracy: 0.9495\nEpoch 8/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.1191 - accuracy: 0.9553\nEpoch 9/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.1019 - accuracy: 0.9616\nEpoch 10/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.0917 - accuracy: 0.9650\nEpoch 11/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.0820 - accuracy: 0.9691\nEpoch 12/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.0718 - accuracy: 0.9735\nEpoch 13/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.0660 - accuracy: 0.9753\nEpoch 14/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.0596 - accuracy: 0.9776\nEpoch 15/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.0539 - accuracy: 0.9810\nEpoch 16/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.0522 - accuracy: 0.9815\nEpoch 17/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.0445 - accuracy: 0.9838\nEpoch 18/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.0434 - accuracy: 0.9840\nEpoch 19/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.0419 - accuracy: 0.9844\nEpoch 20/20\n1875/1875 [==============================] - 7s 4ms/step - loss: 0.0359 - accuracy: 0.9869\n" ], [ "model.evaluate(test_data1, 
test_lable)", "313/313 [==============================] - 1s 2ms/step - loss: 0.5393 - accuracy: 0.9108\n" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
d01fe8fbc589303ab3dac7ae4d122afec4e330d6
157,848
ipynb
Jupyter Notebook
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
d1d20885652e640a92cd82dfb6627fcdb519c51d
[ "MIT" ]
null
null
null
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
d1d20885652e640a92cd82dfb6627fcdb519c51d
[ "MIT" ]
null
null
null
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
d1d20885652e640a92cd82dfb6627fcdb519c51d
[ "MIT" ]
null
null
null
79.882591
232
0.557118
[ [ [ "# Linear Regression Implementation from Scratch\n:label:`sec_linear_scratch`\n\nNow that you understand the key ideas behind linear regression,\nwe can begin to work through a hands-on implementation in code.\nIn this section, (**we will implement the entire method from scratch,\nincluding the data pipeline, the model,\nthe loss function, and the minibatch stochastic gradient descent optimizer.**)\nWhile modern deep learning frameworks can automate nearly all of this work,\nimplementing things from scratch is the only way\nto make sure that you really know what you are doing.\nMoreover, when it comes time to customize models,\ndefining our own layers or loss functions,\nunderstanding how things work under the hood will prove handy.\nIn this section, we will rely only on tensors and auto differentiation.\nAfterwards, we will introduce a more concise implementation,\ntaking advantage of bells and whistles of deep learning frameworks.\n", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport random\nimport tensorflow as tf\nfrom d2l import tensorflow as d2l", "_____no_output_____" ] ], [ [ "## Generating the Dataset\n\nTo keep things simple, we will [**construct an artificial dataset\naccording to a linear model with additive noise.**]\nOur task will be to recover this model's parameters\nusing the finite set of examples contained in our dataset.\nWe will keep the data low-dimensional so we can visualize it easily.\nIn the following code snippet, we generate a dataset\ncontaining 1000 examples, each consisting of 2 features\nsampled from a standard normal distribution.\nThus our synthetic dataset will be a matrix\n$\\mathbf{X}\\in \\mathbb{R}^{1000 \\times 2}$.\n\n(**The true parameters generating our dataset will be\n$\\mathbf{w} = [2, -3.4]^\\top$ and $b = 4.2$,\nand**) our synthetic labels will be assigned according\nto the following linear model with the noise term $\\epsilon$:\n\n(**$$\\mathbf{y}= \\mathbf{X} \\mathbf{w} + b + \\mathbf\\epsilon.$$**)\n\nYou could think of $\\epsilon$ as capturing potential\nmeasurement errors on the features and labels.\nWe will assume that the standard assumptions hold and thus\nthat $\\epsilon$ obeys a normal distribution with mean of 0.\nTo make our problem easy, we will set its standard deviation to 0.01.\nThe following code generates our synthetic dataset.\n", "_____no_output_____" ] ], [ [ "def synthetic_data(w, b, num_examples): #@save\n \"\"\"Generate y = Xw + b + noise.\"\"\"\n X = tf.zeros((num_examples, w.shape[0]))\n X += tf.random.normal(shape=X.shape)\n y = tf.matmul(X, tf.reshape(w, (-1, 1))) + b\n y += tf.random.normal(shape=y.shape, stddev=0.01)\n y = tf.reshape(y, (-1, 1))\n return X, y", "_____no_output_____" ], [ "true_w = tf.constant([2, -3.4])\ntrue_b = 4.2\nfeatures, labels = synthetic_data(true_w, true_b, 1000)", "_____no_output_____" ] ], [ [ "Note that [**each row in `features` consists of a 2-dimensional data example\nand that each row in `labels` consists of a 1-dimensional label value (a scalar).**]\n", "_____no_output_____" ] ], [ [ "print('features:', features[0],'\\nlabel:', labels[0])", "features: tf.Tensor([ 0.8627048 -0.8168014], shape=(2,), dtype=float32) \nlabel: tf.Tensor([8.699112], shape=(1,), dtype=float32)\n" ] ], [ [ "By generating a scatter plot using the second feature `features[:, 1]` and `labels`,\nwe can clearly observe the linear correlation between the two.\n", "_____no_output_____" ] ], [ [ "d2l.set_figsize()\n# The semicolon is for displaying the plot only\nd2l.plt.scatter(features[:, 
(1)].numpy(), labels.numpy(), 1);", "_____no_output_____" ] ], [ [ "## Reading the Dataset\n\nRecall that training models consists of\nmaking multiple passes over the dataset,\ngrabbing one minibatch of examples at a time,\nand using them to update our model.\nSince this process is so fundamental\nto training machine learning algorithms,\nit is worth defining a utility function\nto shuffle the dataset and access it in minibatches.\n\nIn the following code, we [**define the `data_iter` function**] (~~that~~)\nto demonstrate one possible implementation of this functionality.\nThe function (**takes a batch size, a matrix of features,\nand a vector of labels, yielding minibatches of the size `batch_size`.**)\nEach minibatch consists of a tuple of features and labels.\n", "_____no_output_____" ] ], [ [ "def data_iter(batch_size, features, labels):\n num_examples = len(features)\n indices = list(range(num_examples))\n # The examples are read at random, in no particular order\n random.shuffle(indices)\n for i in range(0, num_examples, batch_size):\n j = tf.constant(indices[i: min(i + batch_size, num_examples)])\n yield tf.gather(features, j), tf.gather(labels, j)", "_____no_output_____" ] ], [ [ "In general, note that we want to use reasonably sized minibatches\nto take advantage of the GPU hardware,\nwhich excels at parallelizing operations.\nBecause each example can be fed through our models in parallel\nand the gradient of the loss function for each example can also be taken in parallel,\nGPUs allow us to process hundreds of examples in scarcely more time\nthan it might take to process just a single example.\n\nTo build some intuition, let us read and print\nthe first small batch of data examples.\nThe shape of the features in each minibatch tells us\nboth the minibatch size and the number of input features.\nLikewise, our minibatch of labels will have a shape given by `batch_size`.\n", "_____no_output_____" ] ], [ [ "batch_size = 10\n\nfor X, y in data_iter(batch_size, features, labels):\n print(X, '\\n', y)\n break", "tf.Tensor(\n[[ 0.34395403 0.250355 ]\n [ 0.8474066 -0.08658892]\n [ 1.332213 -0.05381915]\n [-1.0579451 0.5105379 ]\n [-0.48678052 0.12689345]\n [-0.19708689 -0.7590605 ]\n [-1.4754761 -0.98582214]\n [ 0.35217085 0.43196547]\n [-1.7024363 0.54085165]\n [-0.10568867 -1.4778754 ]], shape=(10, 2), dtype=float32) \n tf.Tensor(\n[[ 4.034952 ]\n [ 6.1658163 ]\n [ 7.0530744 ]\n [ 0.32585293]\n [ 2.8073056 ]\n [ 6.393605 ]\n [ 4.5981565 ]\n [ 3.43894 ]\n [-1.0478138 ]\n [ 9.006084 ]], shape=(10, 1), dtype=float32)\n" ] ], [ [ "As we run the iteration, we obtain distinct minibatches\nsuccessively until the entire dataset has been exhausted (try this).\nWhile the iteration implemented above is good for didactic purposes,\nit is inefficient in ways that might get us in trouble on real problems.\nFor example, it requires that we load all the data in memory\nand that we perform lots of random memory access.\nThe built-in iterators implemented in a deep learning framework\nare considerably more efficient and they can deal\nwith both data stored in files and data fed via data streams.\n\n\n## Initializing Model Parameters\n\n[**Before we can begin optimizing our model's parameters**] by minibatch stochastic gradient descent,\n(**we need to have some parameters in the first place.**)\nIn the following code, we initialize weights by sampling\nrandom numbers from a normal distribution with mean 0\nand a standard deviation of 0.01, and setting the bias to 0.\n", "_____no_output_____" ] ], [ [ "w 
= tf.Variable(tf.random.normal(shape=(2, 1), mean=0, stddev=0.01),\n trainable=True)\nb = tf.Variable(tf.zeros(1), trainable=True)", "_____no_output_____" ] ], [ [ "After initializing our parameters,\nour next task is to update them until\nthey fit our data sufficiently well.\nEach update requires taking the gradient\nof our loss function with respect to the parameters.\nGiven this gradient, we can update each parameter\nin the direction that may reduce the loss.\n\nSince nobody wants to compute gradients explicitly\n(this is tedious and error prone),\nwe use automatic differentiation,\nas introduced in :numref:`sec_autograd`, to compute the gradient.\n\n\n## Defining the Model\n\nNext, we must [**define our model,\nrelating its inputs and parameters to its outputs.**]\nRecall that to calculate the output of the linear model,\nwe simply take the matrix-vector dot product\nof the input features $\\mathbf{X}$ and the model weights $\\mathbf{w}$,\nand add the offset $b$ to each example.\nNote that below $\\mathbf{Xw}$ is a vector and $b$ is a scalar.\nRecall the broadcasting mechanism as described in :numref:`subsec_broadcasting`.\nWhen we add a vector and a scalar,\nthe scalar is added to each component of the vector.\n", "_____no_output_____" ] ], [ [ "def linreg(X, w, b): #@save\n \"\"\"The linear regression model.\"\"\"\n return tf.matmul(X, w) + b", "_____no_output_____" ] ], [ [ "## Defining the Loss Function\n\nSince [**updating our model requires taking\nthe gradient of our loss function,**]\nwe ought to (**define the loss function first.**)\nHere we will use the squared loss function\nas described in :numref:`sec_linear_regression`.\nIn the implementation, we need to transform the true value `y`\ninto the predicted value's shape `y_hat`.\nThe result returned by the following function\nwill also have the same shape as `y_hat`.\n", "_____no_output_____" ] ], [ [ "def squared_loss(y_hat, y): #@save\n \"\"\"Squared loss.\"\"\"\n return (y_hat - tf.reshape(y, y_hat.shape)) ** 2 / 2", "_____no_output_____" ] ], [ [ "## Defining the Optimization Algorithm\n\nAs we discussed in :numref:`sec_linear_regression`,\nlinear regression has a closed-form solution.\nHowever, this is not a book about linear regression:\nit is a book about deep learning.\nSince none of the other models that this book introduces\ncan be solved analytically, we will take this opportunity to introduce your first working example of\nminibatch stochastic gradient descent.\n[~~Despite linear regression has a closed-form solution, other models in this book don't. 
Here we introduce minibatch stochastic gradient descent.~~]\n\nAt each step, using one minibatch randomly drawn from our dataset,\nwe will estimate the gradient of the loss with respect to our parameters.\nNext, we will update our parameters\nin the direction that may reduce the loss.\nThe following code applies the minibatch stochastic gradient descent update,\ngiven a set of parameters, a learning rate, and a batch size.\nThe size of the update step is determined by the learning rate `lr`.\nBecause our loss is calculated as a sum over the minibatch of examples,\nwe normalize our step size by the batch size (`batch_size`),\nso that the magnitude of a typical step size\ndoes not depend heavily on our choice of the batch size.\n", "_____no_output_____" ] ], [ [ "def sgd(params, grads, lr, batch_size): #@save\n \"\"\"Minibatch stochastic gradient descent.\"\"\"\n for param, grad in zip(params, grads):\n param.assign_sub(lr*grad/batch_size)", "_____no_output_____" ] ], [ [ "## Training\n\nNow that we have all of the parts in place,\nwe are ready to [**implement the main training loop.**]\nIt is crucial that you understand this code\nbecause you will see nearly identical training loops\nover and over again throughout your career in deep learning.\n\nIn each iteration, we will grab a minibatch of training examples,\nand pass them through our model to obtain a set of predictions.\nAfter calculating the loss, we initiate the backwards pass through the network,\nstoring the gradients with respect to each parameter.\nFinally, we will call the optimization algorithm `sgd`\nto update the model parameters.\n\nIn summary, we will execute the following loop:\n\n* Initialize parameters $(\\mathbf{w}, b)$\n* Repeat until done\n * Compute gradient $\\mathbf{g} \\leftarrow \\partial_{(\\mathbf{w},b)} \\frac{1}{|\\mathcal{B}|} \\sum_{i \\in \\mathcal{B}} l(\\mathbf{x}^{(i)}, y^{(i)}, \\mathbf{w}, b)$\n * Update parameters $(\\mathbf{w}, b) \\leftarrow (\\mathbf{w}, b) - \\eta \\mathbf{g}$\n\nIn each *epoch*,\nwe will iterate through the entire dataset\n(using the `data_iter` function) once\npassing through every example in the training dataset\n(assuming that the number of examples is divisible by the batch size).\nThe number of epochs `num_epochs` and the learning rate `lr` are both hyperparameters,\nwhich we set here to 3 and 0.03, respectively.\nUnfortunately, setting hyperparameters is tricky\nand requires some adjustment by trial and error.\nWe elide these details for now but revise them\nlater in\n:numref:`chap_optimization`.\n", "_____no_output_____" ] ], [ [ "lr = 0.03\nnum_epochs = 3\nnet = linreg\nloss = squared_loss", "_____no_output_____" ], [ "for epoch in range(num_epochs):\n for X, y in data_iter(batch_size, features, labels):\n with tf.GradientTape() as g:\n l = loss(net(X, w, b), y) # Minibatch loss in `X` and `y`\n # Compute gradient on l with respect to [`w`, `b`]\n dw, db = g.gradient(l, [w, b])\n # Update parameters using their gradient\n sgd([w, b], [dw, db], lr, batch_size)\n train_l = loss(net(features, w, b), labels)\n print(f'epoch {epoch + 1}, loss {float(tf.reduce_mean(train_l)):f}')", "epoch 1, loss 0.029337\n" ] ], [ [ "In this case, because we synthesized the dataset ourselves,\nwe know precisely what the true parameters are.\nThus, we can [**evaluate our success in training\nby comparing the true parameters\nwith those that we learned**] through our training loop.\nIndeed they turn out to be very close to each other.\n", "_____no_output_____" ] ], [ [ "print(f'error in 
estimating w: {true_w - tf.reshape(w, true_w.shape)}')\nprint(f'error in estimating b: {true_b - b}')", "error in estimating w: [-0.00040174 -0.00101519]\nerror in estimating b: [0.00056839]\n" ] ], [ [ "Note that we should not take it for granted\nthat we are able to recover the parameters perfectly.\nHowever, in machine learning, we are typically less concerned\nwith recovering true underlying parameters,\nand more concerned with parameters that lead to highly accurate prediction.\nFortunately, even on difficult optimization problems,\nstochastic gradient descent can often find remarkably good solutions,\nowing partly to the fact that, for deep networks,\nthere exist many configurations of the parameters\nthat lead to highly accurate prediction.\n\n\n## Summary\n\n* We saw how a deep network can be implemented and optimized from scratch, using just tensors and auto differentiation, without any need for defining layers or fancy optimizers.\n* This section only scratches the surface of what is possible. In the following sections, we will describe additional models based on the concepts that we have just introduced and learn how to implement them more concisely.\n\n\n## Exercises\n\n1. What would happen if we were to initialize the weights to zero. Would the algorithm still work?\n1. Assume that you are\n [Georg Simon Ohm](https://en.wikipedia.org/wiki/Georg_Ohm) trying to come up\n with a model between voltage and current. Can you use auto differentiation to learn the parameters of your model?\n1. Can you use [Planck's Law](https://en.wikipedia.org/wiki/Planck%27s_law) to determine the temperature of an object using spectral energy density?\n1. What are the problems you might encounter if you wanted to compute the second derivatives? How would you fix them?\n1. Why is the `reshape` function needed in the `squared_loss` function?\n1. Experiment using different learning rates to find out how fast the loss function value drops.\n1. If the number of examples cannot be divided by the batch size, what happens to the `data_iter` function's behavior?\n", "_____no_output_____" ], [ "[Discussions](https://discuss.d2l.ai/t/201)\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
d0201ef1246972dc1c534e57edac8bb3a3d449b8
8,803
ipynb
Jupyter Notebook
math/Math32_Tensor_Product_Solutions.ipynb
jahadtariq/Quantum-Computing
77df163bfc9ec72035f8b3392d450da59710d4a3
[ "Apache-2.0", "CC-BY-4.0" ]
1
2021-08-15T10:57:16.000Z
2021-08-15T10:57:16.000Z
math/Math32_Tensor_Product_Solutions.ipynb
jahadtariq/Quantum-Computing
77df163bfc9ec72035f8b3392d450da59710d4a3
[ "Apache-2.0", "CC-BY-4.0" ]
null
null
null
math/Math32_Tensor_Product_Solutions.ipynb
jahadtariq/Quantum-Computing
77df163bfc9ec72035f8b3392d450da59710d4a3
[ "Apache-2.0", "CC-BY-4.0" ]
3
2021-08-11T11:12:38.000Z
2021-09-14T09:15:08.000Z
31.779783
310
0.417471
[ [ [ "<a href=\"https://qworld.net\" target=\"_blank\" align=\"left\"><img src=\"../qworld/images/header.jpg\" align=\"left\"></a>\n$ \\newcommand{\\bra}[1]{\\langle #1|} $\n$ \\newcommand{\\ket}[1]{|#1\\rangle} $\n$ \\newcommand{\\braket}[2]{\\langle #1|#2\\rangle} $\n$ \\newcommand{\\dot}[2]{ #1 \\cdot #2} $\n$ \\newcommand{\\biginner}[2]{\\left\\langle #1,#2\\right\\rangle} $\n$ \\newcommand{\\mymatrix}[2]{\\left( \\begin{array}{#1} #2\\end{array} \\right)} $\n$ \\newcommand{\\myvector}[1]{\\mymatrix{c}{#1}} $\n$ \\newcommand{\\myrvector}[1]{\\mymatrix{r}{#1}} $\n$ \\newcommand{\\mypar}[1]{\\left( #1 \\right)} $\n$ \\newcommand{\\mybigpar}[1]{ \\Big( #1 \\Big)} $\n$ \\newcommand{\\sqrttwo}{\\frac{1}{\\sqrt{2}}} $\n$ \\newcommand{\\dsqrttwo}{\\dfrac{1}{\\sqrt{2}}} $\n$ \\newcommand{\\onehalf}{\\frac{1}{2}} $\n$ \\newcommand{\\donehalf}{\\dfrac{1}{2}} $\n$ \\newcommand{\\hadamard}{ \\mymatrix{rr}{ \\sqrttwo & \\sqrttwo \\\\ \\sqrttwo & -\\sqrttwo }} $\n$ \\newcommand{\\vzero}{\\myvector{1\\\\0}} $\n$ \\newcommand{\\vone}{\\myvector{0\\\\1}} $\n$ \\newcommand{\\stateplus}{\\myvector{ \\sqrttwo \\\\ \\sqrttwo } } $\n$ \\newcommand{\\stateminus}{ \\myrvector{ \\sqrttwo \\\\ -\\sqrttwo } } $\n$ \\newcommand{\\myarray}[2]{ \\begin{array}{#1}#2\\end{array}} $\n$ \\newcommand{\\X}{ \\mymatrix{cc}{0 & 1 \\\\ 1 & 0} } $\n$ \\newcommand{\\I}{ \\mymatrix{rr}{1 & 0 \\\\ 0 & 1} } $\n$ \\newcommand{\\Z}{ \\mymatrix{rr}{1 & 0 \\\\ 0 & -1} } $\n$ \\newcommand{\\Htwo}{ \\mymatrix{rrrr}{ \\frac{1}{2} & \\frac{1}{2} & \\frac{1}{2} & \\frac{1}{2} \\\\ \\frac{1}{2} & -\\frac{1}{2} & \\frac{1}{2} & -\\frac{1}{2} \\\\ \\frac{1}{2} & \\frac{1}{2} & -\\frac{1}{2} & -\\frac{1}{2} \\\\ \\frac{1}{2} & -\\frac{1}{2} & -\\frac{1}{2} & \\frac{1}{2} } } $\n$ \\newcommand{\\CNOT}{ \\mymatrix{cccc}{1 & 0 & 0 & 0 \\\\ 0 & 1 & 0 & 0 \\\\ 0 & 0 & 0 & 1 \\\\ 0 & 0 & 1 & 0} } $\n$ \\newcommand{\\norm}[1]{ \\left\\lVert #1 \\right\\rVert } $\n$ \\newcommand{\\pstate}[1]{ \\lceil \\mspace{-1mu} #1 \\mspace{-1.5mu} \\rfloor } $\n$ \\newcommand{\\greenbit}[1] {\\mathbf{{\\color{green}#1}}} $\n$ \\newcommand{\\bluebit}[1] {\\mathbf{{\\color{blue}#1}}} $\n$ \\newcommand{\\redbit}[1] {\\mathbf{{\\color{red}#1}}} $\n$ \\newcommand{\\brownbit}[1] {\\mathbf{{\\color{brown}#1}}} $\n$ \\newcommand{\\blackbit}[1] {\\mathbf{{\\color{black}#1}}} $", "_____no_output_____" ], [ "<font style=\"font-size:28px;\" align=\"left\"><b> <font color=\"blue\"> Solutions for </font> Matrices: Tensor Product</b></font>\n<br>\n_prepared by Abuzer Yakaryilmaz_\n<br><br>", "_____no_output_____" ], [ "<a id=\"task1\"></a>\n<h3> Task 1 </h3>\n\nFind $ u \\otimes v $ and $ v \\otimes u $ for the given vectors $ u = \\myrvector{-2 \\\\ -1 \\\\ 0 \\\\ 1} $ and $ v = \\myrvector{ 1 \\\\ 2 \\\\ 3 } $.", "_____no_output_____" ], [ "<h3>Solution</h3>", "_____no_output_____" ] ], [ [ "u = [-2,-1,0,1]\nv = [1,2,3]\n\nuv = []\nvu = []\n\n\nfor i in range(len(u)): # one element of u is picked\n for j in range(len(v)): # now we iteratively select every element of v\n uv.append(u[i]*v[j]) # this one element of u is iteratively multiplied with every element of v \n \nprint(\"u-tensor-v is\",uv) \n\nfor i in range(len(v)): # one element of v is picked\n for j in range(len(u)): # now we iteratively select every element of u\n vu.append(v[i]*u[j]) # this one element of v is iteratively multiplied with every element of u \n \nprint(\"v-tensor-u is\",vu) ", "_____no_output_____" ] ], [ [ "<a id=\"task2\"></a>\n<h3> Task 2 </h3>\n\nFind $ A \\otimes B $ for the given 
matrices\n$\n A = \\mymatrix{rrr}{-1 & 0 & 1 \\\\ -2 & -1 & 2} ~~\\mbox{and}~~ \n B = \\mymatrix{rr}{0 & 2 \\\\ 3 & -1 \\\\ -1 & 1 }.\n$", "_____no_output_____" ], [ "<h3>Solution</h3>", "_____no_output_____" ] ], [ [ "A = [\n [-1,0,1],\n [-2,-1,2]\n]\n\nB = [\n [0,2],\n [3,-1],\n [-1,1]\n]\n\nprint(\"A =\")\nfor i in range(len(A)):\n print(A[i])\n\nprint() # print a line\nprint(\"B =\")\nfor i in range(len(B)):\n print(B[i])\n\n# let's define A-tensor-B as a (6x6)-dimensional zero matrix\nAB = []\nfor i in range(6):\n AB.append([])\n for j in range(6):\n AB[i].append(0)\n\n \n \n# let's find A-tensor-B\nfor i in range(2):\n for j in range(3):\n # for each A(i,j) we execute the following codes\n a = A[i][j]\n # we access each element of B\n for m in range(3):\n for n in range(2):\n b = B[m][n]\n # now we put (a*b) in the appropriate index of AB\n AB[3*i+m][2*j+n] = a * b\n \n \n\nprint() # print a line\nprint(\"A-tensor-B =\") \nprint() # print a line\nfor i in range(6):\n print(AB[i])", "_____no_output_____" ] ], [ [ "<a id=\"task3\"></a>\n<h3> Task 3 </h3>\n\nFind $ B \\otimes A $ for the given matrices\n$\n A = \\mymatrix{rrr}{-1 & 0 & 1 \\\\ -2 & -1 & 2} ~~\\mbox{and}~~ \n B = \\mymatrix{rr}{0 & 2 \\\\ 3 & -1 \\\\ -1 & 1 }.\n$", "_____no_output_____" ], [ "<h3>Solution</h3>", "_____no_output_____" ] ], [ [ "A = [\n [-1,0,1],\n [-2,-1,2]\n]\n\nB = [\n [0,2],\n [3,-1],\n [-1,1]\n]\n\nprint() # print a line\nprint(\"B =\")\nfor i in range(len(B)):\n print(B[i])\n \nprint(\"A =\")\nfor i in range(len(A)):\n print(A[i])\n\n# let's define B-tensor-A as a (6x6)-dimensional zero matrix\nBA = []\nfor i in range(6):\n BA.append([])\n for j in range(6):\n BA[i].append(0)\n \n# let's find B-tensor-A\nfor i in range(3):\n for j in range(2):\n # for each B(i,j) we execute the following codes\n b = B[i][j]\n # we access each element of A\n for m in range(2):\n for n in range(3):\n a = A[m][n]\n # now we put (a*b) in the appropriate index of AB\n BA[2*i+m][3*j+n] = b * a\n \n \n\nprint() # print a line\nprint(\"B-tensor-A =\") \nprint() # print a line\nfor i in range(6):\n print(BA[i])", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
d02024e3e1087479b65cb447ef2569e3beb5fa81
776,366
ipynb
Jupyter Notebook
train_nuclei.ipynb
xumm94/2018_data_science_bowl
9f7a6b60b7c1e933c30acd8abbdeeb7bd869a3f6
[ "MIT" ]
null
null
null
train_nuclei.ipynb
xumm94/2018_data_science_bowl
9f7a6b60b7c1e933c30acd8abbdeeb7bd869a3f6
[ "MIT" ]
null
null
null
train_nuclei.ipynb
xumm94/2018_data_science_bowl
9f7a6b60b7c1e933c30acd8abbdeeb7bd869a3f6
[ "MIT" ]
null
null
null
552.180654
308,122
0.925894
[ [ [ "# Mask R-CNN - Train on Nuclei Dataset (updated from train_shape.ipynb)\n\n\nThis notebook shows how to train Mask R-CNN on your own dataset. To keep things simple we use a synthetic dataset of shapes (squares, triangles, and circles) which enables fast training. You'd still need a GPU, though, because the network backbone is a Resnet101, which would be too slow to train on a CPU. On a GPU, you can start to get okay-ish results in a few minutes, and good results in less than an hour.\n\nThe code of the *Shapes* dataset is included below. It generates images on the fly, so it doesn't require downloading any data. And it can generate images of any size, so we pick a small image size to train faster. \n\n", "_____no_output_____" ] ], [ [ "import os\nimport sys\nimport random\nimport math\nimport re\nimport time\nimport tqdm\nimport numpy as np\nimport cv2\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nfrom config import Config\nimport utils\nimport model as modellib\nimport visualize\nfrom model import log\n\n%matplotlib inline \n\n# Root directory of the project\nROOT_DIR = os.getcwd()\n\n# Directory to save logs and trained model\n# MODEL_DIR = os.path.join(ROOT_DIR, \"logs\")\nMODEL_DIR = \"/data/lf/Nuclei/logs\"\nDATA_DIR = os.path.join(ROOT_DIR, \"data\")\n# Local path to trained weights file\nCOCO_MODEL_PATH = os.path.join(ROOT_DIR, \"models\", \"mask_rcnn_coco.h5\")\n# Download COCO trained weights from Releases if needed\nif not os.path.exists(COCO_MODEL_PATH):\n utils.download_trained_weights(COCO_MODEL_PATH)", "/home/lf/anaconda3/lib/python3.6/importlib/_bootstrap.py:205: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6\n return f(*args, **kwds)\n/home/lf/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\nUsing TensorFlow backend.\n" ] ], [ [ "## Configurations", "_____no_output_____" ] ], [ [ "class NucleiConfig(Config):\n \"\"\"Configuration for training on the toy shapes dataset.\n Derives from the base Config class and overrides values specific\n to the toy shapes dataset.\n \"\"\"\n # Give the configuration a recognizable name\n NAME = \"nuclei\"\n\n # Train on 1 GPU and 8 images per GPU. We can put multiple images on each\n # GPU because the images are small. Batch size is 8 (GPUs * images/GPU).\n GPU_COUNT = 1\n IMAGES_PER_GPU = 4\n\n # Number of classes (including background)\n NUM_CLASSES = 1 + 1 # background + 3 shapes\n\n # Use small images for faster training. Set the limits of the small side\n # the large side, and that determines the image shape.\n IMAGE_MIN_DIM = 512\n IMAGE_MAX_DIM = 512\n\n # Use smaller anchors because our image and objects are small\n RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128) # anchor side in pixels\n\n # Reduce training ROIs per image because the images are small and have\n # few objects. 
Aim to allow ROI sampling to pick 33% positive ROIs.\n TRAIN_ROIS_PER_IMAGE = 32\n\n # Use a small epoch since the data is simple\n STEPS_PER_EPOCH = 100\n\n # use small validation steps since the epoch is small\n VALIDATION_STEPS = 5\n \nconfig = NucleiConfig()\nconfig.display()\ntype(config.display())", "\nConfigurations:\nBACKBONE_SHAPES [[128 128]\n [ 64 64]\n [ 32 32]\n [ 16 16]\n [ 8 8]]\nBACKBONE_STRIDES [4, 8, 16, 32, 64]\nBATCH_SIZE 4\nBBOX_STD_DEV [0.1 0.1 0.2 0.2]\nDETECTION_MAX_INSTANCES 100\nDETECTION_MIN_CONFIDENCE 0.7\nDETECTION_NMS_THRESHOLD 0.3\nGPU_COUNT 1\nIMAGES_PER_GPU 4\nIMAGE_MAX_DIM 512\nIMAGE_MIN_DIM 512\nIMAGE_PADDING True\nIMAGE_SHAPE [512 512 3]\nLEARNING_MOMENTUM 0.9\nLEARNING_RATE 0.001\nMASK_POOL_SIZE 14\nMASK_SHAPE [28, 28]\nMAX_GT_INSTANCES 100\nMEAN_PIXEL [123.7 116.8 103.9]\nMINI_MASK_SHAPE (56, 56)\nNAME nuclei\nNUM_CLASSES 2\nPOOL_SIZE 7\nPOST_NMS_ROIS_INFERENCE 1000\nPOST_NMS_ROIS_TRAINING 2000\nROI_POSITIVE_RATIO 0.33\nRPN_ANCHOR_RATIOS [0.5, 1, 2]\nRPN_ANCHOR_SCALES (8, 16, 32, 64, 128)\nRPN_ANCHOR_STRIDE 1\nRPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2]\nRPN_NMS_THRESHOLD 0.7\nRPN_TRAIN_ANCHORS_PER_IMAGE 256\nSTEPS_PER_EPOCH 100\nTRAIN_ROIS_PER_IMAGE 32\nUSE_MINI_MASK True\nUSE_RPN_ROIS True\nVALIDATION_STEPS 5\nWEIGHT_DECAY 0.0001\n\n\n\nConfigurations:\nBACKBONE_SHAPES [[128 128]\n [ 64 64]\n [ 32 32]\n [ 16 16]\n [ 8 8]]\nBACKBONE_STRIDES [4, 8, 16, 32, 64]\nBATCH_SIZE 4\nBBOX_STD_DEV [0.1 0.1 0.2 0.2]\nDETECTION_MAX_INSTANCES 100\nDETECTION_MIN_CONFIDENCE 0.7\nDETECTION_NMS_THRESHOLD 0.3\nGPU_COUNT 1\nIMAGES_PER_GPU 4\nIMAGE_MAX_DIM 512\nIMAGE_MIN_DIM 512\nIMAGE_PADDING True\nIMAGE_SHAPE [512 512 3]\nLEARNING_MOMENTUM 0.9\nLEARNING_RATE 0.001\nMASK_POOL_SIZE 14\nMASK_SHAPE [28, 28]\nMAX_GT_INSTANCES 100\nMEAN_PIXEL [123.7 116.8 103.9]\nMINI_MASK_SHAPE (56, 56)\nNAME nuclei\nNUM_CLASSES 2\nPOOL_SIZE 7\nPOST_NMS_ROIS_INFERENCE 1000\nPOST_NMS_ROIS_TRAINING 2000\nROI_POSITIVE_RATIO 0.33\nRPN_ANCHOR_RATIOS [0.5, 1, 2]\nRPN_ANCHOR_SCALES (8, 16, 32, 64, 128)\nRPN_ANCHOR_STRIDE 1\nRPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2]\nRPN_NMS_THRESHOLD 0.7\nRPN_TRAIN_ANCHORS_PER_IMAGE 256\nSTEPS_PER_EPOCH 100\nTRAIN_ROIS_PER_IMAGE 32\nUSE_MINI_MASK True\nUSE_RPN_ROIS True\nVALIDATION_STEPS 5\nWEIGHT_DECAY 0.0001\n\n\n" ] ], [ [ "## Notebook Preferences", "_____no_output_____" ] ], [ [ "def get_ax(rows=1, cols=1, size=8):\n \"\"\"Return a Matplotlib Axes array to be used in\n all visualizations in the notebook. 
Provide a\n central point to control graph sizes.\n \n Change the default size attribute to control the size\n of rendered images\n \"\"\"\n _, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))\n return ax", "_____no_output_____" ] ], [ [ "## Dataset\n\nLoad the nuclei dataset\n\nExtend the Dataset class and add a method to get the nuclei dataset, `load_image_info()`, and override the following methods:\n\n* load_image()\n* load_mask()\n* image_reference()", "_____no_output_____" ] ], [ [ "class NucleiDataset(utils.Dataset):\n\n \"\"\"Load the images and masks from dataset.\"\"\"\n\n def load_image_info(self, set_path, img_set):\n \"\"\"Get the picture names(ids) of the dataset.\"\"\"\n \n # Add classes\n self.add_class(\"nucleis\", 1, \"regular\")\n # TO DO : Three different image types into three classes\n \n # Add images\n # Get the images ids of training/testing set\n# train_ids = next(os.walk(set_path))[1]\n with open(img_set) as f:\n read_data = f.readlines()\n train_ids = [read_data[i][:-1] for i in range(0,len(read_data))]\n # Get the info of the images\n for i, id_ in enumerate(train_ids):\n file_path = os.path.join(set_path, id_)\n img_path = os.path.join(file_path, \"images\")\n masks_path = os.path.join(file_path, \"masks\")\n img_name = id_ + \".png\"\n img = cv2.imread(os.path.join(img_path, img_name))\n width, height, _ = img.shape\n self.add_image(\"nucleis\", image_id=id_, path=file_path,\n img_path=img_path, masks_path=masks_path,\n width=width, height=height,\n nucleis=\"nucleis\") \n\n def load_image(self, image_id):\n \"\"\"Load image from file of the given image ID.\"\"\"\n info = self.image_info[image_id]\n img_path = info[\"img_path\"]\n img_name = info[\"id\"] + \".png\"\n image = cv2.imread(os.path.join(img_path, img_name))\n return image\n\n def image_reference(self, image_id):\n \"\"\"Return the path of the given image ID.\"\"\"\n info = self.image_info[image_id]\n if info[\"source\"] == \"nucleis\":\n return info[\"path\"]\n else:\n super(self.__class__).image_reference(self, image_id)\n\n def load_mask(self, image_id):\n \"\"\"Load the instance masks of the given image ID.\"\"\"\n info = self.image_info[image_id]\n mask_files = next(os.walk(info[\"masks_path\"]))[2]\n masks = np. 
zeros([info['width'], info['height'], len(mask_files)], dtype=np.uint8)\n for i, id_ in enumerate(mask_files):\n single_mask = cv2.imread(os.path.join(info[\"masks_path\"], id_), 0)\n masks[:, :, i:i+1] = single_mask[:, :, np.newaxis]\n class_ids = np.ones(len(mask_files))\n return masks, class_ids.astype(np.int32)\n \n", "_____no_output_____" ], [ "kFOLD_DIR = os.path.join(ROOT_DIR, \"kfold_dataset\")\nwith open(kFOLD_DIR + '/10-fold-val-3.txt') as f:\n read_data = f.readlines()\n \ntrain_ids = [read_data[i][:-1] for i in range(0,len(read_data))]\nprint(train_ids)", "['45c3bdef1819ba7029990e159f61543ed25781d13fb4dc5d4de52e803debd7d3', '4185b9369fc8bdcc7e7c68f2129b9a7442237cd0f836a4b6d13ef64bf0ef572a', '2817299fd3b88670e86a9db5651ba24333c299d1d41e5491aabfcd95aee84174', '7aae06bc4558829473071defec0b7ab3bfa9c5005548a13da95596bb6a66d105', '0bf33d3db4282d918ec3da7112d0bf0427d4eafe74b3ee0bb419770eefe8d7d6', 'd910b2b1be8406caecfe31a503d412ffc4e3d488286242ebc7381836121dd4ef', '7d40ea6ead1bec903f26d9046d291aedcb12a584b4d3b337ea252b34c7d86072', '4d14a3629b6af6de86d850be236b833a7bfcbf6d8665fd73c6dc339e06c14607', '89be66f88612aae541f5843abcd9c015832b5d6c54a28103b3019f7f38df8a6d', 'f9ea1a1159c33f39bbe5f18bb278d961188b40508277eab7c0b4b91219b37b5d', '64eeef16fdc4e26523d27bfa71a1d38d2cb2e4fa116c0d0ea56b1322f806f0b9', '10328b822b836e67b547b4144e0b7eb43747c114ce4cacd8b540648892945b00', '3a3fee427e6ef7dfd0d82681e2bcee2d054f80287aea7dfa3fa4447666f929b9', '68f833de9f8c631cedd7031b8ed9b908c42cbbc1e14254722728a8b7d596fd4c', '619429303c1af7540916509fe7900cf483eba4391b06aac87ff7f66ca1ab6483', '4b274461c6d001a7a9aeaf5952b40ac4934d1be96b9c176edfd628a8f77e6df2', '80632d6be60c8462e50d51bcf5caf15308931603095d6b5e772a115cd0d0470c', '76c44d1addac92a65f1331f2d93f4e3b130bd4e538a6e5239c3ac1f4c403608a', '1e8408fbb1619e7a0bcdd0bcd21fae57e7cb1f297d4c79787a9d0f5695d77073', '57bd029b19c1b382bef9db3ac14f13ea85e36a6053b92e46caedee95c05847ab', '58406ed8ef944831c413c3424dc2b07e59aef13eb1ff16acbb3402b38b5de0bd', '61dc249314d7b965eb4561ec739eab9b0f60af55c97b25ced8cb2a42a0be128e', 'b6d50fa22380ae3a7e8c52c5bc44a254e7b2596fd8927980dbe2c160cb5689b5', 'c00ae67f72816daee468474026e30705003b2d3501f123579a4f0a6366b66aa1', 'b560dba92fbf2af785739efced50d5866c86dc4dada9be3832138bef4c3524d2', '52a6b8ae4c8e0a8a07a31b8e3f401d8811bf1942969c198e51dfcbd98520aa60', '2869fad54664677e81bacbf00c2256e89a7b90b69d9688c9342e2c736ff5421c', '573e1480b500c395f8d3f1800e1998bf553af0d3d43039333d33cf37d08f64e5', '243443ae303cc09cfbea85bfd22b0c4f026342f3dfc3aa1076f27867910d025b', '0c6507d493bf79b2ba248c5cca3d14df8b67328b89efa5f4a32f97a06a88c92c', '12f89395ad5d21491ab9cec137e247652451d283064773507d7dc362243c5b8e', '16c3d5935ba94b720becc24b7a05741c26149e221e3401924080f41e2f891368', 'd1dbc6ee7c44a7027e935d040e496793186b884a1028d0e26284a206c6f5aff0', '6b6d4e6ff52de473a4b6f8bd0f11ae22242d508cc4117ff38ec39cbb88088aaa', 'a7f6194ddbeaefb1da571226a97785d09ccafc5893ce3c77078d2040bccfcb77', 'd2ce593bddf9998ce3b76328c0151d0ba4b644c293aca7f6254e521c448b305f', '6fc83b33896f58a4a067d8fdcf51f15d4ae9be05d8c3815d23336f1f2a8c45a1', '06350c7cc618be442c15706db7a68e91f313758d224de4608f9b960106d4f9ca', '70827e40a7155391984e56703c6df3392fb4a94bbd6c7008da6a6ca3244965d9', 'e23e11414ee645b51081fb202d38b793f0c8ef2940f8228ded384899d21b02c2', '55ff2b0ec48b76e10c7ee18add5794005cd551697f96af865c763d50da78dd9c', '6fb82031f7fc5f4fa6e0bc2ef3421db19036b5c2cdd2725009ab465d66d61d72', 'bde3727f3a9e8b2b58f383ebc762b2157eb50cdbff23e69b025418b43967556b', 
'5bb8508ff8ec8683fc6a8aa6bd470f6feb3af4eccdca07f51a1ebc9dad67cfb8', '29780b28e6a75fac7b96f164a1580666513199794f1b19a5df8587fe0cb59b67', '0d3640c1f1b80f24e94cc9a5f3e1d9e8db7bf6af7d4aba920265f46cadc25e37', 'aa47f0b303b1d525b52452ae3a8553b2d61d719a28aee547e2ef1fc6730a078f', '9620c33d8ef2772dbc5bd152429f507bd7fafb27e12109003292b671e556b089', '8cdbdda8b3a64c97409c0160bcfb06eb8e876cedc3691aa63ca16dbafae6f948', '6aa7dd0c88bec4f96cdd497f9c37779733033d9ec6513307461302d36bd32ac7', '958114e5f37d5e1420b410bd716753b3e874b175f2b6958ebf1ec2bdf776e41f', '1e61ecf354cb93a62a9561db87a53985fb54e001444f98112ed0fc623fad793e', '77ceeb87f560775ac150b8b9b09684ed3e806d0af6f26cce8f10c5fc280f5df2', '1bd0f2b3000b7c7723f25335fabfcdddcdf4595dd7de1b142d52bb7a186885f0', '4ff152d76db095f75c664dd48e41e8c9953fd0e784535883916383165e28a08e', '9f073db4acd7e634fd578af50d4e77218742f63a4d423a99808d6fd7cb0d3cdb', 'e1bcb583985325d0ef5f3ef52957d0371c96d4af767b13e48102bca9d5351a9b', 'd52958107d0b1f0288f50f346a833df3df485b92d5516cfcb536e73ab7adafd0', '3ca8181367fc1258a418f7bf5044533c83e02a59c1a96def043295c429c297a8', '04acab7636c4cf61d288a5962f15fa456b7bde31a021e5deedfbf51288e4001e', '1c681dfa5cf7e413305d2e90ee47553a46e29cce4f6ed034c8297e511714f867', '65c8527c16a016191118e8adc3d307fe3a73d37cbe05597a95aebd75daf8d051', '1f0008060150b5b93084ae2e4dabd160ab80a95ce8071a321b80ec4e33b58aca', '7f55678298adb736987d9fb5d1d2daefb08fe5bf4d81b2380bedf9449f79cc38', 'fc9269fb2e651cd4a32b65ae164f79b0a2ea823e0a83508c85d7985a6bed43cf', '4d40de30a3db3bc4f241cb7f48e8497c11e8f20a99bf55788bdce17242029745']\n" ], [ "# Training dataset\nTRAINSET_DIR = os.path.join(DATA_DIR, \"stage1_train_fixed\")\n# VALSET_DIR = os.path.join(DATA_DIR, \"stage1_val\")\nTESTSET_DIR = os.path.join(DATA_DIR, \"stage1_test\")\nkFOLD_DIR = os.path.join(ROOT_DIR, \"kfold_dataset\")\n\ndataset_train = NucleiDataset()\ndataset_train.load_image_info(TRAINSET_DIR, os.path.join(kFOLD_DIR, \"10-fold-train-10.txt\"))\ndataset_train.prepare()\n\ndataset_val = NucleiDataset()\ndataset_val.load_image_info(TRAINSET_DIR, os.path.join(kFOLD_DIR, \"10-fold-val-10.txt\"))\ndataset_val.prepare()\n\nprint(\"Loading {} training images, {} validation images\"\n .format(dataset_train.num_images, dataset_val.num_images))", "Loading 594 training images, 66 validation images\n" ], [ "# Load and display random samples\nimage_ids = np.random.choice(dataset_train.image_ids, 4)\nprint(dataset_train.num_images)\nfor i, image_id in enumerate(image_ids):\n image = dataset_train.load_image(image_id)\n mask, class_ids = dataset_train.load_mask(image_id)\n visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)", "594\n" ] ], [ [ "## Bounding Boxes\n\nAlthough we don't have the specific box coordinates in the dataset, we can compute the bounding boxes from masks instead. 
This allows us to handle bounding boxes consistently regardless of the source dataset, and it also makes it easier to resize, rotate, or crop images because we simply generate the bounding boxes from the updates masks rather than computing bounding box transformation for each type of image transformation.", "_____no_output_____" ] ], [ [ "# Load random image and mask.\nimage_id = random.choice(dataset_train.image_ids)\nimage = dataset_train.load_image(image_id)\nmask, class_ids = dataset_train.load_mask(image_id)\n# Compute Bounding box\nbbox = utils.extract_bboxes(mask)\n\n# Display image and additional stats\nprint(\"image_id \", image_id, dataset_train.image_reference(image_id))\nlog(\"image\", image)\nlog(\"mask\", mask)\nlog(\"class_ids\", class_ids)\nlog(\"bbox\", bbox)\n# Display image and instances\nvisualize.display_instances(image, bbox, mask, class_ids, dataset_train.class_names)", "image_id 527 /home/lf/Nuclei/data/stage1_train_fixed/3b0709483b1e86449cc355bb797e841117ba178c6ae1ed955384f4da6486aa20\nimage shape: (256, 320, 3) min: 28.00000 max: 214.00000\nmask shape: (256, 320, 17) min: 0.00000 max: 255.00000\nclass_ids shape: (17,) min: 1.00000 max: 1.00000\nbbox shape: (17, 4) min: 0.00000 max: 320.00000\n" ] ], [ [ "## Ceate Model", "_____no_output_____" ] ], [ [ "# Create model in training mode\nmodel = modellib.MaskRCNN(mode=\"training\", config=config,\n model_dir=MODEL_DIR)", "_____no_output_____" ], [ "# Which weights to start with?\ninit_with = \"coco\" # imagenet, coco, or last\n\nif init_with == \"imagenet\":\n model.load_weights(model.get_imagenet_weights(), by_name=True)\nelif init_with == \"coco\":\n # Load weights trained on MS COCO, but skip layers that\n # are different due to the different number of classes\n # See README for instructions to download the COCO weights\n model.load_weights(COCO_MODEL_PATH, by_name=True,\n exclude=[\"mrcnn_class_logits\", \"mrcnn_bbox_fc\", \n \"mrcnn_bbox\", \"mrcnn_mask\"])\nelif init_with == \"last\":\n # Load the last model you trained and continue training\n model.load_weights(model.find_last()[1], by_name=True)", "_____no_output_____" ] ], [ [ "## Training\n\nTrain in two stages:\n1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones that we didn't use pre-trained weights from MS COCO). To train only the head layers, pass `layers='heads'` to the `train()` function.\n\n2. Fine-tune all layers. For this simple example it's not necessary, but we're including it to show the process. Simply pass `layers=\"all` to train all layers.", "_____no_output_____" ] ], [ [ "# Train the head branches\n# Passing layers=\"heads\" freezes all layers except the head\n# layers. You can also pass a regular expression to select\n# which layers to train by name pattern.\nmodel.train(dataset_train, dataset_val, \n learning_rate=config.LEARNING_RATE, \n epochs=1, \n layers='heads')", "\nStarting at epoch 0. 
LR=0.001\n\nCheckpoint Path: /data/lf/Nuclei/logs/nuclei20180306T1050/mask_rcnn_nuclei_{epoch:04d}.h5\nSelecting layers to train\nfpn_c5p5 (Conv2D)\nfpn_c4p4 (Conv2D)\nfpn_c3p3 (Conv2D)\nfpn_c2p2 (Conv2D)\nfpn_p5 (Conv2D)\nfpn_p2 (Conv2D)\nfpn_p3 (Conv2D)\nfpn_p4 (Conv2D)\nIn model: rpn_model\n rpn_conv_shared (Conv2D)\n rpn_class_raw (Conv2D)\n rpn_bbox_pred (Conv2D)\nmrcnn_mask_conv1 (TimeDistributed)\nmrcnn_mask_bn1 (TimeDistributed)\nmrcnn_mask_conv2 (TimeDistributed)\nmrcnn_mask_bn2 (TimeDistributed)\nmrcnn_class_conv1 (TimeDistributed)\nmrcnn_class_bn1 (TimeDistributed)\nmrcnn_mask_conv3 (TimeDistributed)\nmrcnn_mask_bn3 (TimeDistributed)\nmrcnn_class_conv2 (TimeDistributed)\nmrcnn_class_bn2 (TimeDistributed)\nmrcnn_mask_conv4 (TimeDistributed)\nmrcnn_mask_bn4 (TimeDistributed)\nmrcnn_bbox_fc (TimeDistributed)\nmrcnn_mask_deconv (TimeDistributed)\nmrcnn_class_logits (TimeDistributed)\nmrcnn_mask (TimeDistributed)\n" ], [ "# Fine tune all layers\n# Passing layers=\"all\" trains all layers. You can also \n# pass a regular expression to select which layers to\n# train by name pattern.\nmodel.train(dataset_train, dataset_val, \n learning_rate=config.LEARNING_RATE / 10,\n epochs=2, \n layers=\"all\")", "\nStarting at epoch 1. LR=0.0001\n\nCheckpoint Path: /home/liangf/IVision/Mask_RCNN/logs/shapes20180131T1458/mask_rcnn_shapes_{epoch:04d}.h5\nSelecting layers to train\nconv1 (Conv2D)\nbn_conv1 (BatchNorm)\nres2a_branch2a (Conv2D)\nbn2a_branch2a (BatchNorm)\nres2a_branch2b (Conv2D)\nbn2a_branch2b (BatchNorm)\nres2a_branch2c (Conv2D)\nres2a_branch1 (Conv2D)\nbn2a_branch2c (BatchNorm)\nbn2a_branch1 (BatchNorm)\nres2b_branch2a (Conv2D)\nbn2b_branch2a (BatchNorm)\nres2b_branch2b (Conv2D)\nbn2b_branch2b (BatchNorm)\nres2b_branch2c (Conv2D)\nbn2b_branch2c (BatchNorm)\nres2c_branch2a (Conv2D)\nbn2c_branch2a (BatchNorm)\nres2c_branch2b (Conv2D)\nbn2c_branch2b (BatchNorm)\nres2c_branch2c (Conv2D)\nbn2c_branch2c (BatchNorm)\nres3a_branch2a (Conv2D)\nbn3a_branch2a (BatchNorm)\nres3a_branch2b (Conv2D)\nbn3a_branch2b (BatchNorm)\nres3a_branch2c (Conv2D)\nres3a_branch1 (Conv2D)\nbn3a_branch2c (BatchNorm)\nbn3a_branch1 (BatchNorm)\nres3b_branch2a (Conv2D)\nbn3b_branch2a (BatchNorm)\nres3b_branch2b (Conv2D)\nbn3b_branch2b (BatchNorm)\nres3b_branch2c (Conv2D)\nbn3b_branch2c (BatchNorm)\nres3c_branch2a (Conv2D)\nbn3c_branch2a (BatchNorm)\nres3c_branch2b (Conv2D)\nbn3c_branch2b (BatchNorm)\nres3c_branch2c (Conv2D)\nbn3c_branch2c (BatchNorm)\nres3d_branch2a (Conv2D)\nbn3d_branch2a (BatchNorm)\nres3d_branch2b (Conv2D)\nbn3d_branch2b (BatchNorm)\nres3d_branch2c (Conv2D)\nbn3d_branch2c (BatchNorm)\nres4a_branch2a (Conv2D)\nbn4a_branch2a (BatchNorm)\nres4a_branch2b (Conv2D)\nbn4a_branch2b (BatchNorm)\nres4a_branch2c (Conv2D)\nres4a_branch1 (Conv2D)\nbn4a_branch2c (BatchNorm)\nbn4a_branch1 (BatchNorm)\nres4b_branch2a (Conv2D)\nbn4b_branch2a (BatchNorm)\nres4b_branch2b (Conv2D)\nbn4b_branch2b (BatchNorm)\nres4b_branch2c (Conv2D)\nbn4b_branch2c (BatchNorm)\nres4c_branch2a (Conv2D)\nbn4c_branch2a (BatchNorm)\nres4c_branch2b (Conv2D)\nbn4c_branch2b (BatchNorm)\nres4c_branch2c (Conv2D)\nbn4c_branch2c (BatchNorm)\nres4d_branch2a (Conv2D)\nbn4d_branch2a (BatchNorm)\nres4d_branch2b (Conv2D)\nbn4d_branch2b (BatchNorm)\nres4d_branch2c (Conv2D)\nbn4d_branch2c (BatchNorm)\nres4e_branch2a (Conv2D)\nbn4e_branch2a (BatchNorm)\nres4e_branch2b (Conv2D)\nbn4e_branch2b (BatchNorm)\nres4e_branch2c (Conv2D)\nbn4e_branch2c (BatchNorm)\nres4f_branch2a (Conv2D)\nbn4f_branch2a (BatchNorm)\nres4f_branch2b (Conv2D)\nbn4f_branch2b 
(BatchNorm)\nres4f_branch2c (Conv2D)\nbn4f_branch2c (BatchNorm)\nres4g_branch2a (Conv2D)\nbn4g_branch2a (BatchNorm)\nres4g_branch2b (Conv2D)\nbn4g_branch2b (BatchNorm)\nres4g_branch2c (Conv2D)\nbn4g_branch2c (BatchNorm)\nres4h_branch2a (Conv2D)\nbn4h_branch2a (BatchNorm)\nres4h_branch2b (Conv2D)\nbn4h_branch2b (BatchNorm)\nres4h_branch2c (Conv2D)\nbn4h_branch2c (BatchNorm)\nres4i_branch2a (Conv2D)\nbn4i_branch2a (BatchNorm)\nres4i_branch2b (Conv2D)\nbn4i_branch2b (BatchNorm)\nres4i_branch2c (Conv2D)\nbn4i_branch2c (BatchNorm)\nres4j_branch2a (Conv2D)\nbn4j_branch2a (BatchNorm)\nres4j_branch2b (Conv2D)\nbn4j_branch2b (BatchNorm)\nres4j_branch2c (Conv2D)\nbn4j_branch2c (BatchNorm)\nres4k_branch2a (Conv2D)\nbn4k_branch2a (BatchNorm)\nres4k_branch2b (Conv2D)\nbn4k_branch2b (BatchNorm)\nres4k_branch2c (Conv2D)\nbn4k_branch2c (BatchNorm)\nres4l_branch2a (Conv2D)\nbn4l_branch2a (BatchNorm)\nres4l_branch2b (Conv2D)\nbn4l_branch2b (BatchNorm)\nres4l_branch2c (Conv2D)\nbn4l_branch2c (BatchNorm)\nres4m_branch2a (Conv2D)\nbn4m_branch2a (BatchNorm)\nres4m_branch2b (Conv2D)\nbn4m_branch2b (BatchNorm)\nres4m_branch2c (Conv2D)\nbn4m_branch2c (BatchNorm)\nres4n_branch2a (Conv2D)\nbn4n_branch2a (BatchNorm)\nres4n_branch2b (Conv2D)\nbn4n_branch2b (BatchNorm)\nres4n_branch2c (Conv2D)\nbn4n_branch2c (BatchNorm)\nres4o_branch2a (Conv2D)\nbn4o_branch2a (BatchNorm)\nres4o_branch2b (Conv2D)\nbn4o_branch2b (BatchNorm)\nres4o_branch2c (Conv2D)\nbn4o_branch2c (BatchNorm)\nres4p_branch2a (Conv2D)\nbn4p_branch2a (BatchNorm)\nres4p_branch2b (Conv2D)\nbn4p_branch2b (BatchNorm)\nres4p_branch2c (Conv2D)\nbn4p_branch2c (BatchNorm)\nres4q_branch2a (Conv2D)\nbn4q_branch2a (BatchNorm)\nres4q_branch2b (Conv2D)\nbn4q_branch2b (BatchNorm)\nres4q_branch2c (Conv2D)\nbn4q_branch2c (BatchNorm)\nres4r_branch2a (Conv2D)\nbn4r_branch2a (BatchNorm)\nres4r_branch2b (Conv2D)\nbn4r_branch2b (BatchNorm)\nres4r_branch2c (Conv2D)\nbn4r_branch2c (BatchNorm)\nres4s_branch2a (Conv2D)\nbn4s_branch2a (BatchNorm)\nres4s_branch2b (Conv2D)\nbn4s_branch2b (BatchNorm)\nres4s_branch2c (Conv2D)\nbn4s_branch2c (BatchNorm)\nres4t_branch2a (Conv2D)\nbn4t_branch2a (BatchNorm)\nres4t_branch2b (Conv2D)\nbn4t_branch2b (BatchNorm)\nres4t_branch2c (Conv2D)\nbn4t_branch2c (BatchNorm)\nres4u_branch2a (Conv2D)\nbn4u_branch2a (BatchNorm)\nres4u_branch2b (Conv2D)\nbn4u_branch2b (BatchNorm)\nres4u_branch2c (Conv2D)\nbn4u_branch2c (BatchNorm)\nres4v_branch2a (Conv2D)\nbn4v_branch2a (BatchNorm)\nres4v_branch2b (Conv2D)\nbn4v_branch2b (BatchNorm)\nres4v_branch2c (Conv2D)\nbn4v_branch2c (BatchNorm)\nres4w_branch2a (Conv2D)\nbn4w_branch2a (BatchNorm)\nres4w_branch2b (Conv2D)\nbn4w_branch2b (BatchNorm)\nres4w_branch2c (Conv2D)\nbn4w_branch2c (BatchNorm)\nres5a_branch2a (Conv2D)\nbn5a_branch2a (BatchNorm)\nres5a_branch2b (Conv2D)\nbn5a_branch2b (BatchNorm)\nres5a_branch2c (Conv2D)\nres5a_branch1 (Conv2D)\nbn5a_branch2c (BatchNorm)\nbn5a_branch1 (BatchNorm)\nres5b_branch2a (Conv2D)\nbn5b_branch2a (BatchNorm)\nres5b_branch2b (Conv2D)\nbn5b_branch2b (BatchNorm)\nres5b_branch2c (Conv2D)\nbn5b_branch2c (BatchNorm)\nres5c_branch2a (Conv2D)\nbn5c_branch2a (BatchNorm)\nres5c_branch2b (Conv2D)\nbn5c_branch2b (BatchNorm)\nres5c_branch2c (Conv2D)\nbn5c_branch2c (BatchNorm)\nfpn_c5p5 (Conv2D)\nfpn_c4p4 (Conv2D)\nfpn_c3p3 (Conv2D)\nfpn_c2p2 (Conv2D)\nfpn_p5 (Conv2D)\nfpn_p2 (Conv2D)\nfpn_p3 (Conv2D)\nfpn_p4 (Conv2D)\nIn model: rpn_model\n rpn_conv_shared (Conv2D)\n rpn_class_raw (Conv2D)\n rpn_bbox_pred (Conv2D)\nmrcnn_mask_conv1 (TimeDistributed)\nmrcnn_mask_bn1 
(TimeDistributed)\nmrcnn_mask_conv2 (TimeDistributed)\nmrcnn_mask_bn2 (TimeDistributed)\nmrcnn_class_conv1 (TimeDistributed)\nmrcnn_class_bn1 (TimeDistributed)\nmrcnn_mask_conv3 (TimeDistributed)\nmrcnn_mask_bn3 (TimeDistributed)\nmrcnn_class_conv2 (TimeDistributed)\nmrcnn_class_bn2 (TimeDistributed)\nmrcnn_mask_conv4 (TimeDistributed)\nmrcnn_mask_bn4 (TimeDistributed)\nmrcnn_bbox_fc (TimeDistributed)\nmrcnn_mask_deconv (TimeDistributed)\nmrcnn_class_logits (TimeDistributed)\nmrcnn_mask (TimeDistributed)\n" ], [ "import datetime\nprint(now)\nimport time \nrq = \"config-\" + time.strftime('%Y%m%d%H%M', time.localtime(time.time())) +\".log\"\nprint(rq)", "2018-03-06 11:15:02.284211\nconfig-201803061120.log\n" ], [ "# Save weights\n# Typically not needed because callbacks save after every epoch\n# Uncomment to save manually\n# model_path = os.path.join(MODEL_DIR, \"mask_rcnn_shapes.h5\")\n# model.keras_model.save_weights(model_path)", "_____no_output_____" ] ], [ [ "## Detection Example", "_____no_output_____" ] ], [ [ "class InferenceConfig(NucleiConfig):\n GPU_COUNT = 1\n IMAGES_PER_GPU = 1\n DETECTION_NMS_THRESHOLD = 0.3\n DETECTION_MAX_INSTANCES = 300\n\ninference_config = InferenceConfig()\n# Recreate the model in inference mode\nmodel = modellib.MaskRCNN(mode=\"inference\", \n config=inference_config,\n model_dir=MODEL_DIR)\n\n# Get path to saved weights\n# Either set a specific path or find last trained weights\n# model_path = os.path.join(ROOT_DIR, \".h5 file name here\")\nmodel_path = \"/data2/liangfeng/nuclei_models/nuclei20180202T1847/mask_rcnn_nuclei_0080.h5\"\n\n# Load trained weights (fill in path to trained weights here)\nassert model_path != \"\", \"Provide path to trained weights\"\nprint(\"Loading weights from \", model_path)\nmodel.load_weights(model_path, by_name=True)", "Loading weights from /data2/liangfeng/nuclei_models/nuclei20180202T1847/mask_rcnn_nuclei_0080.h5\n" ], [ "# Test on a random image(load_image_gt will resize the image!)\nimage_id = random.choice(dataset_val.image_ids)\noriginal_image, image_meta, gt_class_id, gt_bbox, gt_mask =\\\n modellib.load_image_gt(dataset_val, inference_config, \n image_id, use_mini_mask=False)\n \nprint(\"image_id \", image_id, dataset_val.image_reference(image_id))\nlog(\"original_image\", original_image)\nlog(\"image_meta\", image_meta)\nlog(\"gt_class_id\", gt_class_id)\nlog(\"gt_bbox\", gt_bbox)\nlog(\"gt_mask\", gt_mask)\n\nvisualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id, \n dataset_train.class_names, figsize=(8, 8))", "_____no_output_____" ], [ "results = model.detect([original_image], verbose=1)\n\nr = results[0]\n# print(r)\nvisualize.display_instances(original_image, r['rois'], r['masks'], r['class_ids'], \n dataset_val.class_names, r['scores'], ax=get_ax())", "Processing 1 images\nimage shape: (1024, 1024, 3) min: 0.00000 max: 232.00000\nmolded_images shape: (1, 1024, 1024, 3) min: -123.70000 max: 128.10000\nimage_metas shape: (1, 10) min: 0.00000 max: 1024.00000\n" ] ], [ [ "## Evaluation", "_____no_output_____" ] ], [ [ "# Compute VOC-Style mAP @ IoU=0.5\n# Running on 10 images. 
Increase for better accuracy.\n# image_ids = np.random.choice(dataset_val.image_ids, 10)\nimage_ids = dataset_val.image_ids\nAPs = []\nfor image_id in image_ids:\n # Load image and ground truth data\n image, image_meta, gt_class_id, gt_bbox, gt_mask =\\\n modellib.load_image_gt(dataset_val, inference_config,\n image_id, use_mini_mask=False)\n molded_images = np.expand_dims(modellib.mold_image(image, inference_config), 0)\n # Run object detection\n results = model.detect([image], verbose=0)\n r = results[0]\n # Compute AP\n AP, precisions, recalls, overlaps =\\\n utils.compute_ap(gt_bbox, gt_class_id,\n r[\"rois\"], r[\"class_ids\"], r[\"scores\"])\n APs.append(AP)\n \nprint(\"mAP: \", np.mean(APs))", "mAP: 0.808316577444\n" ] ], [ [ "## Writing the Results", "_____no_output_____" ] ], [ [ "# Get the Test set.\nTESTSET_DIR = os.path.join(DATA_DIR, \"stage1_test\")\ndataset_test = NucleiDataset()\ndataset_test.load_image_info(TESTSET_DIR)\ndataset_test.prepare()\n\nprint(\"Predict {} images\".format(dataset_test.num_images))", "Predict 65 images\n" ], [ "# Load random image and mask(Original Size).\nimage_id = np.random.choice(dataset_test.image_ids)\n\nimage = dataset_test.load_image(image_id)\n\nplt.figure() \nplt.imshow(image)\nplt.title(image_id, fontsize=9)\nplt.axis('off')\n\n# images = dataset_test.load_image(image_ids)\n# mask, class_ids = dataset_test.load_mask(image_id)\n# Compute Bounding box\n# bbox = utils.extract_bboxes(mask)\n\n# Display image and additional stats\n# print(\"image_id \", image_id, dataset_test.image_reference(image_id))\n\n# log(\"image\", image)\n# log(\"mask\", mask)\n# log(\"class_ids\", class_ids)\n# log(\"bbox\", bbox)\n# Display image and instances\n# visualize.display_instances(image, bbox, mask, class_ids, dataset_test.class_names)", "_____no_output_____" ], [ "results = model.detect([image], verbose=1)\nr = results[0]\nmask_exist = np.zeros(r['masks'].shape[:-1], dtype=np.uint8)\nmask_sum = np.zeros(r['masks'].shape[:-1], dtype=np.uint8)\nfor i in range(r['masks'].shape[-1]):\n _mask = r['masks'][:,:,i]\n mask_sum += _mask\n# print(np.multiply(mask_exist, _mask))\n# print(np.where(np.multiply(mask_exist, _mask) == 1))\n index_ = np.where(np.multiply(mask_exist, _mask) == 1)\n _mask[index_] = 0\n mask_exist += _mask\n \n \n \n# masks_sum = np.sum(r['masks'] ,axis=2)\n\n# overlap = np.where(masks_sum > 1)\n# print(overlap)\n# plt.figure() \nplt.subplot(1,2,1)\nplt.imshow(mask_exist)\nplt.subplot(1,2,2)\nplt.imshow(mask_sum)\n# visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'], \n# dataset_test.class_names, r['scores'], ax=get_ax())", "Processing 1 images\nimage shape: (519, 253, 3) min: 9.00000 max: 101.00000\nmolded_images shape: (1, 1024, 1024, 3) min: -123.70000 max: -3.90000\nimage_metas shape: (1, 10) min: 0.00000 max: 1024.00000\n(array([], dtype=int64), array([], dtype=int64))\n(array([211, 211, 211, 211, 211, 211, 211, 212, 212, 212, 212, 212, 212,\n 212, 213, 213, 213, 213, 213, 214, 214, 214, 214, 214, 215, 215,\n 215, 215, 215, 215, 216, 216, 216, 216, 216, 216, 217, 217, 217,\n 217, 217, 217, 218, 218, 218, 218, 218, 218, 218, 218, 219, 219,\n 219, 219, 219, 219, 219, 219, 219, 220, 220, 220, 220, 220, 220,\n 220, 221, 221, 221, 221, 221, 221, 222, 222, 223, 223, 223, 223,\n 223, 224, 224, 224, 224, 224, 225, 225, 225, 225, 225, 226, 226,\n 226, 226, 226, 227, 227, 227, 227, 227, 228, 228, 228, 228, 228,\n 228, 229, 229, 229, 229, 229, 229, 230, 230, 230, 230, 230, 231,\n 231, 231, 231, 232, 232, 232, 232]), 
array([139, 140, 141, 142, 143, 151, 152, 139, 140, 141, 142, 143, 151,\n 152, 139, 140, 141, 142, 143, 139, 140, 141, 142, 143, 138, 139,\n 140, 141, 142, 143, 138, 139, 140, 141, 142, 143, 138, 139, 140,\n 141, 142, 143, 138, 139, 140, 141, 142, 143, 151, 152, 138, 139,\n 140, 141, 142, 143, 150, 151, 152, 139, 140, 141, 142, 143, 151,\n 152, 139, 140, 141, 142, 143, 151, 142, 143, 142, 143, 144, 145,\n 148, 145, 146, 147, 148, 149, 145, 146, 147, 148, 149, 145, 146,\n 147, 148, 149, 145, 146, 147, 148, 149, 145, 146, 147, 148, 149,\n 150, 145, 146, 147, 148, 149, 150, 145, 146, 147, 148, 149, 145,\n 146, 147, 148, 145, 146, 147, 148]))\n" ], [ "a = [[0, 1],[0, 0]]\nnp.any(a)", "_____no_output_____" ], [ "def rle_encoding(x):\n dots = np.where(x.T.flatten() == 1)[0]\n run_lengths = []\n prev = -2\n for b in dots:\n if (b>prev+1): run_lengths.extend((b + 1, 0))\n run_lengths[-1] += 1\n prev = b\n return run_lengths\nimport pandas as pd\n\ntest_ids = []\ntest_rles = []\n\nid_ = dataset_val.image_info[image_id][\"id\"]\nresults = model.detect([image], verbose=1)\nr = results[0]\n\nfor i in range(len(r['scores'])):\n test_ids.append(id_)\n test_rles.append(rle_encoding(r['masks'][:, : , i]))\n\nsub = pd.DataFrame()\nsub['ImageId'] = test_ids\nsub['EncodedPixels'] = pd.Series(test_rles).apply(lambda x: ' '.join(str(y) for y in x))\nmodel_path\ncsvpath = \"{}.csv\".format(model_path)\nprint(csvpath)\nsub.to_csv(csvpath, index=False)\n# plt.imshow('image',r['masks'][0])\n", "Processing 1 images\nimage shape: (256, 256, 3) min: 10.00000 max: 255.00000\nmolded_images shape: (1, 1024, 1024, 3) min: -123.70000 max: 151.10000\nimage_metas shape: (1, 10) min: 0.00000 max: 912.00000\n/data2/liangfeng/nuclei_models/nuclei20180202T1847/mask_rcnn_nuclei_0040.h5.csv\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
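A minimal sketch of the idea described in the Bounding Boxes cell of the notebook above, which derives box coordinates from instance masks via utils.extract_bboxes. This is not the library implementation; the function name bbox_from_mask, the [y1, x1, y2, x2] ordering, and the exclusive upper bounds are assumptions chosen to mirror how the notebook uses the boxes.

import numpy as np

def bbox_from_mask(mask):
    # Illustrative sketch: one [y1, x1, y2, x2] box from a single 2-D mask
    # where nonzero pixels belong to the instance (an assumption, not the
    # original utils.extract_bboxes code).
    rows = np.any(mask, axis=1)          # rows that contain the instance
    cols = np.any(mask, axis=0)          # columns that contain the instance
    if not rows.any():                   # empty mask -> degenerate box
        return np.array([0, 0, 0, 0])
    y1, y2 = np.where(rows)[0][[0, -1]]  # first and last occupied row
    x1, x2 = np.where(cols)[0][[0, -1]]  # first and last occupied column
    return np.array([y1, x1, y2 + 1, x2 + 1])  # +1 makes the upper bounds exclusive

# Usage sketch for an (H, W, N) mask stack like the one loaded in the notebook:
# boxes = np.stack([bbox_from_mask(masks[:, :, i]) for i in range(masks.shape[-1])])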
d0202e7d55aed5ff965378766b7f24c84fd14dac
939,574
ipynb
Jupyter Notebook
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
79793089552e75573cc77c90ccbf2cf04972ab42
[ "MIT" ]
null
null
null
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
79793089552e75573cc77c90ccbf2cf04972ab42
[ "MIT" ]
1
2019-04-03T14:20:06.000Z
2019-04-03T14:20:06.000Z
notebooks/2_socioeconomic_data_validation.ipynb
fernandascovino/pr-educacao
79793089552e75573cc77c90ccbf2cf04972ab42
[ "MIT" ]
1
2019-03-15T12:48:44.000Z
2019-03-15T12:48:44.000Z
297.521849
308,712
0.906993
[ [ [ "<h1>Índice<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Socioeconomic-data-validation\" data-toc-modified-id=\"Socioeconomic-data-validation-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Socioeconomic data validation</a></span><ul class=\"toc-item\"><li><ul class=\"toc-item\"><li><span><a href=\"#Goals\" data-toc-modified-id=\"Goals-1.0.1\"><span class=\"toc-item-num\">1.0.1&nbsp;&nbsp;</span>Goals</a></span></li><li><span><a href=\"#Data-scources\" data-toc-modified-id=\"Data-scources-1.0.2\"><span class=\"toc-item-num\">1.0.2&nbsp;&nbsp;</span>Data scources</a></span></li><li><span><a href=\"#Methodology\" data-toc-modified-id=\"Methodology-1.0.3\"><span class=\"toc-item-num\">1.0.3&nbsp;&nbsp;</span>Methodology</a></span></li><li><span><a href=\"#Results\" data-toc-modified-id=\"Results-1.0.4\"><span class=\"toc-item-num\">1.0.4&nbsp;&nbsp;</span>Results</a></span><ul class=\"toc-item\"><li><span><a href=\"#Outputs\" data-toc-modified-id=\"Outputs-1.0.4.1\"><span class=\"toc-item-num\">1.0.4.1&nbsp;&nbsp;</span>Outputs</a></span></li></ul></li><li><span><a href=\"#Authors\" data-toc-modified-id=\"Authors-1.0.5\"><span class=\"toc-item-num\">1.0.5&nbsp;&nbsp;</span>Authors</a></span></li></ul></li><li><span><a href=\"#Import-data\" data-toc-modified-id=\"Import-data-1.1\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Import data</a></span></li><li><span><a href=\"#INSE-data-analysis\" data-toc-modified-id=\"INSE-data-analysis-1.2\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>INSE data analysis</a></span><ul class=\"toc-item\"><li><span><a href=\"#Filtering-model-(refence)-and-risk-(attention)-schools\" data-toc-modified-id=\"Filtering-model-(refence)-and-risk-(attention)-schools-1.2.1\"><span class=\"toc-item-num\">1.2.1&nbsp;&nbsp;</span>Filtering model (<code>refence</code>) and risk (<code>attention</code>) schools</a></span></li><li><span><a href=\"#Join-INSE-data\" data-toc-modified-id=\"Join-INSE-data-1.2.2\"><span class=\"toc-item-num\">1.2.2&nbsp;&nbsp;</span>Join INSE data</a></span></li><li><span><a href=\"#Comparing-INSE-data-in-categories\" data-toc-modified-id=\"Comparing-INSE-data-in-categories-1.2.3\"><span class=\"toc-item-num\">1.2.3&nbsp;&nbsp;</span>Comparing INSE data in categories</a></span></li></ul></li><li><span><a href=\"#Statistical-INSE-analysis\" data-toc-modified-id=\"Statistical-INSE-analysis-1.3\"><span class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>Statistical INSE analysis</a></span><ul class=\"toc-item\"><li><span><a href=\"#Normality-test\" data-toc-modified-id=\"Normality-test-1.3.1\"><span class=\"toc-item-num\">1.3.1&nbsp;&nbsp;</span>Normality test</a></span><ul class=\"toc-item\"><li><span><a href=\"#D'Agostino-and-Pearson's\" data-toc-modified-id=\"D'Agostino-and-Pearson's-1.3.1.1\"><span class=\"toc-item-num\">1.3.1.1&nbsp;&nbsp;</span>D'Agostino and Pearson's</a></span></li><li><span><a href=\"#Shapiro-Wiki\" data-toc-modified-id=\"Shapiro-Wiki-1.3.1.2\"><span class=\"toc-item-num\">1.3.1.2&nbsp;&nbsp;</span>Shapiro-Wiki</a></span></li></ul></li><li><span><a href=\"#t-test\" data-toc-modified-id=\"t-test-1.3.2\"><span class=\"toc-item-num\">1.3.2&nbsp;&nbsp;</span><em>t</em> test</a></span><ul class=\"toc-item\"><li><span><a href=\"#Model-x-risk-schools\" data-toc-modified-id=\"Model-x-risk-schools-1.3.2.1\"><span class=\"toc-item-num\">1.3.2.1&nbsp;&nbsp;</span>Model x risk schools</a></span></li></ul></li><li><span><a href=\"#Cohen's-D\" 
data-toc-modified-id=\"Cohen's-D-1.3.3\"><span class=\"toc-item-num\">1.3.3&nbsp;&nbsp;</span>Cohen's D</a></span><ul class=\"toc-item\"><li><span><a href=\"#Model-x-risk-schools\" data-toc-modified-id=\"Model-x-risk-schools-1.3.3.1\"><span class=\"toc-item-num\">1.3.3.1&nbsp;&nbsp;</span>Model x risk schools</a></span></li><li><span><a href=\"#Best-evolution-model-x-risk-schools\" data-toc-modified-id=\"Best-evolution-model-x-risk-schools-1.3.3.2\"><span class=\"toc-item-num\">1.3.3.2&nbsp;&nbsp;</span>Best evolution model x risk schools</a></span></li><li><span><a href=\"#Other-model-x-risk-schools\" data-toc-modified-id=\"Other-model-x-risk-schools-1.3.3.3\"><span class=\"toc-item-num\">1.3.3.3&nbsp;&nbsp;</span>Other model x risk schools</a></span></li></ul></li></ul></li></ul></li><li><span><a href=\"#Testes-estatísticos\" data-toc-modified-id=\"Testes-estatísticos-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Testes estatísticos</a></span><ul class=\"toc-item\"><li><span><a href=\"#Cohen's-D\" data-toc-modified-id=\"Cohen's-D-2.1\"><span class=\"toc-item-num\">2.1&nbsp;&nbsp;</span>Cohen's D</a></span></li></ul></li><li><span><a href=\"#Tentando-inferir-causalidade\" data-toc-modified-id=\"Tentando-inferir-causalidade-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Tentando inferir causalidade</a></span><ul class=\"toc-item\"><li><span><a href=\"#Regressões-lineares\" data-toc-modified-id=\"Regressões-lineares-3.1\"><span class=\"toc-item-num\">3.1&nbsp;&nbsp;</span>Regressões lineares</a></span></li><li><span><a href=\"#Testes-pareados\" data-toc-modified-id=\"Testes-pareados-3.2\"><span class=\"toc-item-num\">3.2&nbsp;&nbsp;</span>Testes pareados</a></span></li></ul></li></ul></div>", "_____no_output_____" ], [ "# Socioeconomic data validation\n---\n\nA literatura indica que o fator mais importante para o desempenho das escolas é o nível sócio econômico dos alunos. Estamos pressupondo que escolas próximas possuem alunos de nível sócio econômico próximo, mas isso precisa ser testado. Usei os dados do [INSE](http://portal.inep.gov.br/web/guest/indicadores-educacionais) para medir qual era o nível sócio econômico dos alunos de cada escola em 2015.\n\n### Goals\nExamining the geolocated IDEB data frrom schools and modeling *risk* and *model* schools for the research.\nCombining the school's IDEB (SAEB + approval rate) marks with Rio de Janeiro's municipal shapefile, we hope to discover some local standards in school performance over the years. The time interval we will analyze is from 2011 until today.\n\n### Data scources\n\n- `ideb_merged.csv`: resulted data from the geolocalization, IDEB by years on columns \n- `ideb_merged_kepler.csv`: resulted data from the geolocalization, format for kepler input\n\n### Methodology\n\nThe goal is to determine the \"model\" schools in a certain ratio. We'll define those \"models\" as schools that had a great grown and stands nearby \"high risk\" schools, the ones in the lowest strata. For that, we construct the model below with suggestions by Ragazzo:\n\nWe are interested in the following groups:\n - Group 1: Schools from very low (< 4) to high (> 6)\n - Group 2: Schools from low (4 < x < 5) to high (> 6)\n - Group 3: Schools went to high (> 6) with delta > 2\n \nThe *attention level* (or risk) of a school is defined by which quartile it belongs on IDEB 2017 distribution (most recent), from the lowest quartile (level 4) to the highest (level 1).\n\n### Results\n\n1. 
[Identify the schools with most IDEB variation from 2005 to 2017](#1)\n\n2. [Identify schools that jumped from low / very low IDEB (<5 / <4) and went to high IDEB (> 6), from 2005 to 2017](#2)\n\n2. [Model neighboors: which schools had a large delta and were nearby schools on the highest attention level (4)?](#3)\n\n3. [See if the education census contains information on who was the principal of each school each year.](#4) - actually, we use an indicator of school's \"managment complexity\" with the IDEB data. We didn't find any difference between levels of \"managment complexity\" related to IDEB marks from those schools in each level.\n\n#### Outputs\n\n- `model_neighboors_closest_multiple.csv`: database with the risk schools and closest model schools\n- `top_15_delta.csv`, `bottom_15_delta.csv`: top and bottom schools evolution from 2005 to 2017\n- `kepler_with_filters.csv`: database for plot in kepler with schools categories (from the methology)\n\n\n### Authors\nOriginal code by Guilherme Almeida here, adapted by Fernanda Scovino - 2019.", "_____no_output_____" ] ], [ [ "# Import config\nimport os\nimport sys\nsys.path.insert(0, '../')\nfrom config import RAW_PATH, TREAT_PATH, OUTPUT_PATH\n\n# DATA ANALYSIS & VIZ TOOLS\nfrom copy import deepcopy\n\nimport pandas as pd\nimport numpy as np\npd.options.display.max_columns = 999\n\nimport geopandas as gpd\nfrom shapely.wkt import loads\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n%pylab inline\npylab.rcParams['figure.figsize'] = (12, 15)\n\n# CONFIGS\n%load_ext autoreload\n#%autoreload 2\n\n#import warnings\n#warnings.filterwarnings('ignore')", "Populating the interactive namespace from numpy and matplotlib\n" ], [ "palette = ['#FEC300', '#F1920E', '#E3611C', '#C70039', '#900C3F', '#5A1846', '#3a414c', '#29323C']\nsns.set()", "_____no_output_____" ] ], [ [ "## Import data", "_____no_output_____" ] ], [ [ "inse = pd.read_excel(RAW_PATH / \"INSE_2015.xlsx\")", "_____no_output_____" ], [ "schools_ideb = pd.read_csv(OUTPUT_PATH / \"kepler_with_filters.csv\")", "_____no_output_____" ] ], [ [ "## INSE data analysis", "_____no_output_____" ] ], [ [ "inse.rename(columns={\"CO_ESCOLA\" : \"cod_inep\"}, inplace=True)\ninse.head()", "_____no_output_____" ], [ "schools_ideb['ano'] = pd.to_datetime(schools_ideb['ano'])\nschools_ideb.head()", "_____no_output_____" ] ], [ [ "### Filtering model (`refence`) and risk (`attention`) schools", "_____no_output_____" ] ], [ [ "reference = schools_ideb[(schools_ideb['ano'].dt.year == 2017) &\n ((schools_ideb['pessimo_pra_bom_bin'] == 1) | (schools_ideb['ruim_pra_bom_bin'] == 1))]\nreference.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 161 entries, 4131 to 4729\nData columns (total 14 columns):\nano 161 non-null datetime64[ns]\ncod_inep 161 non-null int64\ngeometry 161 non-null object\nideb 161 non-null float64\nnome_abrev 161 non-null object\nnome_escola 161 non-null object\nlon 161 non-null float64\nlat 161 non-null float64\npessimo_pra_bom_bin 161 non-null int64\nruim_pra_bom_bin 161 non-null int64\nmelhora_com_final_bom_bin 161 non-null int64\ninicial_baixo_bin 161 non-null int64\ninicial_baixissimo_bin 161 non-null int64\nnivel_atencao 161 non-null float64\ndtypes: datetime64[ns](1), float64(4), int64(6), object(3)\nmemory usage: 18.9+ KB\n" ], [ "attention = schools_ideb[(schools_ideb['ano'].dt.year == 2017) & (schools_ideb['nivel_atencao'] == 4)]\nattention.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 176 entries, 4127 to 4728\nData columns (total 14 
columns):\nano 176 non-null datetime64[ns]\ncod_inep 176 non-null int64\ngeometry 176 non-null object\nideb 176 non-null float64\nnome_abrev 176 non-null object\nnome_escola 176 non-null object\nlon 176 non-null float64\nlat 176 non-null float64\npessimo_pra_bom_bin 176 non-null int64\nruim_pra_bom_bin 176 non-null int64\nmelhora_com_final_bom_bin 176 non-null int64\ninicial_baixo_bin 176 non-null int64\ninicial_baixissimo_bin 176 non-null int64\nnivel_atencao 176 non-null float64\ndtypes: datetime64[ns](1), float64(4), int64(6), object(3)\nmemory usage: 20.6+ KB\n" ] ], [ [ "### Join INSE data", "_____no_output_____" ] ], [ [ "inse_cols = [\"cod_inep\", \"NOME_ESCOLA\", \"INSE_VALOR_ABSOLUTO\", \"INSE_CLASSIFICACAO\"]\n\nreference = pd.merge(reference, inse[inse_cols], how = \"left\", on = \"cod_inep\")\nattention = pd.merge(attention, inse[inse_cols], how = \"left\", on = \"cod_inep\")", "_____no_output_____" ], [ "reference['tipo_escola'] = 'Escola referência'\nreference.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 161 entries, 0 to 160\nData columns (total 18 columns):\nano 161 non-null datetime64[ns]\ncod_inep 161 non-null int64\ngeometry 161 non-null object\nideb 161 non-null float64\nnome_abrev 161 non-null object\nnome_escola 161 non-null object\nlon 161 non-null float64\nlat 161 non-null float64\npessimo_pra_bom_bin 161 non-null int64\nruim_pra_bom_bin 161 non-null int64\nmelhora_com_final_bom_bin 161 non-null int64\ninicial_baixo_bin 161 non-null int64\ninicial_baixissimo_bin 161 non-null int64\nnivel_atencao 161 non-null float64\nNOME_ESCOLA 147 non-null object\nINSE_VALOR_ABSOLUTO 147 non-null float64\nINSE_CLASSIFICACAO 147 non-null object\ntipo_escola 161 non-null object\ndtypes: datetime64[ns](1), float64(5), int64(6), object(6)\nmemory usage: 23.9+ KB\n" ], [ "attention['tipo_escola'] = 'Escola de risco'\nattention.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 176 entries, 0 to 175\nData columns (total 18 columns):\nano 176 non-null datetime64[ns]\ncod_inep 176 non-null int64\ngeometry 176 non-null object\nideb 176 non-null float64\nnome_abrev 176 non-null object\nnome_escola 176 non-null object\nlon 176 non-null float64\nlat 176 non-null float64\npessimo_pra_bom_bin 176 non-null int64\nruim_pra_bom_bin 176 non-null int64\nmelhora_com_final_bom_bin 176 non-null int64\ninicial_baixo_bin 176 non-null int64\ninicial_baixissimo_bin 176 non-null int64\nnivel_atencao 176 non-null float64\nNOME_ESCOLA 167 non-null object\nINSE_VALOR_ABSOLUTO 167 non-null float64\nINSE_CLASSIFICACAO 167 non-null object\ntipo_escola 176 non-null object\ndtypes: datetime64[ns](1), float64(5), int64(6), object(6)\nmemory usage: 26.1+ KB\n" ], [ "df_inse = attention.append(reference)", "_____no_output_____" ], [ "df_inse['escola_risco'] = df_inse['nivel_atencao'].apply(lambda x : 1 if x == 4 else 0)\ndf_inse['tipo_especifico'] = df_inse[['pessimo_pra_bom_bin', 'ruim_pra_bom_bin', 'escola_risco']].idxmax(axis=1)\n\ndel df_inse['escola_risco']", "_____no_output_____" ], [ "df_inse.head()", "_____no_output_____" ], [ "df_inse['tipo_especifico'].value_counts()", "_____no_output_____" ], [ "df_inse.to_csv(TREAT_PATH / \"risk_and_model_schools_inse.csv\", index = False)", "_____no_output_____" ] ], [ [ "### Comparing INSE data in categories", "_____no_output_____" ] ], [ [ "sns.distplot(attention[\"INSE_VALOR_ABSOLUTO\"].dropna(), bins='fd', label='Escolas de risco')\nsns.distplot(reference[\"INSE_VALOR_ABSOLUTO\"].dropna(), bins='fd', label='Escolas modelo')\nplt.legend()", 
"_____no_output_____" ], [ "pylab.rcParams['figure.figsize'] = (10, 8)\n\ntitle = \"Comparação do nível sócio-econômico das escolas selecionadas\"\nylabel=\"INSE (2015) médio da escola\"\nxlabel=\"Tipo da escola\"\n\nsns.boxplot(y =\"INSE_VALOR_ABSOLUTO\", x=\"tipo_escola\", data=df_inse).set(ylabel=ylabel, xlabel=xlabel, title=title)", "_____no_output_____" ], [ "pylab.rcParams['figure.figsize'] = (10, 8)\nxlabel = \"Tipo da escola (específico)\"\nsns.boxplot(y = \"INSE_VALOR_ABSOLUTO\", x=\"tipo_especifico\", data=df_inse).set(ylabel=ylabel, xlabel=xlabel, title=title)", "_____no_output_____" ] ], [ [ "## Statistical INSE analysis", "_____no_output_____" ], [ "### Normality test\n\nFrom [this article:](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3693611/)\n\n> According to the available literature, **assessing the normality assumption should be taken into account for using parametric statistical tests.** It seems that the most popular test for normality, that is, the K-S test, should no longer be used owing to its low power. It is preferable that normality be assessed both visually and through normality tests, of which the Shapiro-Wilk test, provided by the SPSS software, is highly recommended. The normality assumption also needs to be considered for validation of data presented in the literature as it shows whether correct statistical tests have been used.", "_____no_output_____" ] ], [ [ "from scipy.stats import normaltest, shapiro, probplot", "_____no_output_____" ] ], [ [ "#### D'Agostino and Pearson's", "_____no_output_____" ] ], [ [ "normaltest(attention[\"INSE_VALOR_ABSOLUTO\"].dropna())", "_____no_output_____" ], [ "normaltest(reference[\"INSE_VALOR_ABSOLUTO\"].dropna())", "_____no_output_____" ] ], [ [ "#### Shapiro-Wiki", "_____no_output_____" ] ], [ [ "shapiro(attention[\"INSE_VALOR_ABSOLUTO\"].dropna())", "_____no_output_____" ], [ "qs = probplot(reference[\"INSE_VALOR_ABSOLUTO\"].dropna(), plot=plt)", "_____no_output_____" ], [ "shapiro(reference[\"INSE_VALOR_ABSOLUTO\"].dropna())", "_____no_output_____" ], [ "ws = probplot(attention[\"INSE_VALOR_ABSOLUTO\"].dropna(), plot=plt)", "_____no_output_____" ] ], [ [ "### *t* test\n\nAbout parametric tests: [here](https://www.healthknowledge.org.uk/public-health-textbook/research-methods/1b-statistical-methods/parametric-nonparametric-tests)\n\nWe can test the hypothesis of INSE be related to IDEB scores from the risk ($\\mu_r$) and model schools ($\\mu_m$) as it follows:\n\n$H_0 = \\mu_r = \\mu_m$\n\n$H_a = \\mu_r != \\mu_m$\n\nFor the *t* test, we need to ensure that:\n\n1. the variances arer equal (1.94 close ennough to 2.05)\n2. the samples have the same size (?)\n3. ", "_____no_output_____" ] ], [ [ "from scipy.stats import ttest_ind as ttest, normaltest, kstest", "_____no_output_____" ], [ "attention[\"INSE_VALOR_ABSOLUTO\"].dropna().describe()", "_____no_output_____" ], [ "reference[\"INSE_VALOR_ABSOLUTO\"].dropna().describe()", "_____no_output_____" ] ], [ [ "#### Model x risk schools", "_____no_output_____" ] ], [ [ "ttest(attention[\"INSE_VALOR_ABSOLUTO\"], reference[\"INSE_VALOR_ABSOLUTO\"], nan_policy=\"omit\", equal_var=True)", "_____no_output_____" ], [ "ttest(attention[\"INSE_VALOR_ABSOLUTO\"], reference[\"INSE_VALOR_ABSOLUTO\"], nan_policy=\"omit\", equal_var=False)", "_____no_output_____" ] ], [ [ "### Cohen's D\n\nMinha métrica preferida de tamanho de efeito é o Cohen's D, mas aparentemente não tem nenhuma implementação canônica dele. 
Vou usar a que eu encontrei [nesse site](https://machinelearningmastery.com/effect-size-measures-in-python/).", "_____no_output_____" ] ], [ [ "from numpy.random import randn\nfrom numpy.random import seed\nfrom numpy import mean\nfrom numpy import var\nfrom math import sqrt\n\n# == Code made by Guilherme Almeida, 2019 ==\n# function to calculate Cohen's d for independent samples\ndef cohend(d1, d2):\n \n # calculate the size of samples\n n1, n2 = len(d1), len(d2)\n \n # calculate the variance of the samples\n s1, s2 = var(d1, ddof=1), var(d2, ddof=1)\n \n # calculate the pooled standard deviation\n s = sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))\n \n # calculate the means of the samples\n u1, u2 = mean(d1), mean(d2)\n \n # calculate the effect size\n result = abs(u1 - u2) / s\n return result", "_____no_output_____" ] ], [ [ "#### Model x risk schools", "_____no_output_____" ] ], [ [ "ttest(attention[\"INSE_VALOR_ABSOLUTO\"], reference[\"INSE_VALOR_ABSOLUTO\"], nan_policy=\"omit\")", "_____no_output_____" ], [ "cohend(reference[\"INSE_VALOR_ABSOLUTO\"], attention[\"INSE_VALOR_ABSOLUTO\"])", "_____no_output_____" ] ], [ [ "#### Best evolution model x risk schools", "_____no_output_____" ] ], [ [ "best_evolution = df_inse[df_inse['tipo_especifico'] == \"pessimo_pra_bom_bin\"]\nttest(attention[\"INSE_VALOR_ABSOLUTO\"], best_evolution[\"INSE_VALOR_ABSOLUTO\"], nan_policy=\"omit\")", "_____no_output_____" ], [ "cohend(attention[\"INSE_VALOR_ABSOLUTO\"], best_evolution[\"INSE_VALOR_ABSOLUTO\"])", "_____no_output_____" ] ], [ [ "#### Other model x risk schools", "_____no_output_____" ] ], [ [ "medium_evolution = df_inse[df_inse['tipo_especifico'] == \"ruim_pra_bom_bin\"]\nttest(attention[\"INSE_VALOR_ABSOLUTO\"], medium_evolution[\"INSE_VALOR_ABSOLUTO\"], nan_policy=\"omit\")", "_____no_output_____" ], [ "cohend(attention[\"INSE_VALOR_ABSOLUTO\"], medium_evolution[\"INSE_VALOR_ABSOLUTO\"])", "_____no_output_____" ] ], [ [ "ruim_pra_bom[\"tipo_especifico\"] = \"Ruim para bom\"\npessimo_pra_bom[\"tipo_especifico\"] = \"Muito ruim para bom\"\nrisco[\"tipo_especifico\"] = \"Desempenho abaixo\\ndo esperado\"", "_____no_output_____" ] ], [ [ "referencias.head()", "_____no_output_____" ], [ "referencias = pd.merge(referencias, inse[[\"cod_inep\", \"NOME_ESCOLA\", \"INSE_VALOR_ABSOLUTO\", \"INSE_CLASSIFICACAO\"]], how = \"left\", on = \"cod_inep\")\nrisco = pd.merge(risco, inse[[\"cod_inep\", \"NOME_ESCOLA\", \"INSE_VALOR_ABSOLUTO\", \"INSE_CLASSIFICACAO\"]], how=\"left\", on=\"cod_inep\")", "_____no_output_____" ], [ "referencias.INSE_VALOR_ABSOLUTO.describe()", "_____no_output_____" ], [ "risco.INSE_VALOR_ABSOLUTO.describe()", "_____no_output_____" ], [ "risco[\"tipo\"] = \"Escolas com desempenho abaixo do esperado\"\nreferencias[\"tipo\"] = \"Escolas-referência\"", "_____no_output_____" ], [ "df = risco.append(referencias)", "_____no_output_____" ], [ "df.to_csv(\"risco_referencia_inse.csv\", index = False)", "_____no_output_____" ], [ "df = pd.read_csv(\"risco_referencia_inse.csv\")\n\nsen.sen_boxplot(x = \"tipo\", y = \"INSE_VALOR_ABSOLUTO\", y_label = \"INSE (2015) médio da escola\", x_label = \" \",\n plot_title = \"Comparação do nível sócio-econômico das escolas selecionadas\",\n palette = {\"Escolas com desempenho abaixo do esperado\" : \"indianred\",\n \"Escolas-referência\" : \"skyblue\"},\n data = df, output_path = \"inse_op1.png\")", "_____no_output_____" ], [ "df = pd.read_csv(\"risco_referencia_inse.csv\")\n\nsen.sen_boxplot(x = \"tipo_especifico\", y = \"INSE_VALOR_ABSOLUTO\", 
y_label = \"INSE (2015) médio da escola\", x_label = \" \",\n plot_title = \"Comparação do nível sócio-econômico das escolas selecionadas\",\n palette = {\"Desempenho abaixo\\ndo esperado\" : \"indianred\",\n \"Ruim para bom\" : \"skyblue\",\n \"Muito ruim para bom\" : \"lightblue\"},\n data = df, output_path = \"inse_op2.png\")", "_____no_output_____" ] ], [ [ "# Testes estatísticos", "_____no_output_____" ], [ "## Cohen's D\n\nMinha métrica preferida de tamanho de efeito é o Cohen's D, mas aparentemente não tem nenhuma implementação canônica dele. Vou usar a que eu encontrei [nesse site](https://machinelearningmastery.com/effect-size-measures-in-python/).", "_____no_output_____" ] ], [ [ "from numpy.random import randn\nfrom numpy.random import seed\nfrom numpy import mean\nfrom numpy import var\nfrom math import sqrt\n\n# function to calculate Cohen's d for independent samples\ndef cohend(d1, d2):\n\t# calculate the size of samples\n\tn1, n2 = len(d1), len(d2)\n\t# calculate the variance of the samples\n\ts1, s2 = var(d1, ddof=1), var(d2, ddof=1)\n\t# calculate the pooled standard deviation\n\ts = sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))\n\t# calculate the means of the samples\n\tu1, u2 = mean(d1), mean(d2)\n\t# calculate the effect size\n\treturn (u1 - u2) / s", "_____no_output_____" ] ], [ [ "Todas as escolas referência vs. escolas risco", "_____no_output_____" ] ], [ [ "ttest(risco[\"INSE_VALOR_ABSOLUTO\"], referencias[\"INSE_VALOR_ABSOLUTO\"], nan_policy=\"omit\")", "_____no_output_____" ], [ "cohend(referencias[\"INSE_VALOR_ABSOLUTO\"], risco[\"INSE_VALOR_ABSOLUTO\"])", "_____no_output_____" ] ], [ [ "Só as escolas muito ruim pra bom vs. escolas risco", "_____no_output_____" ] ], [ [ "ttest(risco[\"INSE_VALOR_ABSOLUTO\"], referencias.query(\"tipo_especifico == 'Muito ruim para bom'\")[\"INSE_VALOR_ABSOLUTO\"], nan_policy=\"omit\")", "_____no_output_____" ], [ "cohend(referencias.query(\"tipo_especifico == 'Muito ruim para bom'\")[\"INSE_VALOR_ABSOLUTO\"], risco[\"INSE_VALOR_ABSOLUTO\"])", "_____no_output_____" ] ], [ [ "# Tentando inferir causalidade\n\nSabemos que existe uma diferença significativa entre os níveis sócio econômicos dos 2 grupos. Mas até que ponto essa diferença no INSE é capaz de explicar a diferença no IDEB? Será que resta algum efeito que pode ser atribuído às práticas de gestão? Esses testes buscam encontrar uma resposta para essa pergunta.", "_____no_output_____" ], [ "## Regressões lineares", "_____no_output_____" ] ], [ [ "#pega a nota do IDEB pra servir de DV\nideb = pd.read_csv(\"./pr-educacao/data/output/ideb_merged_kepler.csv\")\nideb[\"ano_true\"] = ideb[\"ano\"].apply(lambda x: int(x[0:4]))\nideb = ideb.query(\"ano_true == 2017\").copy()\nnota_ideb = ideb[[\"cod_inep\", \"ideb\"]]", "_____no_output_____" ], [ "df = pd.merge(df, nota_ideb, how = \"left\", on = \"cod_inep\")", "_____no_output_____" ], [ "df.dropna(subset=[\"INSE_VALOR_ABSOLUTO\"], inplace = True)", "_____no_output_____" ], [ "df[\"tipo_bin\"] = np.where(df[\"tipo\"] == \"Escolas-referência\", 1, 0)", "_____no_output_____" ], [ "from statsmodels.regression.linear_model import OLS as ols_py\nfrom statsmodels.tools.tools import add_constant\n\nivs_multi = add_constant(df[[\"tipo_bin\", \"INSE_VALOR_ABSOLUTO\"]])\n\nmodelo_multi = ols_py(df[[\"ideb\"]], ivs_multi).fit()\n\nprint(modelo_multi.summary())", " OLS Regression Results \n==============================================================================\nDep. Variable: ideb R-squared: 0.843\nModel: OLS Adj. 
R-squared: 0.841\nMethod: Least Squares F-statistic: 391.6\nDate: qua, 22 mai 2019 Prob (F-statistic): 2.13e-59\nTime: 12:22:10 Log-Likelihood: -23.834\nNo. Observations: 149 AIC: 53.67\nDf Residuals: 146 BIC: 62.68\nDf Model: 2 \nCovariance Type: nonrobust \n=======================================================================================\n coef std err t P>|t| [0.025 0.975]\n---------------------------------------------------------------------------------------\nconst 4.1078 0.652 6.297 0.000 2.819 5.397\ntipo_bin 1.3748 0.056 24.678 0.000 1.265 1.485\nINSE_VALOR_ABSOLUTO 0.0169 0.013 1.293 0.198 -0.009 0.043\n==============================================================================\nOmnibus: 7.292 Durbin-Watson: 1.867\nProb(Omnibus): 0.026 Jarque-Bera (JB): 11.543\nSkew: -0.180 Prob(JB): 0.00312\nKurtosis: 4.315 Cond. No. 1.40e+03\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n[2] The condition number is large, 1.4e+03. This might indicate that there are\nstrong multicollinearity or other numerical problems.\n" ] ], [ [ "O problema de fazer a regressão da maneira como eu coloquei acima é que tipo_bin foi criada parcialmente em função do IDEB (ver histogramas abaixo), então não é uma variável verdadeiramente independente. Talvez uma estratégia seja comparar modelos simples só com INSE e só com tipo_bin.", "_____no_output_____" ] ], [ [ "df.ideb.hist()", "_____no_output_____" ], [ "df.query(\"tipo_bin == 0\").ideb.hist()", "_____no_output_____" ], [ "df.query(\"tipo_bin == 1\").ideb.hist()", "_____no_output_____" ], [ "#correlação simples\nfrom scipy.stats import pearsonr\n\npearsonr(df[[\"ideb\"]], df[[\"INSE_VALOR_ABSOLUTO\"]])", "_____no_output_____" ], [ "iv_inse = add_constant(df[[\"INSE_VALOR_ABSOLUTO\"]])\niv_ideb = add_constant(df[[\"tipo_bin\"]])\n\nmodelo_inse = ols_py(df[[\"ideb\"]], iv_inse).fit()\nmodelo_tipo = ols_py(df[[\"ideb\"]], iv_ideb).fit()\n\nprint(modelo_inse.summary())\nprint(\"-----------------------------------------------------------\")\nprint(modelo_tipo.summary())", " OLS Regression Results \n==============================================================================\nDep. Variable: ideb R-squared: 0.187\nModel: OLS Adj. R-squared: 0.182\nMethod: Least Squares F-statistic: 33.90\nDate: qua, 22 mai 2019 Prob (F-statistic): 3.51e-08\nTime: 12:22:15 Log-Likelihood: -146.25\nNo. Observations: 149 AIC: 296.5\nDf Residuals: 147 BIC: 302.5\nDf Model: 1 \nCovariance Type: nonrobust \n=======================================================================================\n coef std err t P>|t| [0.025 0.975]\n---------------------------------------------------------------------------------------\nconst -2.4509 1.350 -1.815 0.072 -5.119 0.217\nINSE_VALOR_ABSOLUTO 0.1561 0.027 5.822 0.000 0.103 0.209\n==============================================================================\nOmnibus: 3.939 Durbin-Watson: 0.621\nProb(Omnibus): 0.140 Jarque-Bera (JB): 3.892\nSkew: 0.353 Prob(JB): 0.143\nKurtosis: 2.642 Cond. No. 1.28e+03\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n[2] The condition number is large, 1.28e+03. 
This might indicate that there are\nstrong multicollinearity or other numerical problems.\n-----------------------------------------------------------\n OLS Regression Results \n==============================================================================\nDep. Variable: ideb R-squared: 0.841\nModel: OLS Adj. R-squared: 0.840\nMethod: Least Squares F-statistic: 777.9\nDate: qua, 22 mai 2019 Prob (F-statistic): 1.40e-60\nTime: 12:22:15 Log-Likelihood: -24.683\nNo. Observations: 149 AIC: 53.37\nDf Residuals: 147 BIC: 59.37\nDf Model: 1 \nCovariance Type: nonrobust \n==============================================================================\n coef std err t P>|t| [0.025 0.975]\n------------------------------------------------------------------------------\nconst 4.9505 0.029 173.049 0.000 4.894 5.007\ntipo_bin 1.4058 0.050 27.891 0.000 1.306 1.505\n==============================================================================\nOmnibus: 6.509 Durbin-Watson: 1.870\nProb(Omnibus): 0.039 Jarque-Bera (JB): 9.934\nSkew: -0.147 Prob(JB): 0.00696\nKurtosis: 4.230 Cond. No. 2.42\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n" ] ], [ [ "## Testes pareados\n\nNossa unidade de observação, na verdade, não deveria ser uma escola, mas sim um par de escolas. Abaixo, tento fazer as análises levando em consideração o delta de INSE e o delta de IDEB para cada par de escolas. Isso é importante: sabemos que o INSE faz a diferença no IDEB geral, mas a pergunta é se ele consegue explicar as diferenças na performance dentro de cada par.", "_____no_output_____" ] ], [ [ "pairs = pd.read_csv(\"sponsors_mais_proximos.csv\")", "_____no_output_____" ], [ "pairs.head()", "_____no_output_____" ], [ "pairs.shape", "_____no_output_____" ], [ "inse_risco = inse[[\"cod_inep\", \"INSE_VALOR_ABSOLUTO\"]]\ninse_risco.columns = [\"cod_inep_risco\",\"inse_risco\"]\n\ninse_ref = inse[[\"cod_inep\", \"INSE_VALOR_ABSOLUTO\"]]\ninse_ref.columns = [\"cod_inep_referencia\",\"inse_referencia\"]", "_____no_output_____" ], [ "pairs = pd.merge(pairs, inse_risco, how = \"left\", on = \"cod_inep_risco\")\npairs = pd.merge(pairs, inse_ref, how = \"left\", on = \"cod_inep_referencia\")", "_____no_output_____" ], [ "#calcula os deltas\npairs[\"delta_inse\"] = pairs[\"inse_referencia\"] - pairs[\"inse_risco\"]\npairs[\"delta_ideb\"] = pairs[\"ideb_referencia\"] - pairs[\"ideb_risco\"]", "_____no_output_____" ], [ "pairs[\"delta_inse\"].describe()", "_____no_output_____" ], [ "pairs[\"delta_inse\"].hist()", "_____no_output_____" ], [ "pairs[\"delta_ideb\"].describe()", "_____no_output_____" ], [ "pairs[\"delta_ideb\"].hist()", "_____no_output_____" ], [ "pairs[pairs[\"delta_inse\"].isnull()]", "_____no_output_____" ], [ "clean_pairs = pairs.dropna(subset = [\"delta_inse\"])", "_____no_output_____" ], [ "import seaborn as sns\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize = sen.aspect_ratio_locker([16, 9], 0.6))\n\ninse_plot = sns.regplot(\"delta_inse\", \"delta_ideb\", data = clean_pairs)\n\nplt.title(\"Correlação entre as diferenças do IDEB (2017) e do INSE (2015)\\npara cada par de escolas mais próximas\")\nplt.xlabel(\"$INSE_{referência} - INSE_{desempenho\\,abaixo\\,do\\,esperado}$\", fontsize = 12)\nplt.ylabel(\"$IDEB_{referência} - IDEB_{desempenh\\,abaixo\\,do\\,esperado}$\", fontsize = 12)\n\ninse_plot.get_figure().savefig(\"delta_inse.png\", dpi = 600)", "_____no_output_____" ], [ 
"pearsonr(clean_pairs[[\"delta_ideb\"]], clean_pairs[[\"delta_inse\"]])", "_____no_output_____" ], [ "X = add_constant(clean_pairs[[\"delta_inse\"]]) \n\nmodelo_pairs = ols_py(clean_pairs[[\"delta_ideb\"]], X).fit()\n\nprint(modelo_pairs.summary())", " OLS Regression Results \n==============================================================================\nDep. Variable: delta_ideb R-squared: 0.000\nModel: OLS Adj. R-squared: -0.010\nMethod: Least Squares F-statistic: 0.0004740\nDate: qua, 22 mai 2019 Prob (F-statistic): 0.983\nTime: 11:12:12 Log-Likelihood: -47.659\nNo. Observations: 100 AIC: 99.32\nDf Residuals: 98 BIC: 104.5\nDf Model: 1 \nCovariance Type: nonrobust \n==============================================================================\n coef std err t P>|t| [0.025 0.975]\n------------------------------------------------------------------------------\nconst 1.4143 0.051 27.838 0.000 1.313 1.515\ndelta_inse 0.0004 0.017 0.022 0.983 -0.034 0.035\n==============================================================================\nOmnibus: 8.509 Durbin-Watson: 1.977\nProb(Omnibus): 0.014 Jarque-Bera (JB): 8.171\nSkew: 0.654 Prob(JB): 0.0168\nKurtosis: 3.498 Cond. No. 3.97\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n" ] ], [ [ "Testando a assumption de que distância física se correlaciona com distância de INSE", "_____no_output_____" ] ], [ [ "pairs.head()", "_____no_output_____" ], [ "sns.regplot(\"distancia\", \"delta_inse\", data = clean_pairs.query(\"distancia < 4000\"))", "_____no_output_____" ], [ "multi_iv = add_constant(clean_pairs[[\"distancia\", \"delta_inse\"]])\n\nmodelo_ze = ols_py(clean_pairs[[\"delta_ideb\"]], multi_iv).fit()\n\nprint(modelo_ze.summary())", " OLS Regression Results \n==============================================================================\nDep. Variable: delta_ideb R-squared: 0.000\nModel: OLS Adj. R-squared: -0.021\nMethod: Least Squares F-statistic: 0.004600\nDate: qua, 22 mai 2019 Prob (F-statistic): 0.995\nTime: 11:40:22 Log-Likelihood: -47.654\nNo. Observations: 100 AIC: 101.3\nDf Residuals: 97 BIC: 109.1\nDf Model: 2 \nCovariance Type: nonrobust \n==============================================================================\n coef std err t P>|t| [0.025 0.975]\n------------------------------------------------------------------------------\nconst 1.4200 0.080 17.851 0.000 1.262 1.578\ndistancia -3.958e-06 4.24e-05 -0.093 0.926 -8.8e-05 8.01e-05\ndelta_inse 0.0006 0.018 0.033 0.974 -0.034 0.035\n==============================================================================\nOmnibus: 8.544 Durbin-Watson: 1.973\nProb(Omnibus): 0.014 Jarque-Bera (JB): 8.212\nSkew: 0.656 Prob(JB): 0.0165\nKurtosis: 3.500 Cond. No. 3.63e+03\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n[2] The condition number is large, 3.63e+03. This might indicate that there are\nstrong multicollinearity or other numerical problems.\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "raw", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "raw" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
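A minimal sketch related to the t-test cell of the notebook above, whose list of assumptions is left unfinished. It shows one way to check the equal-variance assumption before choosing between Student's and Welch's t-test; the Levene step is an added suggestion, not part of the original analysis, while attention, reference and INSE_VALOR_ABSOLUTO are the DataFrames and column already built in that notebook.

from scipy.stats import levene, ttest_ind

# Drop missing INSE values first, matching the notebook's nan_policy="omit".
risk_inse = attention["INSE_VALOR_ABSOLUTO"].dropna()
model_inse = reference["INSE_VALOR_ABSOLUTO"].dropna()

# Levene's test: H0 = the two groups have equal variances.
_, p_equal_var = levene(risk_inse, model_inse)

# Use Student's t-test only if equal variances are plausible; otherwise Welch's.
t_stat, p_value = ttest_ind(risk_inse, model_inse, equal_var=(p_equal_var > 0.05))
print("Levene p = {:.3f}, t = {:.2f}, p = {:.4f}".format(p_equal_var, t_stat, p_value))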
d02030905b0a5851e858754be5eae74b2d72687a
46,320
ipynb
Jupyter Notebook
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
7a72ebea060ce0a0db9a00994e4725ec5d84c10a
[ "MIT" ]
null
null
null
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
7a72ebea060ce0a0db9a00994e4725ec5d84c10a
[ "MIT" ]
null
null
null
ch08_Reinforcement-learning/ch16-reinforcement-learning.ipynb
pythonProjectLearn/TensorflowLearning
7a72ebea060ce0a0db9a00994e4725ec5d84c10a
[ "MIT" ]
null
null
null
40.882613
1,838
0.570812
[ [ [ "### Intro & Resources\n* [Sutton/Barto ebook](https://goo.gl/7utZaz); [Silver online course](https://goo.gl/AWcMFW)", "_____no_output_____" ], [ "### Learning to Optimize Rewards\n* Definitions: software *agents* make *observations* & take *actions* within an *environment*. In return they can receive *rewards* (positive or negative).", "_____no_output_____" ], [ "### Policy Search\n* **Policy**: the algorithm used by an agent to determine a next action.", "_____no_output_____" ], [ "### OpenAI Gym ([link:](https://gym.openai.com/))\n* A toolkit for various simulated environments.", "_____no_output_____" ] ], [ [ "!pip3 install --upgrade gym", "Requirement already up-to-date: gym in /home/bjpcjp/anaconda3/lib/python3.5/site-packages\nRequirement already up-to-date: requests>=2.0 in /home/bjpcjp/anaconda3/lib/python3.5/site-packages (from gym)\nRequirement already up-to-date: pyglet>=1.2.0 in /home/bjpcjp/anaconda3/lib/python3.5/site-packages (from gym)\nRequirement already up-to-date: six in /home/bjpcjp/anaconda3/lib/python3.5/site-packages (from gym)\nRequirement already up-to-date: numpy>=1.10.4 in /home/bjpcjp/anaconda3/lib/python3.5/site-packages (from gym)\n" ], [ "import gym\nenv = gym.make(\"CartPole-v0\")\nobs = env.reset()\nobs\nenv.render()", "[2017-04-27 13:05:47,311] Making new env: CartPole-v0\n" ] ], [ [ "* **make()** creates environment\n* **reset()** returns a 1st env't\n* **CartPole()** - each observation = 1D numpy array (hposition, velocity, angle, angularvelocity)\n![cartpole](pics/cartpole.png)", "_____no_output_____" ] ], [ [ "img = env.render(mode=\"rgb_array\")\nimg.shape", "_____no_output_____" ], [ "# what actions are possible?\n# in this case: 0 = accelerate left, 1 = accelerate right\nenv.action_space", "_____no_output_____" ], [ "# pole is leaning right. let's go further to the right.\naction = 1\nobs, reward, done, info = env.step(action)\nobs, reward, done, info", "_____no_output_____" ] ], [ [ "* new observation:\n * hpos = obs[0]<0\n * velocity = obs[1]>0 = moving to the right\n * angle = obs[2]>0 = leaning right\n * ang velocity = obs[3]<0 = slowing down?\n* reward = 1.0\n* done = False (episode not over)\n* info = (empty)", "_____no_output_____" ] ], [ [ "# example policy: \n# (1) accelerate left when leaning left, (2) accelerate right when leaning right\n# average reward over 500 episodes?\n\ndef basic_policy(obs):\n angle = obs[2]\n return 0 if angle < 0 else 1\n\ntotals = []\nfor episode in range(500):\n episode_rewards = 0\n obs = env.reset()\n for step in range(1000): # 1000 steps max, we don't want to run forever\n action = basic_policy(obs)\n obs, reward, done, info = env.step(action)\n episode_rewards += reward\n if done:\n break\n totals.append(episode_rewards)\n\nimport numpy as np\nnp.mean(totals), np.std(totals), np.min(totals), np.max(totals)", "_____no_output_____" ] ], [ [ "### NN Policies\n* observations as inputs - actions to be executed as outputs - determined by p(action)\n* approach lets agent find best balance between **exploring new actions** & **reusing known good actions**.\n\n### Evaluating Actions: Credit Assignment problem\n* Reinforcement Learning (RL) training not like supervised learning. \n* RL feedback is via rewards (often sparse & delayed)\n* How to determine which previous steps were \"good\" or \"bad\"? (aka \"*credit assigmnment problem*\")\n* Common tactic: applying a **discount rate** to older rewards.\n\n* Use normalization across many episodes to increase score reliability. 
\n\nNN Policy | Discounts & Rewards\n- | -\n![nn-policy](pics/nn-policy.png) | ![discount-rewards](pics/discount-rewards.png)\n\n", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nfrom tensorflow.contrib.layers import fully_connected\n\n# 1. Specify the neural network architecture\nn_inputs = 4 # == env.observation_space.shape[0]\nn_hidden = 4 # simple task, don't need more hidden neurons\nn_outputs = 1 # only output prob(accelerating left)\ninitializer = tf.contrib.layers.variance_scaling_initializer()\n\n# 2. Build the neural network\nX = tf.placeholder(\n tf.float32, shape=[None, n_inputs])\n\nhidden = fully_connected(\n X, n_hidden, \n activation_fn=tf.nn.elu,\n weights_initializer=initializer)\n\nlogits = fully_connected(\n hidden, n_outputs, \n activation_fn=None,\n weights_initializer=initializer)\n\noutputs = tf.nn.sigmoid(logits) # logistic (sigmoid) ==> return 0.0-1.0\n\n# 3. Select a random action based on the estimated probabilities\np_left_and_right = tf.concat(\n axis=1, values=[outputs, 1 - outputs])\n\naction = tf.multinomial(\n tf.log(p_left_and_right), \n num_samples=1)\n\ninit = tf.global_variables_initializer()", "_____no_output_____" ] ], [ [ "### Policy Gradient (PG) algorithms\n* example: [\"reinforce\" algo, 1992](https://goo.gl/tUe4Sh)\n", "_____no_output_____" ], [ "### Markov Decision processes (MDPs)\n\n* Markov chains = stochastic processes, no memory, fixed #states, random transitions\n* Markov decision processes = similar to MCs - agent can choose action; transition probabilities depend on the action; transitions can return reward/punishment.\n* Goal: find policy with maximum rewards over time.\n\nMarkov Chain | Markov Decision Process\n- | -\n![markov-chain](pics/markov-chain.png) | ![alt](pics/markov-decision-process.png)\n\n* **Bellman Optimality Equation**: a method to estimate optimal state value of any state *s*.\n* Knowing optimal states = useful, but doesn't tell agent what to do. **Q-Value algorithm** helps solve this problem. 
Optimal Q-Value of a state-action pair = sum of discounted future rewards the agent can expect on average.\n", "_____no_output_____" ] ], [ [ "# Define MDP:\n\nnan=np.nan # represents impossible actions\nT = np.array([ # shape=[s, a, s']\n [[0.7, 0.3, 0.0], [1.0, 0.0, 0.0], [0.8, 0.2, 0.0]],\n [[0.0, 1.0, 0.0], [nan, nan, nan], [0.0, 0.0, 1.0]],\n [[nan, nan, nan], [0.8, 0.1, 0.1], [nan, nan, nan]],\n ])\n\nR = np.array([ # shape=[s, a, s']\n [[10., 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]],\n [[10., 0.0, 0.0], [nan, nan, nan], [0.0, 0.0, -50.]],\n [[nan, nan, nan], [40., 0.0, 0.0], [nan, nan, nan]],\n ])\n\npossible_actions = [[0, 1, 2], [0, 2], [1]]\n\n# run Q-Value Iteration algo\n\nQ = np.full((3, 3), -np.inf)\nfor state, actions in enumerate(possible_actions):\n Q[state, actions] = 0.0 # Initial value = 0.0, for all possible actions\n\nlearning_rate = 0.01\ndiscount_rate = 0.95\nn_iterations = 100\n\nfor iteration in range(n_iterations):\n Q_prev = Q.copy()\n for s in range(3):\n for a in possible_actions[s]:\n Q[s, a] = np.sum([\n T[s, a, sp] * (R[s, a, sp] + discount_rate * np.max(Q_prev[sp]))\n for sp in range(3)\n ])\n \nprint(\"Q: \\n\",Q)\nprint(\"Optimal action for each state:\\n\",np.argmax(Q, axis=1))", "Q: \n [[ 21.88646117 20.79149867 16.854807 ]\n [ 1.10804034 -inf 1.16703135]\n [ -inf 53.8607061 -inf]]\nOptimal action for each state:\n [0 2 1]\n" ], [ "# change discount rate to 0.9, see how policy changes:\n\ndiscount_rate = 0.90\n\nfor iteration in range(n_iterations):\n Q_prev = Q.copy()\n for s in range(3):\n for a in possible_actions[s]:\n Q[s, a] = np.sum([\n T[s, a, sp] * (R[s, a, sp] + discount_rate * np.max(Q_prev[sp]))\n for sp in range(3)\n ])\n \nprint(\"Q: \\n\",Q)\nprint(\"Optimal action for each state:\\n\",np.argmax(Q, axis=1))", "Q: \n [[ 1.89189499e+01 1.70270580e+01 1.36216526e+01]\n [ 3.09979853e-05 -inf -4.87968388e+00]\n [ -inf 5.01336811e+01 -inf]]\nOptimal action for each state:\n [0 0 1]\n" ] ], [ [ "### Temporal Difference Learning & Q-Learning\n* In general - agent has no knowledge of transition probabilities or rewards\n* **Temporal Difference Learning** (TD Learning) similar to value iteration, but accounts for this lack of knowlege.\n* Algorithm tracks running average of most recent awards & anticipated rewards.\n\n* **Q-Learning** algorithm adaptation of Q-Value Iteration where initial transition probabilities & rewards are unknown.", "_____no_output_____" ] ], [ [ "import numpy.random as rnd\n\nlearning_rate0 = 0.05\nlearning_rate_decay = 0.1\nn_iterations = 20000\n\ns = 0 # start in state 0\nQ = np.full((3, 3), -np.inf) # -inf for impossible actions\n\nfor state, actions in enumerate(possible_actions):\n Q[state, actions] = 0.0 # Initial value = 0.0, for all possible actions\n for iteration in range(n_iterations):\n a = rnd.choice(possible_actions[s]) # choose an action (randomly)\n sp = rnd.choice(range(3), p=T[s, a]) # pick next state using T[s, a]\n reward = R[s, a, sp]\n \n learning_rate = learning_rate0 / (1 + iteration * learning_rate_decay)\n \n Q[s, a] = learning_rate * Q[s, a] + (1 - learning_rate) * (reward + discount_rate * np.max(Q[sp]))\n\ns = sp # move to next state\n\nprint(\"Q: \\n\",Q)\nprint(\"Optimal action for each state:\\n\",np.argmax(Q, axis=1))", "Q: \n [[ -inf 2.47032823e-323 -inf]\n [ 0.00000000e+000 -inf 0.00000000e+000]\n [ -inf 0.00000000e+000 -inf]]\nOptimal action for each state:\n [1 0 1]\n" ] ], [ [ "### Exploration Policies\n* Q-Learning works only if exploration is thorough - not always possible.\n* 
Better alternative: explore more interesting routes using a *sigma* probability", "_____no_output_____" ], [ "### Approximate Q-Learning\n* TODO", "_____no_output_____" ], [ "### Ms Pac-Man with Deep Q-Learning", "_____no_output_____" ] ], [ [ "env = gym.make('MsPacman-v0')\nobs = env.reset()\nobs.shape, env.action_space\n\n# action_space = 9 possible joystick actions\n# observations = atari screenshots as 3D NumPy arrays", "[2017-04-27 13:06:21,861] Making new env: MsPacman-v0\n" ], [ "mspacman_color = np.array([210, 164, 74]).mean()\n\n# crop image, shrink to 88x80 pixels, convert to grayscale, improve contrast\n\ndef preprocess_observation(obs):\n img = obs[1:176:2, ::2] # crop and downsize\n img = img.mean(axis=2) # to greyscale\n img[img==mspacman_color] = 0 # improve contrast\n img = (img - 128) / 128 - 1 # normalize from -1. to 1.\n return img.reshape(88, 80, 1)", "_____no_output_____" ] ], [ [ "Ms PacMan Observation | Deep-Q net\n- | -\n![observation](pics/mspacman-before-after.png) | ![alt](pics/mspacman-deepq.png)\n", "_____no_output_____" ] ], [ [ "# Create DQN\n# 3 convo layers, then 2 FC layers including output layer\n\nfrom tensorflow.contrib.layers import convolution2d, fully_connected\n\ninput_height = 88\ninput_width = 80\ninput_channels = 1\nconv_n_maps = [32, 64, 64]\nconv_kernel_sizes = [(8,8), (4,4), (3,3)]\nconv_strides = [4, 2, 1]\nconv_paddings = [\"SAME\"]*3\nconv_activation = [tf.nn.relu]*3\nn_hidden_in = 64 * 11 * 10 # conv3 has 64 maps of 11x10 each\nn_hidden = 512\nhidden_activation = tf.nn.relu\nn_outputs = env.action_space.n # 9 discrete actions are available\n\ninitializer = tf.contrib.layers.variance_scaling_initializer()\n\n# training will need ***TWO*** DQNs:\n# one to train the actor\n# another to learn from trials & errors (critic)\n# q_network is our net builder.\n\ndef q_network(X_state, scope):\n prev_layer = X_state\n conv_layers = []\n\n with tf.variable_scope(scope) as scope:\n \n for n_maps, kernel_size, stride, padding, activation in zip(\n conv_n_maps, \n conv_kernel_sizes, \n conv_strides,\n conv_paddings, \n conv_activation):\n \n prev_layer = convolution2d(\n prev_layer, \n num_outputs=n_maps, \n kernel_size=kernel_size,\n stride=stride, \n padding=padding, \n activation_fn=activation,\n weights_initializer=initializer)\n \n conv_layers.append(prev_layer)\n\n last_conv_layer_flat = tf.reshape(\n prev_layer, \n shape=[-1, n_hidden_in])\n \n hidden = fully_connected(\n last_conv_layer_flat, \n n_hidden, \n activation_fn=hidden_activation,\n weights_initializer=initializer)\n \n outputs = fully_connected(\n hidden, \n n_outputs, \n activation_fn=None,\n weights_initializer=initializer)\n \n trainable_vars = tf.get_collection(\n tf.GraphKeys.TRAINABLE_VARIABLES,\n scope=scope.name)\n \n trainable_vars_by_name = {var.name[len(scope.name):]: var\n for var in trainable_vars}\n\n return outputs, trainable_vars_by_name\n", "_____no_output_____" ], [ "# create input placeholders & two DQNs\n\nX_state = tf.placeholder(\n tf.float32, \n shape=[None, input_height, input_width,\n input_channels])\n\nactor_q_values, actor_vars = q_network(X_state, scope=\"q_networks/actor\")\ncritic_q_values, critic_vars = q_network(X_state, scope=\"q_networks/critic\")\n\ncopy_ops = [actor_var.assign(critic_vars[var_name])\n for var_name, actor_var in actor_vars.items()]\n\n\n# op to copy all trainable vars of critic DQN to actor DQN...\n# use tf.group() to group all assignment ops together\n\ncopy_critic_to_actor = tf.group(*copy_ops)", "_____no_output_____" ], [ "# 
Critic DQN learns by matching Q-Value predictions \n# to actor's Q-Value estimations during game play\n\n# Actor will use a \"replay memory\" (5 tuples):\n# state, action, next-state, reward, (0=over/1=continue)\n\n# use normal supervised training ops\n# occasionally copy critic DQN to actor DQN\n\n# DQN normally returns one Q-Value for every poss. action\n# only need Q-Value of action actually chosen\n# So, convert action to one-hot vector [0...1...0], multiple by Q-values\n# then sum over 1st axis.\n\nX_action = tf.placeholder(\n tf.int32, shape=[None])\n\nq_value = tf.reduce_sum(\n critic_q_values * tf.one_hot(X_action, n_outputs),\n axis=1, keep_dims=True)\n", "_____no_output_____" ], [ "# training setup\n\ntf.reset_default_graph()\n\ny = tf.placeholder(\n tf.float32, shape=[None, 1])\n\ncost = tf.reduce_mean(\n tf.square(y - q_value))\n\n# non-trainable. minimize() op will manage incrementing it\nglobal_step = tf.Variable(\n 0, \n trainable=False, \n name='global_step')\n\noptimizer = tf.train.AdamOptimizer(learning_rate)\n\ntraining_op = optimizer.minimize(cost, global_step=global_step)\n\ninit = tf.global_variables_initializer()\n\nsaver = tf.train.Saver()\n", "_____no_output_____" ], [ "# use a deque list to build the replay memory\n\nfrom collections import deque\n\nreplay_memory_size = 10000\nreplay_memory = deque(\n [], maxlen=replay_memory_size)\n\ndef sample_memories(batch_size):\n indices = rnd.permutation(\n len(replay_memory))[:batch_size]\n cols = [[], [], [], [], []] # state, action, reward, next_state, continue\n\n for idx in indices:\n memory = replay_memory[idx]\n for col, value in zip(cols, memory):\n col.append(value)\n\n cols = [np.array(col) for col in cols]\n return (cols[0], cols[1], cols[2].reshape(-1, 1), cols[3], cols[4].reshape(-1, 1))\n", "_____no_output_____" ], [ "# create an actor\n# use epsilon-greedy policy\n# gradually decrease epsilon from 1.0 to 0.05 across 50K training steps\n\neps_min = 0.05\neps_max = 1.0\neps_decay_steps = 50000\n\ndef epsilon_greedy(q_values, step):\n epsilon = max(eps_min, eps_max - (eps_max-eps_min) * step/eps_decay_steps)\n if rnd.rand() < epsilon:\n return rnd.randint(n_outputs) # random action\n else:\n return np.argmax(q_values) # optimal action", "_____no_output_____" ], [ "# training setup: the variables\n\nn_steps = 100000 # total number of training steps\ntraining_start = 1000 # start training after 1,000 game iterations\ntraining_interval = 3 # run a training step every 3 game iterations\nsave_steps = 50 # save the model every 50 training steps\ncopy_steps = 25 # copy the critic to the actor every 25 training steps\ndiscount_rate = 0.95\nskip_start = 90 # skip the start of every game (it's just waiting time)\nbatch_size = 50\niteration = 0 # game iterations\ncheckpoint_path = \"./my_dqn.ckpt\"\ndone = True # env needs to be reset\n", "_____no_output_____" ], [ "# let's get busy\nimport os\n\nwith tf.Session() as sess:\n \n # restore models if checkpoint file exists\n if os.path.isfile(checkpoint_path):\n saver.restore(sess, checkpoint_path)\n \n # otherwise normally initialize variables\n else:\n init.run()\n \n while True:\n step = global_step.eval()\n if step >= n_steps:\n break\n\n # iteration = total number of game steps from beginning\n \n iteration += 1\n if done: # game over, start again\n obs = env.reset()\n\n for skip in range(skip_start): # skip the start of each game\n obs, reward, done, info = env.step(0)\n state = preprocess_observation(obs)\n\n # Actor evaluates what to do\n q_values = 
actor_q_values.eval(feed_dict={X_state: [state]})\n action = epsilon_greedy(q_values, step)\n\n # Actor plays\n obs, reward, done, info = env.step(action)\n next_state = preprocess_observation(obs)\n\n # Let's memorize what just happened\n replay_memory.append((state, action, reward, next_state, 1.0 - done))\n state = next_state\n if iteration < training_start or iteration % training_interval != 0:\n continue\n\n # Critic learns\n X_state_val, X_action_val, rewards, X_next_state_val, continues = (\n sample_memories(batch_size))\n\n next_q_values = actor_q_values.eval(\n feed_dict={X_state: X_next_state_val})\n\n max_next_q_values = np.max(\n next_q_values, axis=1, keepdims=True)\n\n y_val = rewards + continues * discount_rate * max_next_q_values\n\n training_op.run(\n feed_dict={X_state: X_state_val, X_action: X_action_val, y: y_val})\n\n # Regularly copy critic to actor\n if step % copy_steps == 0:\n copy_critic_to_actor.run()\n\n # And save regularly\n if step % save_steps == 0:\n saver.save(sess, checkpoint_path)\n \n print(\"\\n\",np.average(y_val))", "\n 1.09000234097\n\n 1.35392784142\n\n 1.56906713688\n\n 2.5765440191\n\n 1.57079289043\n\n 1.75170834792\n\n 1.97005553639\n\n 1.97246688247\n\n 2.16126081383\n\n 1.550295331\n\n 1.75750140131\n\n 1.56052656734\n\n 1.7519523176\n\n 1.74495741558\n\n 1.95223849511\n\n 1.35289915931\n\n 1.56913152564\n\n 2.96387254691\n\n 1.76067311585\n\n 1.35536773229\n\n 1.54768545294\n\n 1.53594982147\n\n 1.56104325151\n\n 1.96987313104\n\n 2.35546155441\n\n 1.5688166486\n\n 3.08286282682\n\n 3.28864161086\n\n 3.2878398273\n\n 3.09510449028\n\n 3.09807873964\n\n 3.90697311211\n\n 3.07757974195\n\n 3.09214673901\n\n 3.28402029777\n\n 3.28337000942\n\n 3.4255889504\n\n 3.49763186431\n\n 2.85764229989\n\n 3.04482784653\n\n 2.68228099513\n\n 3.28635532999\n\n 3.29647485089\n\n 3.07898310328\n\n 3.10530596256\n\n 3.27691918874\n\n 3.09561720395\n\n 2.67830030346\n\n 3.09576807404\n\n 3.288335078\n\n 3.0956065948\n\n 5.21222548962\n\n 4.21721751595\n\n 4.7905973649\n\n 4.59864345837\n\n 4.39875211382\n\n 4.51839643717\n\n 4.59503188992\n\n 5.01186150789\n\n 4.77968219852\n\n 4.78787856865\n\n 4.20382899523\n\n 4.20432999897\n\n 5.0028930707\n\n 5.20069698572\n\n 4.80375980473\n\n 5.19750945711\n\n 4.20367767668\n\n 4.19593407536\n\n 4.40061367989\n\n 4.6054182477\n\n 4.79921974087\n\n 4.38844807434\n\n 4.20397897291\n\n 4.60095557356\n\n 4.59488785553\n\n 5.75924422598\n\n 5.75949315596\n\n 5.16320213652\n\n 5.36019721937\n\n 5.56076610899\n\n 5.16949163198\n\n 5.75895399189\n\n 5.96050115204\n\n 5.97032629395\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d02044f8849a68d81ca808d58efdc3127540f7a6
242,267
ipynb
Jupyter Notebook
Instructions/.ipynb_checkpoints/climate_starter_Initial_file-2-checkpoint.ipynb
BklynIrish/sqlalchemy_challenge
d14ccc3b6d96032b404d39d36ec2008e948aa8ae
[ "ADSL" ]
1
2021-01-17T21:55:41.000Z
2021-01-17T21:55:41.000Z
Instructions/.ipynb_checkpoints/climate_starter_Initial_file-2-checkpoint.ipynb
BklynIrish/sqlalchemy_challenge
d14ccc3b6d96032b404d39d36ec2008e948aa8ae
[ "ADSL" ]
null
null
null
Instructions/.ipynb_checkpoints/climate_starter_Initial_file-2-checkpoint.ipynb
BklynIrish/sqlalchemy_challenge
d14ccc3b6d96032b404d39d36ec2008e948aa8ae
[ "ADSL" ]
null
null
null
62.359588
43,728
0.63882
[ [ [ "# SQLAlchemy Homework - Surfs Up!\n\n### Before You Begin\n\n1. Create a new repository for this project called `sqlalchemy-challenge`. **Do not add this homework to an existing repository**.\n\n2. Clone the new repository to your computer.\n\n3. Add your Jupyter notebook and `app.py` to this folder. These will be the main scripts to run for analysis.\n\n4. Push the above changes to GitHub or GitLab.\n\n![surfs-up.png](Images/surfs-up.png)\n\nCongratulations! You've decided to treat yourself to a long holiday vacation in Honolulu, Hawaii! To help with your trip planning, you need to do some climate analysis on the area. The following outlines what you need to do.\n\n## Step 1 - Climate Analysis and Exploration\n\nTo begin, use Python and SQLAlchemy to do basic climate analysis and data exploration of your climate database. All of the following analysis should be completed using SQLAlchemy ORM queries, Pandas, and Matplotlib.\n\n* Use the provided [starter notebook](climate_starter.ipynb) and [hawaii.sqlite](Resources/hawaii.sqlite) files to complete your climate analysis and data exploration.\n\n* Choose a start date and end date for your trip. Make sure that your vacation range is approximately 3-15 days total.\n\n* Use SQLAlchemy `create_engine` to connect to your sqlite database.\n\n* Use SQLAlchemy `automap_base()` to reflect your tables into classes and save a reference to those classes called `Station` and `Measurement`.\n\n### Precipitation Analysis\n\n* Design a query to retrieve the last 12 months of precipitation data.\n\n* Select only the `date` and `prcp` values.\n\n* Load the query results into a Pandas DataFrame and set the index to the date column.\n\n* Sort the DataFrame values by `date`.\n\n* Plot the results using the DataFrame `plot` method.\n\n ![precipitation](Images/precipitation.png)\n\n* Use Pandas to print the summary statistics for the precipitation data.\n\n### Station Analysis\n\n* Design a query to calculate the total number of stations.\n\n* Design a query to find the most active stations.\n\n * List the stations and observation counts in descending order.\n\n * Which station has the highest number of observations?\n\n * Hint: You will need to use a function such as `func.min`, `func.max`, `func.avg`, and `func.count` in your queries.\n\n* Design a query to retrieve the last 12 months of temperature observation data (TOBS).\n\n * Filter by the station with the highest number of observations.\n\n * Plot the results as a histogram with `bins=12`.\n\n ![station-histogram](Images/station-histogram.png)\n\n- - -\n\n## Step 2 - Climate App\n\nNow that you have completed your initial analysis, design a Flask API based on the queries that you have just developed.\n\n* Use Flask to create your routes.\n\n### Routes\n\n* `/`\n\n * Home page.\n\n * List all routes that are available.\n\n* `/api/v1.0/precipitation`\n\n * Convert the query results to a dictionary using `date` as the key and `prcp` as the value.\n\n * Return the JSON representation of your dictionary.\n\n* `/api/v1.0/stations`\n\n * Return a JSON list of stations from the dataset.\n\n* `/api/v1.0/tobs`\n * Query the dates and temperature observations of the most active station for the last year of data.\n \n * Return a JSON list of temperature observations (TOBS) for the previous year.\n\n* `/api/v1.0/<start>` and `/api/v1.0/<start>/<end>`\n\n * Return a JSON list of the minimum temperature, the average temperature, and the max temperature for a given start or start-end range.\n\n * When given the 
start only, calculate `TMIN`, `TAVG`, and `TMAX` for all dates greater than and equal to the start date.\n\n * When given the start and the end date, calculate the `TMIN`, `TAVG`, and `TMAX` for dates between the start and end date inclusive.\n\n## Hints\n\n* You will need to join the station and measurement tables for some of the queries.\n\n* Use Flask `jsonify` to convert your API data into a valid JSON response object.\n\n- - -\n\n", "_____no_output_____" ] ], [ [ "%matplotlib inline\nfrom matplotlib import style\nstyle.use('fivethirtyeight')\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "import numpy as np\nimport pandas as pd\n", "_____no_output_____" ], [ "import datetime as dt", "_____no_output_____" ], [ "import seaborn as sns\nfrom scipy.stats import linregress\nfrom sklearn import datasets", "_____no_output_____" ] ], [ [ "# Reflect Tables into SQLAlchemy ORM", "_____no_output_____" ], [ "### Precipitation Analysis\n\n* Design a query to retrieve the last 12 months of precipitation data.\n\n* Select only the `date` and `prcp` values.\n\n* Load the query results into a Pandas DataFrame and set the index to the date column.\n\n* Sort the DataFrame values by `date`.\n\n* Plot the results using the DataFrame `plot` method.\n\n *![precipitation](Images/precipitation.png)\n\n* Use Pandas to print the summary statistics for the precipitation data.\n", "_____no_output_____" ] ], [ [ "# Python SQL toolkit and Object Relational Mapper\nimport sqlalchemy\nfrom sqlalchemy.ext.automap import automap_base\nfrom sqlalchemy.orm import Session\nfrom sqlalchemy import create_engine, func, inspect", "_____no_output_____" ], [ "engine = create_engine(\"sqlite:///Resources/hawaii.sqlite\")\n#Base.metadata.create_all(engine)", "_____no_output_____" ], [ "inspector = inspect(engine)\ninspector.get_table_names()", "_____no_output_____" ], [ "# reflect an existing database into a new model\nBase = automap_base()\n\n# reflect the tables\nBase.prepare(engine,reflect= True)\n\n# Reflect Database into ORM class\n\n#Base.classes.measurement", "_____no_output_____" ], [ "# Create our session (link) from Python to the DB\nsession = Session(bind=engine)\nsession = Session(engine)", "_____no_output_____" ], [ "# We can view all of the classes that automap found\nBase.classes.keys()", "_____no_output_____" ], [ "# Save references to each table\nMeasurement = Base.classes.measurement\nStation = Base.classes.station", "_____no_output_____" ], [ "engine.execute('Select * from measurement').fetchall()", "_____no_output_____" ], [ "# Get columns of 'measurement' table\ncolumns = inspector.get_columns('measurement')\nfor c in columns:\n print(c)\n", "{'name': 'id', 'type': INTEGER(), 'nullable': False, 'default': None, 'autoincrement': 'auto', 'primary_key': 1}\n{'name': 'station', 'type': TEXT(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}\n{'name': 'date', 'type': TEXT(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}\n{'name': 'prcp', 'type': FLOAT(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}\n{'name': 'tobs', 'type': FLOAT(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}\n" ], [ "# A very odd way to get all column values if they are made by tuples with keys and values, it's more straightforward\n# and sensible to just do columns = inspector.get_columns('measurement') the a for loop: for c in columns: print(c)\n\ncolumns = inspector.get_columns('measurement')\nfor c in 
columns:\n print(c.keys())\nfor c in columns:\n print(c.values())\n\n", "dict_keys(['name', 'type', 'nullable', 'default', 'autoincrement', 'primary_key'])\ndict_keys(['name', 'type', 'nullable', 'default', 'autoincrement', 'primary_key'])\ndict_keys(['name', 'type', 'nullable', 'default', 'autoincrement', 'primary_key'])\ndict_keys(['name', 'type', 'nullable', 'default', 'autoincrement', 'primary_key'])\ndict_keys(['name', 'type', 'nullable', 'default', 'autoincrement', 'primary_key'])\ndict_values(['id', INTEGER(), False, None, 'auto', 1])\ndict_values(['station', TEXT(), True, None, 'auto', 0])\ndict_values(['date', TEXT(), True, None, 'auto', 0])\ndict_values(['prcp', FLOAT(), True, None, 'auto', 0])\ndict_values(['tobs', FLOAT(), True, None, 'auto', 0])\n" ] ], [ [ "# Exploratory Climate Analysis", "_____no_output_____" ] ], [ [ "# Design a query to retrieve the last 12 months of precipitation data and plot the results\n\n# Design a query to retrieve the last 12 months of precipitation data.\nmax_date = session.query(func.max(Measurement.date)).all()[0][0]\n\n# Select only the date and prcp values. \n#datetime.datetime.strptime(date_time_str, '%Y-%m-%d %H:%M:%S.%f')\n\nimport datetime \nprint(max_date)\nprint(type(max_date))\n\n# Calculate the date 1 year ago from the last data point in the database\n\nmin_date = datetime.datetime.strptime(max_date,'%Y-%m-%d') - datetime.timedelta(days = 365)\nprint(min_date)\nprint(min_date.year, min_date.month, min_date.day)\n\n\n# Perform a query to retrieve the data and precipitation scores\nresults = session.query(Measurement.prcp, Measurement.date).filter(Measurement.date >= min_date).all()\nresults\n# Load the query results into a Pandas DataFrame and set the index to the date column.\n\nprcp_anal_df = pd.DataFrame(results, columns = ['prcp','date']).set_index('date')\n\n# Sort the DataFrame values by date.\nprcp_anal_df.sort_values(by=['date'], inplace=True)\n\nprcp_anal_df\n\n\n\n \n\n\n", "2017-08-23\n<class 'str'>\n2016-08-23 00:00:00\n2016 8 23\n" ], [ "# Create Plot(s)\nprcp_anal_df.plot(rot = 90)\nplt.xlabel('Date')\nplt.ylabel('Precipitation (inches)')\nplt.title('Precipitation over One Year in Hawaii')\nplt.savefig(\"histo_prcp_date.png\")\nplt.show()\n\n", "_____no_output_____" ], [ "sns.set()\n\nplot1 = prcp_anal_df.plot(figsize = (10, 5))\n\nfig = plot1.get_figure()\n\nplt.title('Precipitation in Hawaii')\n\nplt.xlabel('Date')\n\nplt.ylabel('Precipitation')\n\nplt.legend([\"Precipitation\"],loc=\"best\")\n\nplt.xticks(rotation=45)\n\nplt.tight_layout()\n\nplt.savefig(\"Precipitation in Hawaii_bar.png\")\n\nplt.show()", "_____no_output_____" ], [ "prcp_anal_df.describe()", "_____no_output_____" ], [ "# I wanted a range of precipitation amounts for plotting purposes...the code on line 3 and 4 and 5 didn't work\n\n## prcp_anal.max_prcp = session.query(func.max(Measurement.prcp.filter(Measurement.date >= '2016-08-23' ))).\\\n## order_by(func.max(Items.UnitPrice * Items.Quantity).desc()).all()\n## prcp_anal.max_prcp\n\nprcp_anal_max_prcp = session.query(Measurement.prcp, func.max(Measurement.prcp)).\\\n filter(Measurement.date >= '2016-08-23').\\\n group_by(Measurement.date).\\\n order_by(func.max(Measurement.prcp).asc()).all()\nprcp_anal_max_prcp\n\n# I initially did the following in a cell below. 
Again, I wanted a range of prcp values for the year in our DataFrame \n# so here I got the min but realized both the min and the max, or both queries are useless to me here unless I were\n# use plt.ylim in my plots, which I don't, I just allow the DF to supply its intrinsic values\n# and both give identical results. I will leave it here in thes assignment just to show my thought process\n\n# prcp_anal_min_prcp = session.query(Measurement.prcp, func.min(Measurement.prcp)).\\\n# filter(Measurement.date > '2016-08-23').\\\n# group_by(Measurement.date).\\\n# order_by(func.min(Measurement.prcp).asc()).all()\n# prcp_anal_min_prcp", "_____no_output_____" ] ], [ [ "***STATION ANALYSIS***.\\\n1) Design a query to calculate the total number of stations.\\\n2) Design a query to find the most active stations.\\\n3) List the stations and observation counts in descending order.\\\n4) Which station has the highest number of observations?.\\\n Hint: You will need to use a function such as func.min, func.max, func.avg, and func.count in your queries..\\\n5) Design a query to retrieve the last 12 months of temperature observation data (TOBS)..\\\n6) Filter by the station with the highest number of observations..\\\n7) Plot the results as a histogram with bins=12.", "_____no_output_____" ] ], [ [ "Station = Base.classes.station", "_____no_output_____" ], [ "session = Session(engine)", "_____no_output_____" ], [ "# Getting column values from each table, here 'station'\ncolumns = inspector.get_columns('station')\nfor c in columns:\n print(c)", "{'name': 'id', 'type': INTEGER(), 'nullable': False, 'default': None, 'autoincrement': 'auto', 'primary_key': 1}\n{'name': 'station', 'type': TEXT(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}\n{'name': 'name', 'type': TEXT(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}\n{'name': 'latitude', 'type': FLOAT(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}\n{'name': 'longitude', 'type': FLOAT(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}\n{'name': 'elevation', 'type': FLOAT(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}\n" ], [ "# Get columns of 'measurement' table\ncolumns = inspector.get_columns('measurement')\nfor c in columns:\n print(c)", "{'name': 'id', 'type': INTEGER(), 'nullable': False, 'default': None, 'autoincrement': 'auto', 'primary_key': 1}\n{'name': 'station', 'type': TEXT(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}\n{'name': 'date', 'type': TEXT(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}\n{'name': 'prcp', 'type': FLOAT(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}\n{'name': 'tobs', 'type': FLOAT(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}\n" ], [ "engine.execute('Select * from station').fetchall()", "_____no_output_____" ], [ "# Design a query to show how many stations are available in this dataset?\nsession.query(Station.station).count()\n", "_____no_output_____" ], [ "# What are the most active stations? (i.e. what stations have the most rows)?\n# List the stations and the counts in descending order.\n# List the stations and the counts in descending order. 
Think about somehow using this from extra activity\n\nActive_Stations = session.query(Station.station ,func.count(Measurement.tobs)).filter(Station.station == Measurement.station).\\\ngroup_by(Station.station).order_by(func.count(Measurement.tobs).desc()).all()\n\nprint(f\"The most active station {Active_Stations[0][0]} has {Active_Stations[0][1]} observations!\")\n\n\nActive_Stations\n", "The most active station USC00519281 has 2772 observations!\n" ], [ "# Using the station id from the previous query, calculate the lowest temperature recorded, \n# highest temperature recorded, and average temperature of the most active station?\nStation_Name = session.query(Station.name).filter(Station.station == Active_Stations[0][0]).all() \n\nprint(Station_Name)\n\nTemp_Stats = session.query(func.min(Measurement.tobs),func.max(Measurement.tobs),func.avg(Measurement.tobs)).\\\n filter(Station.station == Active_Stations[0][0]).all()\n\nprint(Temp_Stats)", "[('WAIKIKI 717.2, HI US',)]\n[(53.0, 87.0, 73.09795396419437)]\n" ], [ "# Choose the station with the highest number of temperature observations.\nStation_Name = session.query(Station.name).filter(Station.station == Active_Stations[0][0]).all() \nStation_Name\n", "_____no_output_____" ], [ "# Query the last 12 months of temperature observation data for this station \nresults_WAHIAWA = session.query(Measurement.date,Measurement.tobs).filter(Measurement.date > min_date).\\\n filter(Station.station == Active_Stations[0][0]).all()\n\nresults_WAHIAWA", "_____no_output_____" ], [ "# Make a DataFrame from the query results above showing dates and temp observation at the most active station\nresults_WAHIAWA_df = pd.DataFrame(results_WAHIAWA)\n\nresults_WAHIAWA_df", "_____no_output_____" ], [ "# Plot the results as a histogram\n\nsns.set()\n\nplt.figure(figsize=(10,5))\n\nplt.hist(results_WAHIAWA_df['tobs'],bins=12,color='magenta')\n\nplt.xlabel('Temperature',weight='bold')\n\nplt.ylabel('Frequency',weight='bold')\n\nplt.title('Station Analysis',weight='bold')\n\nplt.legend([\"Temperature Observation\"],loc=\"best\")\n\nplt.savefig(\"Station_Analysis_hist.png\")\n\nplt.show()", "_____no_output_____" ] ], [ [ "## Bonus Challenge Assignment", "_____no_output_____" ] ], [ [ "# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d' \n# and return the minimum, average, and maximum temperatures for that range of dates\ndef calc_temps(start_date, end_date):\n \"\"\"TMIN, TAVG, and TMAX for a list of dates.\n \n Args:\n start_date (string): A date string in the format %Y-%m-%d\n end_date (string): A date string in the format %Y-%m-%d\n \n Returns:\n TMIN, TAVE, and TMAX\n \"\"\"\n \n return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\\\n filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()\n\n# function usage example\nprint(calc_temps('2012-02-28', '2012-03-05'))", "[(62.0, 69.57142857142857, 74.0)]\n" ], [ "# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax \ncalc_temps('2017-06-22', '2017-07-05')\n", "_____no_output_____" ], [ "# for your trip using the previous year's data for those same dates.\n(calc_temps('2016-06-22', '2016-07-05'))", "_____no_output_____" ], [ "# Plot the results from your previous query as a bar chart. 
\n# Use \"Trip Avg Temp\" as your Title\n# Use the average temperature for the y value\n# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)\n", "_____no_output_____" ], [ "# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.\n# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation\n\n", "_____no_output_____" ], [ "# Create a query that will calculate the daily normals \n# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)\n\ndef daily_normals(date):\n \"\"\"Daily Normals.\n \n Args:\n date (str): A date string in the format '%m-%d'\n \n Returns:\n A list of tuples containing the daily normals, tmin, tavg, and tmax\n \n \"\"\"\n \n sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]\n return session.query(*sel).filter(func.strftime(\"%m-%d\", Measurement.date) == date).all()\n \ndaily_normals(\"01-01\")", "_____no_output_____" ], [ "# calculate the daily normals for your trip\n# push each tuple of calculations into a list called `normals`\n\n# Set the start and end date of the trip\n\n# Use the start and end date to create a range of dates\n\n# Stip off the year and save a list of %m-%d strings\n\n# Loop through the list of %m-%d strings and calculate the normals for each date\n", "_____no_output_____" ], [ "# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index\n", "_____no_output_____" ], [ "# Plot the daily normals as an area plot with `stacked=False`\n", "_____no_output_____" ] ], [ [ "## Step 2 - Climate App\n\nNow that you have completed your initial analysis, design a Flask API based on the queries that you have just developed.\n\n* Use Flask to create your routes.\n\n### Routes\n\n* `/`\n\n * Home page.\n\n * List all routes that are available.\n\n* `/api/v1.0/precipitation`\n\n * Convert the query results to a dictionary using `date` as the key and `prcp` as the value.\n\n * Return the JSON representation of your dictionary.\n\n* `/api/v1.0/stations`\n\n * Return a JSON list of stations from the dataset.\n\n* `/api/v1.0/tobs`\n * Query the dates and temperature observations of the most active station for the last year of data.\n \n * Return a JSON list of temperature observations (TOBS) for the previous year.\n\n* `/api/v1.0/<start>` and `/api/v1.0/<start>/<end>`\n\n * Return a JSON list of the minimum temperature, the average temperature, and the max temperature for a given start or start-end range.\n\n * When given the start only, calculate `TMIN`, `TAVG`, and `TMAX` for all dates greater than and equal to the start date.\n\n * When given the start and the end date, calculate the `TMIN`, `TAVG`, and `TMAX` for dates between the start and end date inclusive.\n\n## Hints\n\n* You will need to join the station and measurement tables for some of the queries.\n\n* Use Flask `jsonify` to convert your API data into a valid JSON response object.\n\n- - -\n\n", "_____no_output_____" ] ], [ [ "import numpy as np\n\nimport datetime as dt\nfrom datetime import timedelta, datetime\n\nimport sqlalchemy\nfrom sqlalchemy.ext.automap import automap_base\nfrom sqlalchemy.orm import Session\nfrom sqlalchemy import create_engine, func, distinct, text, desc\n\nfrom flask import Flask, jsonify\n\n\n#################################################\n# Database 
Setup\n#################################################\n#engine = create_engine(\"sqlite:///Resources/hawaii.sqlite\")\nengine = create_engine(\"sqlite:///Resources/hawaii.sqlite?check_same_thread=False\")\n# reflect an existing database into a new model\nBase = automap_base()\n# reflect the tables\nBase.prepare(engine, reflect=True)\n\n# Save reference to the table\nMeasurement = Base.classes.measurement\nStation = Base.classes.station\n\n#################################################\n# Flask Setup\n#################################################\napp = Flask(__name__)\n\n\n#################################################\n# Flask Routes\n#################################################\n\n@app.route(\"/\")\ndef welcome():\n \"\"\"List all available api routes.\"\"\"\n return (\n f\"Available Routes:<br/>\"\n f\"/api/v1.0/precipitation<br/>\"\n f\"/api/v1.0/stations<br/>\"\n f\"/api/v1.0/tobs<br/>\"\n f\"/api/v1.0/<br/>\"\n f\"/api/v1.0/\"\n )\n\n\n@app.route(\"/api/v1.0/precipitation\")\ndef precipitation():\n # Create our session (link) from Python to the DB\n session = Session(engine)\n\n \"\"\"Return a list of all precipitation data\"\"\"\n # Query Precipitation data\n annual_rainfall = session.query(Measurement.date, Measurement.prcp).order_by(Measurement.date).all()\n\n session.close()\n\n # Convert list of tuples into normal list\n all_rain = dict(annual_rainfall)\n\n return jsonify(all_rain)\n\nif __name__ == '__main__':\n app.run(debug=True)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
d0204d6ccf48d75c61cfe995f80acf3f22e84879
532,545
ipynb
Jupyter Notebook
lessons/03_Lesson03_doublet.ipynb
goodsang1023/aeropython
dbb780745fc93f3d6fd173b7b55ba23cda3c1d8d
[ "CC-BY-4.0" ]
null
null
null
lessons/03_Lesson03_doublet.ipynb
goodsang1023/aeropython
dbb780745fc93f3d6fd173b7b55ba23cda3c1d8d
[ "CC-BY-4.0" ]
null
null
null
lessons/03_Lesson03_doublet.ipynb
goodsang1023/aeropython
dbb780745fc93f3d6fd173b7b55ba23cda3c1d8d
[ "CC-BY-4.0" ]
1
2021-01-31T22:54:57.000Z
2021-01-31T22:54:57.000Z
702.565963
230,320
0.948861
[ [ [ "###### Text provided under a Creative Commons Attribution license, CC-BY. Code under MIT license. (c)2014 Lorena A. Barba, Olivier Mesnard. Thanks: NSF for support via CAREER award #1149784.", "_____no_output_____" ], [ "##### Version 0.2 -- February 2014", "_____no_output_____" ], [ "# Doublet", "_____no_output_____" ], [ "Welcome to the third lesson of *AeroPython*! We created some very interesting potential flows in lessons 1 and 2, with our [Source & Sink](01_Lesson01_sourceSink.ipynb) notebook, and our [Source & Sink in a Freestream](02_Lesson02_sourceSinkFreestream.ipynb) notebook.\n\nThink about the Source & Sink again, and now imagine that you are looking at this flow pattern from very far away. The streamlines that are between the source and the sink will be very short, from this vantage point. And the other streamlines will start looking like two groups of circles, tangent at the origin. If you look from far enough away, the distance between source and sink approaches zero, and the pattern you see is called a *doublet*.\n\nLet's see what this looks like. First, load our favorite libraries.", "_____no_output_____" ] ], [ [ "import math\nimport numpy\nfrom matplotlib import pyplot\n# embed figures into the notebook\n%matplotlib inline", "_____no_output_____" ] ], [ [ "In the previous notebook, we saw that a source-sink pair in a uniform flow can be used to represent the streamlines around a particular shape, named a Rankine oval. In this notebook, we will turn that source-sink pair into a doublet.\n\nFirst, consider a source of strength $\\sigma$ at $\\left(-\\frac{l}{2},0\\right)$ and a sink of opposite strength located at $\\left(\\frac{l}{2},0\\right)$. Here is a sketch to help you visualize the situation:\n\n<center><img src=\"resources/doubletSketch1.png\"><center>", "_____no_output_____" ], [ "The stream-function associated to the source-sink pair, evaluated at point $\\text{P}\\left(x,y\\right)$, is\n\n$$\\psi\\left(x,y\\right) = \\frac{\\sigma}{2\\pi}\\left(\\theta_1-\\theta_2\\right) = -\\frac{\\sigma}{2\\pi}\\Delta\\theta$$\n\nLet the distance $l$ between the two singularities approach zero while the strength magnitude is increasing so that the product $\\sigma l$ remains constant. 
In the limit, this flow pattern is a *doublet* and we define its strength by $\\kappa = \\sigma l$.\n\nThe stream-function of a doublet, evaluated at point $\\text{P}\\left(x,y\\right)$, is given by\n\n$$\\psi\\left(x,y\\right) = \\lim \\limits_{l \\to 0} \\left(-\\frac{\\sigma}{2\\pi}d\\theta\\right) \\quad \\text{and} \\quad \\sigma l = \\text{constant}$$\n\n<center><img src=\"resources/doubletSketch2.png\"></center>", "_____no_output_____" ], [ "Considering the case where $d\\theta$ is infinitesimal, we deduce from the figure above that\n\n$$a = l\\sin\\theta$$\n\n$$b = r-l\\cos\\theta$$\n\n$$d\\theta = \\frac{a}{b} = \\frac{l\\sin\\theta}{r-l\\cos\\theta}$$\n\nso the stream function becomes\n\n$$\\psi\\left(r,\\theta\\right) = \\lim \\limits_{l \\to 0} \\left(-\\frac{\\sigma l}{2\\pi}\\frac{\\sin\\theta}{r-l\\cos\\theta}\\right) \\quad \\text{and} \\quad \\sigma l = \\text{constant}$$\n\ni.e.\n\n$$\\psi\\left(r,\\theta\\right) = -\\frac{\\kappa}{2\\pi}\\frac{\\sin\\theta}{r}$$\n\nIn Cartesian coordinates, a doublet located at the origin has the stream function\n\n$$\\psi\\left(x,y\\right) = -\\frac{\\kappa}{2\\pi}\\frac{y}{x^2+y^2}$$\n\nfrom which we can derive the velocity components\n\n$$u\\left(x,y\\right) = \\frac{\\partial\\psi}{\\partial y} = -\\frac{\\kappa}{2\\pi}\\frac{x^2-y^2}{\\left(x^2+y^2\\right)^2}$$\n\n$$v\\left(x,y\\right) = -\\frac{\\partial\\psi}{\\partial x} = -\\frac{\\kappa}{2\\pi}\\frac{2xy}{\\left(x^2+y^2\\right)^2}$$\n\nNow we have done the math, it is time to code and visualize what the streamlines look like. We start by creating a mesh grid.", "_____no_output_____" ] ], [ [ "N = 50 # Number of points in each direction\nx_start, x_end = -2.0, 2.0 # x-direction boundaries\ny_start, y_end = -1.0, 1.0 # y-direction boundaries\nx = numpy.linspace(x_start, x_end, N) # creates a 1D-array for x\ny = numpy.linspace(y_start, y_end, N) # creates a 1D-array for y\nX, Y = numpy.meshgrid(x, y) # generates a mesh grid", "_____no_output_____" ] ], [ [ "We consider a doublet of strength $\\kappa=1.0$ located at the origin.", "_____no_output_____" ] ], [ [ "kappa = 1.0 # strength of the doublet\nx_doublet, y_doublet = 0.0, 0.0 # location of the doublet", "_____no_output_____" ] ], [ [ "As seen in the previous notebook, we play smart by defining functions to calculate the stream function and the velocity components that could be re-used if we decide to insert more than one doublet in our domain.", "_____no_output_____" ] ], [ [ "def get_velocity_doublet(strength, xd, yd, X, Y):\n \"\"\"\n Returns the velocity field generated by a doublet.\n \n Parameters\n ----------\n strength: float\n Strength of the doublet.\n xd: float\n x-coordinate of the doublet.\n yd: float\n y-coordinate of the doublet.\n X: 2D Numpy array of floats\n x-coordinate of the mesh points.\n Y: 2D Numpy array of floats\n y-coordinate of the mesh points.\n \n Returns\n -------\n u: 2D Numpy array of floats\n x-component of the velocity vector field.\n v: 2D Numpy array of floats\n y-component of the velocity vector field.\n \"\"\"\n u = (- strength / (2 * math.pi) *\n ((X - xd)**2 - (Y - yd)**2) /\n ((X - xd)**2 + (Y - yd)**2)**2)\n v = (- strength / (2 * math.pi) *\n 2 * (X - xd) * (Y - yd) /\n ((X - xd)**2 + (Y - yd)**2)**2)\n \n return u, v\n\ndef get_stream_function_doublet(strength, xd, yd, X, Y):\n \"\"\"\n Returns the stream-function generated by a doublet.\n \n Parameters\n ----------\n strength: float\n Strength of the doublet.\n xd: float\n x-coordinate of the doublet.\n yd: float\n y-coordinate of 
the doublet.\n X: 2D Numpy array of floats\n x-coordinate of the mesh points.\n Y: 2D Numpy array of floats\n y-coordinate of the mesh points.\n \n Returns\n -------\n psi: 2D Numpy array of floats\n The stream-function.\n \"\"\"\n psi = - strength / (2 * math.pi) * (Y - yd) / ((X - xd)**2 + (Y - yd)**2)\n \n return psi", "_____no_output_____" ] ], [ [ "Once the functions have been defined, we call them using the parameters of the doublet: its strength `kappa` and its location `x_doublet`, `y_doublet`.", "_____no_output_____" ] ], [ [ "# compute the velocity field on the mesh grid\nu_doublet, v_doublet = get_velocity_doublet(kappa, x_doublet, y_doublet, X, Y)\n\n# compute the stream-function on the mesh grid\npsi_doublet = get_stream_function_doublet(kappa, x_doublet, y_doublet, X, Y)", "_____no_output_____" ] ], [ [ "We are ready to do a nice visualization.", "_____no_output_____" ] ], [ [ "# plot the streamlines\nwidth = 10\nheight = (y_end - y_start) / (x_end - x_start) * width\npyplot.figure(figsize=(width, height))\npyplot.xlabel('x', fontsize=16)\npyplot.ylabel('y', fontsize=16)\npyplot.xlim(x_start, x_end)\npyplot.ylim(y_start, y_end)\npyplot.streamplot(X, Y, u_doublet, v_doublet,\n density=2, linewidth=1, arrowsize=1, arrowstyle='->')\npyplot.scatter(x_doublet, y_doublet, color='#CD2305', s=80, marker='o');", "_____no_output_____" ] ], [ [ "Just like we imagined that the streamlines of a source-sink pair would look from very far away. What is this good for, you might ask? It does not look like any streamline pattern that has a practical use in aerodynamics. If that is what you think, you would be wrong!", "_____no_output_____" ], [ "## Uniform flow past a doublet", "_____no_output_____" ], [ "A doublet alone does not give so much information about how it can be used to represent a practical flow pattern in aerodynamics. But let's use our superposition powers: our doublet in a uniform flow turns out to be a very interesting flow pattern. Let's first define a uniform horizontal flow.", "_____no_output_____" ] ], [ [ "u_inf = 1.0 # freestream speed", "_____no_output_____" ] ], [ [ "Remember from our previous lessons that the Cartesian velocity components of a uniform flow in the $x$-direction are given by $u=U_\\infty$ and $v=0$. Integrating, we find the stream-function, $\\psi = U_\\infty y$.\n\nSo let's calculate velocities and stream function values for all points in our grid. And as we now know, we can calculate them all together with one line of code per array.", "_____no_output_____" ] ], [ [ "u_freestream = u_inf * numpy.ones((N, N), dtype=float)\nv_freestream = numpy.zeros((N, N), dtype=float)\n\npsi_freestream = u_inf * Y", "_____no_output_____" ] ], [ [ "Below, the stream function of the flow created by superposition of a doublet in a free stream is obtained by simple addition. Like we did before in the [Source & Sink in a Freestream](02_Lesson02_sourceSinkFreestream.ipynb) notebook, we find the *dividing streamline* and plot it separately in red. \n\nThe plot shows that this pattern can represent the flow around a cylinder with center at the location of the doublet. All the streamlines remaining outside the cylinder originated from the uniform flow. All the streamlines inside the cylinder can be ignored and this area assumed to be a solid object. 
This will turn out to be more useful than you may think.", "_____no_output_____" ] ], [ [ "# superposition of the doublet on the freestream flow\nu = u_freestream + u_doublet\nv = v_freestream + v_doublet\npsi = psi_freestream + psi_doublet\n\n# plot the streamlines\nwidth = 10\nheight = (y_end - y_start) / (x_end - x_start) * width\npyplot.figure(figsize=(width, height))\npyplot.xlabel('x', fontsize=16)\npyplot.ylabel('y', fontsize=16)\npyplot.xlim(x_start, x_end)\npyplot.ylim(y_start, y_end)\npyplot.streamplot(X, Y, u, v,\n density=2, linewidth=1, arrowsize=1, arrowstyle='->')\npyplot.contour(X, Y, psi,\n levels=[0.], colors='#CD2305', linewidths=2, linestyles='solid')\npyplot.scatter(x_doublet, y_doublet, color='#CD2305', s=80, marker='o')\n\n# calculate the stagnation points\nx_stagn1, y_stagn1 = +math.sqrt(kappa / (2 * math.pi * u_inf)), 0.0\nx_stagn2, y_stagn2 = -math.sqrt(kappa / (2 * math.pi * u_inf)), 0.0\n\n# display the stagnation points\npyplot.scatter([x_stagn1, x_stagn2], [y_stagn1, y_stagn2],\n color='g', s=80, marker='o');", "_____no_output_____" ] ], [ [ "##### Challenge question", "_____no_output_____" ], [ "What is the radius of the circular cylinder created when a doublet of strength $\\kappa$ is added to a uniform flow $U_\\infty$ in the $x$-direction?", "_____no_output_____" ], [ "##### Challenge task", "_____no_output_____" ], [ "You have the streamfunction of the doublet in cylindrical coordinates above. Add the streamfunction of the free stream in those coordinates, and study it. You will see that $\\psi=0$ at $r=a$ for all values of $\\theta$. The line $\\psi=0$ represents the circular cylinder of radius $a$. Now write the velocity components in cylindrical coordinates, find the speed of the flow at the surface. What does this tell you?", "_____no_output_____" ], [ "### Bernoulli's equation and the pressure coefficient", "_____no_output_____" ], [ "A very useful measurement of a flow around a body is the *coefficient of pressure* $C_p$. To evaluate the pressure coefficient, we apply *Bernoulli's equation* for ideal flow, which says that along a streamline we can apply the following between two points:\n\n$$p_\\infty + \\frac{1}{2}\\rho U_\\infty^2 = p + \\frac{1}{2}\\rho U^2$$\n\nWe define the pressure coefficient as the ratio between the pressure difference with the free stream, and the dynamic pressure:\n\n$$C_p = \\frac{p-p_\\infty}{\\frac{1}{2}\\rho U_\\infty^2}$$\n\ni.e.,\n\n$$C_p = 1 - \\left(\\frac{U}{U_\\infty}\\right)^2$$\n\nIn an incompressible flow, $C_p=1$ at a stagnation point. 
Let's plot the pressure coefficient in the whole domain.", "_____no_output_____" ] ], [ [ "# compute the pressure coefficient field\ncp = 1.0 - (u**2 + v**2) / u_inf**2\n\n# plot the pressure coefficient field\nwidth = 10\nheight = (y_end - y_start) / (x_end - x_start) * width\npyplot.figure(figsize=(1.1 * width, height))\npyplot.xlabel('x', fontsize=16)\npyplot.ylabel('y', fontsize=16)\npyplot.xlim(x_start, x_end)\npyplot.ylim(y_start, y_end)\ncontf = pyplot.contourf(X, Y, cp,\n levels=numpy.linspace(-2.0, 1.0, 100), extend='both')\ncbar = pyplot.colorbar(contf)\ncbar.set_label('$C_p$', fontsize=16)\ncbar.set_ticks([-2.0, -1.0, 0.0, 1.0])\npyplot.scatter(x_doublet, y_doublet,\n color='#CD2305', s=80, marker='o')\npyplot.contour(X,Y,psi,\n levels=[0.], colors='#CD2305', linewidths=2, linestyles='solid')\npyplot.scatter([x_stagn1, x_stagn2], [y_stagn1, y_stagn2],\n color='g', s=80, marker='o');", "_____no_output_____" ] ], [ [ "##### Challenge task", "_____no_output_____" ], [ "Show that the pressure coefficient distribution on the surface of the circular cylinder is given by\n\n$$C_p = 1-4\\sin^2\\theta$$\n\nand plot the coefficient of pressure versus the angle.", "_____no_output_____" ], [ "##### Think", "_____no_output_____" ], [ "Don't you find it a bit fishy that the pressure coefficient (and the surface distribution of pressure) is symmetric about the vertical axis?\n\nThat means that the pressure in the front of the cylinder is the same as the pressure in the *back* of the cylinder. In turn, this means that the horizontal components of forces are zero.\n\nWe know that, even at very low Reynolds number (creeping flow), there *is* in fact a drag force. The theory is unable to reflect that experimentally observed fact! This discrepancy is known as *d'Alembert's paradox*. \n\nHere's how creeping flow around a cylinder *really* looks like:", "_____no_output_____" ] ], [ [ "from IPython.display import YouTubeVideo\nYouTubeVideo('Ekd8czwELOc')", "_____no_output_____" ] ], [ [ "If you look carefully, there is a slight asymmetry in the flow pattern. Can you explain it? What is the consequence of that?\n\nHere's a famous visualization of actual flow around a cylinder at a Reynolds number of 1.54. This image was obtained by S. Taneda and it appears in the \"Album of Fluid Motion\", by Milton Van Dyke. A treasure of a book.\n\n<center><img src=\"resources/Cylinder-Re=1dot54.png\"></center>", "_____no_output_____" ], [ "---", "_____no_output_____" ] ], [ [ "Please ignore the cell below. It just loads our style for the notebook.", "_____no_output_____" ] ], [ [ "from IPython.core.display import HTML\ndef css_styling(filepath):\n styles = open(filepath, 'r').read()\n return HTML(styles)\ncss_styling('../styles/custom.css')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "raw", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "raw" ], [ "code" ] ]
d02054182c741f31b3c1fb1e9f7c693fbc9a0294
108,720
ipynb
Jupyter Notebook
Scr/trainning/.ipynb_checkpoints/Untitled-checkpoint.ipynb
ale-telefonica/market
bf086065ee13d06981bee212c043ba308c1261e8
[ "Apache-2.0" ]
null
null
null
Scr/trainning/.ipynb_checkpoints/Untitled-checkpoint.ipynb
ale-telefonica/market
bf086065ee13d06981bee212c043ba308c1261e8
[ "Apache-2.0" ]
null
null
null
Scr/trainning/.ipynb_checkpoints/Untitled-checkpoint.ipynb
ale-telefonica/market
bf086065ee13d06981bee212c043ba308c1261e8
[ "Apache-2.0" ]
null
null
null
43.732904
1,616
0.438208
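The notebook in the row that follows unpacks a LinearSVC, a TF-IDF vectorizer and a label encoder from a single tar.gz archive via its `load_ml_model` helper. For context, a hypothetical counterpart that writes such an archive (this is not code from the repository) might look like this:

```python
# Hypothetical counterpart (not in the repository) to the load_ml_model helper
# used by the notebook below: bundle a fitted classifier, vectorizer and label
# encoder into one gzipped tar archive with the file names that helper expects.
import io
import pickle
import tarfile

def save_ml_model(path, model_name, clf, vectorizer, encoder, data_type="stopw"):
    archive = f"{path}/{model_name}_{data_type}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        members = [(f"{model_name}.pickle", clf),
                   ("vectorizer.pickle", vectorizer),
                   ("encoder.pickle", encoder)]
        for name, obj in members:
            payload = pickle.dumps(obj)
            info = tarfile.TarInfo(name=name)
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))
```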
[ [ [ "import MySQLdb\nfrom sklearn.svm import LinearSVC\nfrom tensorflow import keras\nfrom keras.models import load_model\nimport tensorflow as tf\nfrom random import seed\nimport pandas as pd\nimport numpy as np\nimport re\nfrom re import sub\nimport os\nimport string\nimport tempfile\nimport pickle\nimport tarfile\nfrom unidecode import unidecode\nimport nltk\nfrom nltk.corpus import stopwords\nfrom keras.callbacks import EarlyStopping, ReduceLROnPlateau", "_____no_output_____" ], [ "path = \".\"", "_____no_output_____" ], [ "username = \"remote_root\"\npassword = \"Faltan_4Ks\"\nhost = \"ci-oand-apps-02.hi.inet\"\ndb = \"alejandro_test_db\"\nscheme = \"MARKETv2\"", "_____no_output_____" ], [ "query = f\"Select * from {scheme}\"\nconn = MySQLdb.connect(host=host, user=username, passwd=password, db=db)\ntry:\n cursor = conn.cursor()\n cursor.execute(f\"describe {scheme}\")\n columns_tuple = cursor.fetchall()\n columns = [i[0] for i in columns_tuple]\n cursor.execute(query)\n results = cursor.fetchall()\nexcept Exception as e:\n print(\"Exception occur:\", e)\nfinally:\n conn.close()", "_____no_output_____" ], [ "data = pd.DataFrame(columns=columns, data=results)\ndata.head()", "_____no_output_____" ], [ "def text_to_word_list(text, stem=False, stopw=True):\n from nltk.stem import SnowballStemmer\n ''' \n Data Preprocess handler version 1.1\n Pre process and convert texts to a list of words \n '''\n text = unidecode(text)\n text = str(text)\n text = text.lower()\n\n # Clean the text\n text = re.sub(r\"<u.+>\", \"\", text) # Remove emojis\n text = re.sub(r\"[^A-Za-z0-9^,!?.\\/'+]\", \" \", text)\n text = re.sub(r\",\", \" \", text)\n text = re.sub(r\"\\.\", \" \", text)\n text = re.sub(r\"!\", \" ! \", text)\n text = re.sub(r\"\\?\", \" ? \", text)\n text = re.sub(r\"'\", \" \", text)\n text = re.sub(r\":\", \" : \", text)\n \n text = re.sub(r\"\\s{2,}\", \" \", text)\n\n text = text.split()\n if stopw:\n # Remove stopw\n stopw = stopwords.words(\"spanish\")\n stopw.remove(\"no\")\n text = [word for word in text if word not in stopw and len(word) > 1]\n \n if stem:\n stemmer = SnowballStemmer(\"spanish\")\n text = [stemmer.stem(word) for word in text]\n\n text = \" \".join(text)\n\n return text ", "_____no_output_____" ], [ "def clean_dataset(df):\n# df[\"Review.Last.Update.Date.and.Time\"] = df[\"Review.Last.Update.Date.and.Time\"].astype('datetime64')\n df[\"State\"] = True\n df.loc[df.Pais == \"Brasil\", \"State\"] = False\n df.loc[df.Comentario.isna(), \"State\"] = False\n df[\"Clean_text\"] = \" \"\n df.loc[df.State, \"Clean_text\"] = df[df.State == True].Comentario.apply(lambda x: text_to_word_list(x))\n df.loc[df.Clean_text.str.len() == 0, \"State\"] = False\n df.loc[df.Clean_text.isna(), \"State\"] = False\n \n df.Clean_text = df.Clean_text.str.replace(\"a{2,}\", \"a\")\n df.Clean_text = df.Clean_text.str.replace(\"e{3,}\", \"e\")\n df.Clean_text = df.Clean_text.str.replace(\"i{3,}\", \"i\")\n df.Clean_text = df.Clean_text.str.replace(\"o{3,}\", \"o\")\n df.Clean_text = df.Clean_text.str.replace(\"u{3,}\", \"u\")\n df.Clean_text = df.Clean_text.str.replace(\"y{2,}\", \"y\")\n \n df.Clean_text= df.Clean_text.str.replace(r\"\\bapp[s]?\\b\", \" aplicacion \")\n df.Clean_text= df.Clean_text.str.replace(\"^ns$\", \"no sirve\")\n df.Clean_text= df.Clean_text.str.replace(\"^ns .+\", \"no se\")\n df.Clean_text= df.Clean_text.str.replace(\"tlf\", \"telefono\")\n df.Clean_text= df.Clean_text.str.replace(\" si no \", \" sino \")\n df.Clean_text= df.Clean_text.str.replace(\" nose \", \" no se 
\")\n df.Clean_text= df.Clean_text.str.replace(\"extreno\", \"estreno\")\n df.Clean_text= df.Clean_text.str.replace(\"atravez\", \"a traves\")\n df.Clean_text= df.Clean_text.str.replace(\"root(\\w+)?\", \"root\")\n df.Clean_text= df.Clean_text.str.replace(\"(masomenos)|(mas menos)]\", \"mas_menos\")\n df.Clean_text= df.Clean_text.str.replace(\"tbn\", \"tambien\")\n df.Clean_text= df.Clean_text.str.replace(\"deverian\", \"deberian\")\n df.Clean_text= df.Clean_text.str.replace(\"malicima\", \"mala\")\n return df", "_____no_output_____" ], [ "seed(20)\ndf2 = clean_dataset(data)\ndf2.sample(20)", "_____no_output_____" ], [ "def load_ml_model(path, model_name, data_type=\"stopw\"):\n # Open tarfile\n tar = tarfile.open(mode=\"r:gz\", fileobj=open(os.path.join(path, f\"{model_name}_{data_type}.tar.gz\"), \"rb\"))\n\n for filename in tar.getnames():\n if filename == f\"{model_name}.pickle\":\n clf = pickle.loads(tar.extractfile(filename).read())\n if filename == \"vectorizer.pickle\":\n vectorizer = pickle.loads(tar.extractfile(filename).read())\n if filename == \"encoder.pickle\":\n encoder = pickle.loads(tar.extractfile(filename).read())\n\n return clf, vectorizer, encoder", "_____no_output_____" ], [ "clf, vectorizer, encoder = load_ml_model(path, \"linearSVM\")", "C:\\Users\\b.amh\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python38\\site-packages\\sklearn\\base.py:310: UserWarning: Trying to unpickle estimator LinearSVC from version 0.22.2.post1 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk.\n warnings.warn(\nC:\\Users\\b.amh\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python38\\site-packages\\sklearn\\base.py:310: UserWarning: Trying to unpickle estimator TfidfTransformer from version 0.22.2.post1 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk.\n warnings.warn(\nC:\\Users\\b.amh\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python38\\site-packages\\sklearn\\base.py:310: UserWarning: Trying to unpickle estimator TfidfVectorizer from version 0.22.2.post1 when using version 0.24.0. This might lead to breaking code or invalid results. Use at your own risk.\n warnings.warn(\nC:\\Users\\b.amh\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python38\\site-packages\\sklearn\\base.py:310: UserWarning: Trying to unpickle estimator LabelEncoder from version 0.22.2.post1 when using version 0.24.0. This might lead to breaking code or invalid results. 
Use at your own risk.\n warnings.warn(\n" ], [ "sample = df2.loc[df2.State, \"Clean_text\"].values\n# df2.head()\nsample_vect = vectorizer.transform(sample)\ncategorias = encoder.classes_[clf.predict(sample_vect)]", "_____no_output_____" ], [ "df2.loc[df2.State, \"Categorias\"] = categorias\ndf2.sample(20)", "_____no_output_____" ], [ "df2[\"Star.Rating\"] = df2[\"Star.Rating\"].astype(\"int\")", "_____no_output_____" ], [ "df2.loc[(df2.State==False)&(df2[\"Star.Rating\"]<3), \"Categorias\"] = \"Valoración negativa\"\ndf2.loc[(df2.State==False)&(df2[\"Star.Rating\"]>=3), \"Categorias\"] = \"Valoración positiva\"\ndf2.head()", "_____no_output_____" ], [ "df2[\"Review.Last.Update.Date.and.Time\"] = pd.to_datetime(df2[\"Review.Last.Update.Date.and.Time\"])", "_____no_output_____" ], [ "df2 = df2.drop(\"tipo\", axis=1)", "_____no_output_____" ], [ "df2.tipo_equivalencias.value_counts()\ndf2.tipo_equivalencias = df2.Categorias\ndf2.loc[df2.Categorias.str.contains(\"Actualización\"), \"tipo_equivalencias\"] = \"Actualizaciones\"\ndf2.loc[df2.Categorias.str.contains(\"Error de Reproducción\"), \"tipo_equivalencias\"] = \"Error de Reproducción\"", "_____no_output_____" ], [ "df2 = df2.rename(columns={\"Categorias\":\"tipo\"})\ndf_good = df2.loc[:,columns]", "_____no_output_____" ], [ "df_good.head()", "_____no_output_____" ], [ "scheme = \"MARKETv2\"\nfields = [\"%s\" for i in range(len(df_good.columns))]\nquery = f\"INSERT INTO {scheme} VALUES ({', '.join(fields)})\"\nvalues = df_good.values.tolist()\nvalues = [tuple(x) for x in values]", "_____no_output_____" ], [ "\n# query = f\"Select * from {scheme}\"\nconn = MySQLdb.connect(host=host, user=username, passwd=password, db=db)\ntry:\n cursor = conn.cursor()\n cursor.executemany(query, values)\n conn.commit()\nexcept Exception as e:\n print(\"Exception occur:\", e)\nfinally:\n conn.close()", "_____no_output_____" ], [ "data.sort_values(by=\"Review.Last.Update.Date.and.Time\", ascending=False).head(10)", "_____no_output_____" ], [ "texto = \"Mediocre\"\n# texto = text_to_word_list(texto)\nprint(texto)\n# v = vectorizer.transform(np.array([texto]))\n# encoder.inverse_transform(clf.predict(v))", "Mediocre\n" ], [ "df2.loc[(df2.State)&(df2.Comentario.str.contains(\"Muy mala aplicacion, \")), \"Comentario\"]", "_____no_output_____" ], [ "data.to_csv(\"data_db.csv\", index=False)", "_____no_output_____" ], [ "model = Word2VecKeras()\nmodel.load(path, filename=\"word2vec_kerasv2_base.tar.gz\")", "_____no_output_____" ], [ "sample = df2.loc[df2.State, \"Comentario\"].values[:200]", "_____no_output_____" ], [ "df2.loc[df2.State, [\"Comentarios\", \"Categoría\"]", "_____no_output_____" ], [ "# preds2.Predictions = \"Valoración negativa\"\npreds2 = preds2.rename(columns={\"Comments\":\"Comentarios\", \"Predictions\":\"Categorias\"})", "_____no_output_____" ], [ "model.retrain(data=preds.iloc[:200,:])", "_____no_output_____" ], [ "class Word2VecKeras(object):\n \"\"\"\n Wrapper class that combines word2vec with keras model in order to build strong text classifier.\n This class is adapted from the Word2VecKeras module that can be found on pypi, to fit the requirenments of our case\n \"\"\"\n\n def __init__(self, w2v_model=None):\n \"\"\"\n Initialize empty classifier\n \"\"\"\n self.w2v_size = None\n self.w2v_window = None\n self.w2v_min_count = None\n self.w2v_epochs = None\n self.label_encoder = None\n self.num_classes = None\n self.tokenizer = None\n self.k_max_sequence_len = None\n self.k_batch_size = None\n self.k_epochs = None\n self.k_lstm_neurons = 
None\n self.k_hidden_layer_neurons = None\n self.w2v_model = w2v_model\n self.k_model = None\n\n def train(self, x_train, y_train, corpus, x_test, y_test, w2v_size=300, w2v_window=5, w2v_min_count=1,\n w2v_epochs=100, k_max_sequence_len=350, k_batch_size=128, k_epochs=20, k_lstm_neurons=128,\n k_hidden_layer_neurons=(128, 64, 32), verbose=1):\n \"\"\"\n Train new Word2Vec & Keras model\n :param x_train: list of sentence for trainning\n :param y_train: list of categories for trainning\n :param x_test: list of sentence for testing\n :param y_test: list of categories for testing\n :param corpus: text corpus to create vocabulary\n :param w2v_size: Word2Vec vector size (embeddings dimensions)\n :param w2v_window: Word2Vec windows size\n :param w2v_min_count: Word2Vec min word count\n :param w2v_epochs: Word2Vec epochs number\n :param k_max_sequence_len: Max sequence length\n :param k_batch_size: Keras training batch size\n :param k_epochs: Keras epochs number\n :param k_lstm_neurons: neurons number for Keras LSTM layer\n :param k_hidden_layer_neurons: array of keras hidden layers\n :param verbose: Verbosity\n \"\"\"\n # Set variables\n self.w2v_size = w2v_size\n self.w2v_window = w2v_window\n self.w2v_min_count = w2v_min_count\n self.w2v_epochs = w2v_epochs\n self.k_max_sequence_len = k_max_sequence_len\n self.k_batch_size = k_batch_size\n self.k_epochs = k_epochs\n self.k_lstm_neurons = k_lstm_neurons\n self.k_hidden_layer_neurons = k_hidden_layer_neurons\n\n # split text in tokens\n # x_train = [gensim.utils.simple_preprocess(text) for text in x_train]\n # x_test = [gensim.utils.simple_preprocess(text) for text in x_test]\n corpus = [gensim.utils.simple_preprocess(corpus_text) for corpus_text in corpus]\n\n \n logging.info(\"Build & train Word2Vec model\")\n self.w2v_model = gensim.models.Word2Vec(min_count=self.w2v_min_count, window=self.w2v_window,\n size=self.w2v_size,\n workers=multiprocessing.cpu_count())\n self.w2v_model.build_vocab(corpus)\n self.w2v_model.train(corpus, total_examples=self.w2v_model.corpus_count, epochs=self.w2v_epochs)\n w2v_words = list(self.w2v_model.wv.vocab)\n logging.info(\"Vocabulary size: %i\" % len(w2v_words))\n logging.info(\"Word2Vec trained\")\n\n logging.info(\"Fit LabelEncoder\")\n self.label_encoder = LabelEncoder()\n y_train = self.label_encoder.fit_transform(y_train)\n self.num_classes = len(self.label_encoder.classes_)\n y_train = tf.keras.utils.to_categorical(y_train, self.num_classes)\n\n y_test = self.label_encoder.transform(y_test)\n y_test = tf.keras.utils.to_categorical(y_test, self.num_classes)\n\n logging.info(\"Fit Tokenizer\")\n self.tokenizer = Tokenizer()\n self.tokenizer.fit_on_texts(corpus)\n\n x_train = tf.keras.preprocessing.sequence.pad_sequences(self.tokenizer.texts_to_sequences(x_train),\n maxlen=self.k_max_sequence_len, padding=\"post\", truncating=\"post\")\n x_test = tf.keras.preprocessing.sequence.pad_sequences(self.tokenizer.texts_to_sequences(x_test),\n maxlen=self.k_max_sequence_len, padding=\"post\", truncating=\"post\")\n num_words = len(self.tokenizer.word_index) + 1\n logging.info(\"Number of unique words: %i\" % num_words)\n\n logging.info(\"Create Embedding matrix\")\n word_index = self.tokenizer.word_index\n vocab_size = len(word_index) + 1\n embedding_matrix = np.zeros((vocab_size, self.w2v_size))\n for word, idx in word_index.items():\n if word in w2v_words:\n embedding_vector = self.w2v_model.wv.get_vector(word)\n if embedding_vector is not None:\n embedding_matrix[idx] = self.w2v_model.wv[word]\n 
logging.info(\"Embedding matrix: %s\" % str(embedding_matrix.shape))\n\n logging.info(\"Build Keras model\")\n logging.info('x_train shape: %s' % str(x_train.shape))\n logging.info('y_train shape: %s' % str(y_train.shape))\n\n self.k_model = Sequential()\n self.k_model.add(Embedding(vocab_size,\n self.w2v_size,\n weights=[embedding_matrix],\n input_length=self.k_max_sequence_len,\n trainable=False, name=\"w2v_embeddings\"))\n self.k_model.add(Bidirectional(LSTM(self.k_lstm_neurons, dropout=0.5, return_sequences=True), name=\"Bidirectional_LSTM_1\"))\n self.k_model.add(Bidirectional(LSTM(self.k_lstm_neurons, dropout=0.5), name=\"Bidirectional_LSTM_2\"))\n for hidden_layer in self.k_hidden_layer_neurons:\n self.k_model.add(Dense(hidden_layer, activation='relu', name=\"dense_%s\"%hidden_layer))\n self.k_model.add(Dropout(0.2))\n if self.num_classes > 1:\n self.k_model.add(Dense(self.num_classes, activation='softmax', name=\"output_layer\"))\n else:\n self.k_model.add(Dense(self.num_classes, activation='sigmoid'))\n\n self.k_model.compile(loss='categorical_crossentropy' if self.num_classes > 1 else 'binary_crossentropy',\n optimizer=\"adam\",\n metrics=['accuracy'])\n logging.info(self.k_model.summary())\n print(tf.keras.utils.plot_model(self.k_model, show_shapes=True, rankdir=\"LR\"))\n\n # Callbacks\n early_stopping = EarlyStopping(monitor='val_accuracy', patience=6, verbose=0, mode='max', restore_best_weights=True)\n rop = ReduceLROnPlateau(monitor='val_accuracy', factor=0.1, patience=3, verbose=1, min_delta=1e-4, mode='max')\n callbacks = [early_stopping, rop]\n\n logging.info(\"Fit Keras model\")\n self.history = self.k_model.fit(x_train, y_train,\n batch_size=self.k_batch_size,\n epochs=self.k_epochs,\n callbacks=callbacks,\n verbose=verbose,\n validation_data=(x_test, y_test))\n\n logging.info(\"Done\")\n return self.history\n \n def preprocess(self, text):\n \"\"\"Not implemented\"\"\"\n pass\n \n def retrain(self, data=None, filename=\"new_data.csv\"):\n \"\"\"\n Method to train incrementally\n :param filename: CSV file that contains the new data to feed the algorithm. 
This CSV must contains as columns (\"Comentarios\", \"Categorías\")\n \"\"\"\n if data.empty:\n df = pd.read_csv(filename)\n else:\n df = data\n \n comments = df.Comentarios\n tokens = [self.text_to_word_list(text) for text in comments]\n labels = df.Categorias\n labels = self.label_encoder.fit_transform(labels)\n labels = tf.keras.utils.to_categorical(labels, self.num_classes)\n sequences = keras.preprocessing.sequence.pad_sequences(self.tokenizer.texts_to_sequences(tokens),\n maxlen=self.k_max_sequence_len, padding=\"post\", truncating=\"post\")\n \n early_stopping = EarlyStopping(monitor='val_accuracy', patience=6, verbose=0, mode='max', restore_best_weights=True)\n rop = ReduceLROnPlateau(monitor='val_accuracy', factor=0.1, patience=3, verbose=1, min_delta=1e-4, mode='max')\n callbacks = [early_stopping, rop]\n\n# logging.info(\"Fit Keras model\")\n history = self.k_model.fit(sequences, labels,\n batch_size=self.k_batch_size,\n epochs=10,\n callbacks=callbacks,\n verbose=1)\n\n def predict(self, texts: np.array, return_df=False):\n \"\"\"\n Predict and array of comments\n :param text: numpy array of shape (n_samples,)\n :param return_df: Whether return only predictions labels or a dataframe containing\n sentences and predicted labels\n \"\"\"\n comments = [self.text_to_word_list(text) for text in texts]\n\n sequences = keras.preprocessing.sequence.pad_sequences(self.tokenizer.texts_to_sequences(comments),\n maxlen=self.k_max_sequence_len, padding=\"post\", truncating=\"post\")\n confidences = self.k_model.predict(sequences, verbose=1)\n\n preds = [self.label_encoder.classes_[np.argmax(c)] for c in confidences]\n if return_df:\n results = pd.DataFrame(data={\"Comments\": texts, \"Predictions\": preds})\n else:\n results = np.array(preds)\n\n return results\n\n def evaluate(self, x_test, y_test):\n \"\"\"\n Evaluate Model with several KPI\n :param x_test: Text to test\n :param y_test: labels for text\n :return: dictionary with KPIs\n \"\"\"\n result = {}\n results = []\n # Prepare test\n x_test = [self.text_to_word_list(text) for text in x_test]\n x_test = keras.preprocessing.sequence.pad_sequences(\n self.tokenizer.texts_to_sequences(x_test),\n maxlen=self.k_max_sequence_len, padding=\"post\", truncating=\"post\")\n\n # Predict\n confidences = self.k_model.predict(x_test, verbose=1)\n\n y_pred_1d = []\n\n for confidence in confidences:\n idx = np.argmax(confidence)\n y_pred_1d.append(self.label_encoder.classes_[idx])\n\n y_pred_bin = []\n for i in range(0, len(results)):\n y_pred_bin.append(1 if y_pred_1d[i] == y_test[i] else 0)\n\n # Classification report\n result[\"CLASSIFICATION_REPORT\"] = classification_report(y_test, y_pred_1d, output_dict=True)\n result[\"CLASSIFICATION_REPORT_STR\"] = classification_report(y_test, y_pred_1d)\n # Confusion matrix\n result[\"CONFUSION_MATRIX\"] = confusion_matrix(y_test, y_pred_1d)\n\n # Accuracy\n result[\"ACCURACY\"] = accuracy_score(y_test, y_pred_1d)\n\n return result\n\n def save(self, path=\"word2vec_keras.tar.gz\"):\n \"\"\"\n Save all models in pickles file\n :param path: path to save\n \"\"\"\n tokenizer_path = os.path.join(tempfile.gettempdir(), \"tokenizer.pkl\")\n label_encoder_path = os.path.join(tempfile.gettempdir(), \"label_encoder.pkl\")\n params_path = os.path.join(tempfile.gettempdir(), \"params.pkl\")\n keras_path = os.path.join(tempfile.gettempdir(), \"model.h5\")\n w2v_path = os.path.join(tempfile.gettempdir(), \"model.w2v\")\n\n # Dump pickle\n pickle.dump(self.tokenizer, open(tokenizer_path, \"wb\"))\n 
pickle.dump(self.label_encoder, open(label_encoder_path, \"wb\"))\n pickle.dump(self.__attributes__(), open(params_path, \"wb\"))\n pickle.dump(self.w2v_model, open(w2v_path, \"wb\"))\n self.k_model.save(keras_path)\n # self.w2v_model.save(w2v_path)\n\n # Create Tar file\n tar = tarfile.open(path, \"w:gz\")\n for name in [tokenizer_path, label_encoder_path, params_path, keras_path, w2v_path]:\n tar.add(name, arcname=os.path.basename(name))\n tar.close()\n\n # Remove temp file\n for name in [tokenizer_path, label_encoder_path, params_path, keras_path, w2v_path]:\n os.remove(name)\n\n def load(self, path, filename=\"word2vec_keras.tar.gz\"):\n \"\"\"\n Load all attributes from path\n :param path: tar.gz dump\n \"\"\"\n # Open tarfile\n tar = tarfile.open(mode=\"r:gz\", fileobj=open(os.path.join(path, filename), \"rb\"))\n \n # Extract keras model\n temp_dir = tempfile.gettempdir()\n tar.extract(\"model.h5\", temp_dir)\n self.k_model = load_model(os.path.join(temp_dir, \"model.h5\"))\n os.remove(os.path.join(temp_dir, \"model.h5\"))\n\n # Iterate over every member\n for filename in tar.getnames():\n if filename == \"model.w2v\":\n self.w2v_model = pickle.loads(tar.extractfile(filename).read())\n if filename == \"tokenizer.pkl\":\n self.tokenizer = pickle.loads(tar.extractfile(filename).read())\n if filename == \"label_encoder.pkl\":\n self.label_encoder = pickle.loads(tar.extractfile(filename).read())\n if filename == \"params.pkl\":\n params = pickle.loads(tar.extractfile(filename).read())\n for k, v in params.items():\n self.__setattr__(k, v)\n\n def text_to_word_list(self, text, stem=False, stopw=False):\n ''' Pre process and convert texts to a list of words \n '''\n text = unidecode(text)\n text = str(text)\n text = text.lower()\n\n # Clean the text\n text = re.sub(r\"<u.+>\", \"\", text) # Remove emojis\n text = re.sub(r\"[^A-Za-z0-9^,!?.\\/'+]\", \" \", text)\n text = re.sub(r\",\", \" \", text)\n text = re.sub(r\"\\.\", \" \", text)\n text = re.sub(r\"!\", \" ! \", text)\n text = re.sub(r\"\\?\", \" ? \", text)\n text = re.sub(r\"'\", \" \", text)\n text = re.sub(r\":\", \" : \", text)\n \n text = re.sub(r\"\\s{2,}\", \" \", text)\n\n text = text.split()\n if stopw:\n # Remove stopw\n stopw = stopwords.words(\"spanish\")\n stopw.remove(\"no\")\n text = [word for word in text if word not in stopw and len(word) > 1]\n \n # if stem:\n # stemmer = SnowballStemmer(\"spanish\")\n # text = [stemmer.stem(word) for word in text]\n\n # text = \" \".join(text)\n\n return text \n\n def __attributes__(self):\n \"\"\"\n Attributes to dump\n :return: dictionary\n \"\"\"\n return {\n \"w2v_size\": self.w2v_size,\n \"w2v_window\": self.w2v_window,\n \"w2v_min_count\": self.w2v_min_count,\n \"w2v_epochs\": self.w2v_epochs,\n \"num_classes\": self.num_classes,\n \"k_max_sequence_len\": self.k_max_sequence_len,\n \"k_batch_size\": self.k_batch_size,\n \"k_epochs\": self.k_epochs,\n \"k_lstm_neurons\": self.k_lstm_neurons,\n \"k_hidden_layer_neurons\": self.k_hidden_layer_neurons,\n \"history\": self.history.history\n }\n\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d0207062d3a71fd632ea8360b4e9a2417560d6d8
3,670
ipynb
Jupyter Notebook
sample.ipynb
AI-Guru/MMM-JSB
2cf0faeedc402b4574f292712632855675ae4037
[ "Apache-2.0" ]
72
2021-05-10T11:12:24.000Z
2022-03-30T17:49:06.000Z
sample.ipynb
AI-Guru/MMM-JSB
2cf0faeedc402b4574f292712632855675ae4037
[ "Apache-2.0" ]
3
2021-06-12T10:10:44.000Z
2022-01-20T16:53:37.000Z
sample.ipynb
AI-Guru/MMM-JSB
2cf0faeedc402b4574f292712632855675ae4037
[ "Apache-2.0" ]
9
2021-05-10T12:21:38.000Z
2022-03-10T14:37:16.000Z
30.330579
125
0.618256
[ [ [ "# License.\nCopyright 2021 Tristan Behrens.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.", "_____no_output_____" ], [ "# Sampling using the trained model.", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nfrom transformers import GPT2LMHeadModel, TFGPT2LMHeadModel\nfrom transformers import PreTrainedTokenizerFast\nfrom tokenizers import Tokenizer\nimport os\nimport numpy as np\nfrom source.helpers.samplinghelpers import *\n\n# Where the checkpoint lives.\n# Note can be downloaded from: https://ai-guru.s3.eu-central-1.amazonaws.com/mmm-jsb/mmm_jsb_checkpoints.zip\ncheck_point_path = os.path.join(\"checkpoints\", \"20210411-1426\")\n\n# Load the validation data.\nvalidation_data_path = os.path.join(check_point_path, \"datasets\", \"jsb_mmmtrack\", \"token_sequences_valid.txt\")\n\n# Load the tokenizer.\ntokenizer_path = os.path.join(check_point_path, \"datasets\", \"jsb_mmmtrack\", \"tokenizer.json\")\ntokenizer = Tokenizer.from_file(tokenizer_path)\ntokenizer = PreTrainedTokenizerFast(tokenizer_file=tokenizer_path)\ntokenizer.add_special_tokens({'pad_token': '[PAD]'})\n\n# Load the model.\nmodel_path = os.path.join(check_point_path, \"training\", \"jsb_mmmtrack\", \"best_model\")\nmodel = GPT2LMHeadModel.from_pretrained(model_path)\n\nprint(\"Model loaded.\")", "_____no_output_____" ], [ "priming_sample, priming_sample_original = get_priming_token_sequence(\n validation_data_path,\n stop_on_track_end=0,\n stop_after_n_tokens=20,\n return_original=True\n)\n\ngenerated_sample = generate(model, tokenizer, priming_sample)\n\nprint(\"Original sample\")\nrender_token_sequence(priming_sample_original, use_program=False)\n\nprint(\"Reduced sample\")\nrender_token_sequence(priming_sample, use_program=False)\n\nprint(\"Reconstructed sample\")\nrender_token_sequence(generated_sample, use_program=False)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ] ]
d020748f3703be79b4f4363621d29ad666d16585
14,139
ipynb
Jupyter Notebook
examples/notebooks/generic_mle.ipynb
KishManani/statsmodels
300b6fba90c65c8e94b4f83e04f7ae1b0ceeac2e
[ "BSD-3-Clause" ]
6,931
2015-01-01T11:41:55.000Z
2022-03-31T17:03:24.000Z
examples/notebooks/generic_mle.ipynb
Ajisusanto136/statsmodels
e741f3b22302199121090822353f20d794a02148
[ "BSD-3-Clause" ]
6,137
2015-01-01T00:33:45.000Z
2022-03-31T22:53:17.000Z
examples/notebooks/generic_mle.ipynb
Ajisusanto136/statsmodels
e741f3b22302199121090822353f20d794a02148
[ "BSD-3-Clause" ]
2,608
2015-01-02T21:32:31.000Z
2022-03-31T07:38:30.000Z
26.086716
624
0.557394
[ [ [ "# Maximum Likelihood Estimation (Generic models)", "_____no_output_____" ], [ "This tutorial explains how to quickly implement new maximum likelihood models in `statsmodels`. We give two examples: \n\n1. Probit model for binary dependent variables\n2. Negative binomial model for count data\n\nThe `GenericLikelihoodModel` class eases the process by providing tools such as automatic numeric differentiation and a unified interface to ``scipy`` optimization functions. Using ``statsmodels``, users can fit new MLE models simply by \"plugging-in\" a log-likelihood function. ", "_____no_output_____" ], [ "## Example 1: Probit model", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom scipy import stats\nimport statsmodels.api as sm\nfrom statsmodels.base.model import GenericLikelihoodModel", "_____no_output_____" ] ], [ [ "The ``Spector`` dataset is distributed with ``statsmodels``. You can access a vector of values for the dependent variable (``endog``) and a matrix of regressors (``exog``) like this:", "_____no_output_____" ] ], [ [ "data = sm.datasets.spector.load_pandas()\nexog = data.exog\nendog = data.endog\nprint(sm.datasets.spector.NOTE)\nprint(data.exog.head())", "_____no_output_____" ] ], [ [ "Them, we add a constant to the matrix of regressors:", "_____no_output_____" ] ], [ [ "exog = sm.add_constant(exog, prepend=True)", "_____no_output_____" ] ], [ [ "To create your own Likelihood Model, you simply need to overwrite the loglike method.", "_____no_output_____" ] ], [ [ "class MyProbit(GenericLikelihoodModel):\n def loglike(self, params):\n exog = self.exog\n endog = self.endog\n q = 2 * endog - 1\n return stats.norm.logcdf(q*np.dot(exog, params)).sum()", "_____no_output_____" ] ], [ [ "Estimate the model and print a summary:", "_____no_output_____" ] ], [ [ "sm_probit_manual = MyProbit(endog, exog).fit()\nprint(sm_probit_manual.summary())", "_____no_output_____" ] ], [ [ "Compare your Probit implementation to ``statsmodels``' \"canned\" implementation:", "_____no_output_____" ] ], [ [ "sm_probit_canned = sm.Probit(endog, exog).fit()", "_____no_output_____" ], [ "print(sm_probit_canned.params)\nprint(sm_probit_manual.params)", "_____no_output_____" ], [ "print(sm_probit_canned.cov_params())\nprint(sm_probit_manual.cov_params())", "_____no_output_____" ] ], [ [ "Notice that the ``GenericMaximumLikelihood`` class provides automatic differentiation, so we did not have to provide Hessian or Score functions in order to calculate the covariance estimates.", "_____no_output_____" ], [ "\n\n## Example 2: Negative Binomial Regression for Count Data\n\nConsider a negative binomial regression model for count data with\nlog-likelihood (type NB-2) function expressed as:\n\n$$\n \\mathcal{L}(\\beta_j; y, \\alpha) = \\sum_{i=1}^n y_i ln \n \\left ( \\frac{\\alpha exp(X_i'\\beta)}{1+\\alpha exp(X_i'\\beta)} \\right ) -\n \\frac{1}{\\alpha} ln(1+\\alpha exp(X_i'\\beta)) + ln \\Gamma (y_i + 1/\\alpha) - ln \\Gamma (y_i+1) - ln \\Gamma (1/\\alpha)\n$$\n\nwith a matrix of regressors $X$, a vector of coefficients $\\beta$,\nand the negative binomial heterogeneity parameter $\\alpha$. 
\n\nUsing the ``nbinom`` distribution from ``scipy``, we can write this likelihood\nsimply as:\n", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom scipy.stats import nbinom", "_____no_output_____" ], [ "def _ll_nb2(y, X, beta, alph):\n mu = np.exp(np.dot(X, beta))\n size = 1/alph\n prob = size/(size+mu)\n ll = nbinom.logpmf(y, size, prob)\n return ll", "_____no_output_____" ] ], [ [ "### New Model Class\n\nWe create a new model class which inherits from ``GenericLikelihoodModel``:", "_____no_output_____" ] ], [ [ "from statsmodels.base.model import GenericLikelihoodModel", "_____no_output_____" ], [ "class NBin(GenericLikelihoodModel):\n def __init__(self, endog, exog, **kwds):\n super(NBin, self).__init__(endog, exog, **kwds)\n \n def nloglikeobs(self, params):\n alph = params[-1]\n beta = params[:-1]\n ll = _ll_nb2(self.endog, self.exog, beta, alph)\n return -ll \n \n def fit(self, start_params=None, maxiter=10000, maxfun=5000, **kwds):\n # we have one additional parameter and we need to add it for summary\n self.exog_names.append('alpha')\n if start_params == None:\n # Reasonable starting values\n start_params = np.append(np.zeros(self.exog.shape[1]), .5)\n # intercept\n start_params[-2] = np.log(self.endog.mean())\n return super(NBin, self).fit(start_params=start_params, \n maxiter=maxiter, maxfun=maxfun, \n **kwds) ", "_____no_output_____" ] ], [ [ "Two important things to notice: \n\n+ ``nloglikeobs``: This function should return one evaluation of the negative log-likelihood function per observation in your dataset (i.e. rows of the endog/X matrix). \n+ ``start_params``: A one-dimensional array of starting values needs to be provided. The size of this array determines the number of parameters that will be used in optimization.\n \nThat's it! You're done!\n\n### Usage Example\n\nThe [Medpar](https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/doc/COUNT/medpar.html)\ndataset is hosted in CSV format at the [Rdatasets repository](https://raw.githubusercontent.com/vincentarelbundock/Rdatasets). We use the ``read_csv``\nfunction from the [Pandas library](https://pandas.pydata.org) to load the data\nin memory. We then print the first few columns: \n", "_____no_output_____" ] ], [ [ "import statsmodels.api as sm", "_____no_output_____" ], [ "medpar = sm.datasets.get_rdataset(\"medpar\", \"COUNT\", cache=True).data\n\nmedpar.head()", "_____no_output_____" ] ], [ [ "The model we are interested in has a vector of non-negative integers as\ndependent variable (``los``), and 5 regressors: ``Intercept``, ``type2``,\n``type3``, ``hmo``, ``white``.\n\nFor estimation, we need to create two variables to hold our regressors and the outcome variable. 
These can be ndarrays or pandas objects.", "_____no_output_____" ] ], [ [ "y = medpar.los\nX = medpar[[\"type2\", \"type3\", \"hmo\", \"white\"]].copy()\nX[\"constant\"] = 1", "_____no_output_____" ] ], [ [ "Then, we fit the model and extract some information: ", "_____no_output_____" ] ], [ [ "mod = NBin(y, X)\nres = mod.fit()", "_____no_output_____" ] ], [ [ " Extract parameter estimates, standard errors, p-values, AIC, etc.:", "_____no_output_____" ] ], [ [ "print('Parameters: ', res.params)\nprint('Standard errors: ', res.bse)\nprint('P-values: ', res.pvalues)\nprint('AIC: ', res.aic)", "_____no_output_____" ] ], [ [ "As usual, you can obtain a full list of available information by typing\n``dir(res)``.\nWe can also look at the summary of the estimation results.", "_____no_output_____" ] ], [ [ "print(res.summary())", "_____no_output_____" ] ], [ [ "### Testing", "_____no_output_____" ], [ "We can check the results by using the statsmodels implementation of the Negative Binomial model, which uses the analytic score function and Hessian.", "_____no_output_____" ] ], [ [ "res_nbin = sm.NegativeBinomial(y, X).fit(disp=0)\nprint(res_nbin.summary())", "_____no_output_____" ], [ "print(res_nbin.params)", "_____no_output_____" ], [ "print(res_nbin.bse)", "_____no_output_____" ] ], [ [ "Or we could compare them to results obtained using the MASS implementation for R:\n\n url = 'https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/csv/COUNT/medpar.csv'\n medpar = read.csv(url)\n f = los~factor(type)+hmo+white\n \n library(MASS)\n mod = glm.nb(f, medpar)\n coef(summary(mod))\n Estimate Std. Error z value Pr(>|z|)\n (Intercept) 2.31027893 0.06744676 34.253370 3.885556e-257\n factor(type)2 0.22124898 0.05045746 4.384861 1.160597e-05\n factor(type)3 0.70615882 0.07599849 9.291748 1.517751e-20\n hmo -0.06795522 0.05321375 -1.277024 2.015939e-01\n white -0.12906544 0.06836272 -1.887951 5.903257e-02\n\n### Numerical precision \n\nThe ``statsmodels`` generic MLE and ``R`` parameter estimates agree up to the fourth decimal. The standard errors, however, agree only up to the second decimal. This discrepancy is the result of imprecision in our Hessian numerical estimates. In the current context, the difference between ``MASS`` and ``statsmodels`` standard error estimates is substantively irrelevant, but it highlights the fact that users who need very precise estimates may not always want to rely on default settings when using numerical derivatives. In such cases, it is better to use analytical derivatives with the ``LikelihoodModel`` class.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
d02080362be91dc8902c5894338ef02c51236dcc
83,590
ipynb
Jupyter Notebook
source/Mlos.Notebooks/SmartCacheCPP.ipynb
HeatherJia/MLOS
b0a350fa817cd23763e29b3295a866838900f476
[ "MIT" ]
null
null
null
source/Mlos.Notebooks/SmartCacheCPP.ipynb
HeatherJia/MLOS
b0a350fa817cd23763e29b3295a866838900f476
[ "MIT" ]
null
null
null
source/Mlos.Notebooks/SmartCacheCPP.ipynb
HeatherJia/MLOS
b0a350fa817cd23763e29b3295a866838900f476
[ "MIT" ]
null
null
null
162.94347
34,128
0.869338
[ [ [ "# Connecting MLOS to a C++ application\n\nThis notebook walks through connecting MLOS to a C++ application within a docker container.\nWe will start a docker container, and run an MLOS Agent within it. The MLOS Agent will start the actual application, and communicate with it via a shared memory channel.\nIn this example, the MLOS Agent controls the execution of the workloads on the application, and we will later connect to the agent to optimize the configuration of our application.\n\nThe application is a \"SmartCache\" similar to the one in the SmartCacheOptimization notebook, though with some more parameters to tune.\nThe source for this example is in the `source/Examples/SmartCache` folder.", "_____no_output_____" ], [ "## Building the application\n\nTo build and run the necessary components for this example you need to create and run a docker image.\nTo that end, open a separate terminal and go to the MLOS main folder. Within that folder, run the following commands:\n\n1. [Build the Docker image](https://microsoft.github.io/MLOS/documentation/01-Prerequisites/#build-the-docker-image) using the [`Dockerfile`](../../Dockerfile#mlos-github-tree-view) at the root of the repository.\n\n ```shell\n docker build --build-arg=UbuntuVersion=20.04 -t mlos/build:ubuntu-20.04 .\n ```\n\n2. [Run the Docker image](https://microsoft.github.io/MLOS/documentation/02-Build/#create-a-new-container-instance) you just built.\n\n ```shell\n docker run -it -v $PWD:/src/MLOS -p 127.0.0.1:50051:50051/tcp \\\n --name mlos-build mlos/build:ubuntu-20.04\n \n ```\n This will open a shell inside the docker container.\n We're also exposing port 50051 on the docker container to port 50051 of our host machine.\n This will allow us later to connect to the optimizer that runs inside the docker container.\n\n3. Inside the container, [build the compiled software](https://microsoft.github.io/MLOS/documentation/02-Build/#cli-make) with `make`:\n\n ```sh\n make dotnet-build cmake-build cmake-install\n ```", "_____no_output_____" ], [ "The relevant output will be at:\n\n- Mlos.Agent.Server:\n\n This file corresponds to the main entry point for MLOS, written in C#. You can find the source in\n `source/Mlos.Agent.Server/MlosAgentServer.cs` and the binary at\n `target/bin/Release/Mlos.Agent.Server.dll`\n\n- SmartCache:\n\n This is the C++ executable that implements the SmartCache and executes some workloads.\n You can find the source in `source/Examples/SmartCache/Main.cpp` and the binary at\n `target/bin/Release/SmartCache`\n\n- SmartCache.SettingsRegistry:\n\n This is the C# code that declares the configuration options for the SmartCache component, and defines the communication\n between the the MLOS Agent and the SmartCache component. 
You can find the source in\n `source/Examples/SmartCache/SmartCache.SettingsRegistry/AssemblyInitializer.cs` and the binary at\n `target/bin/Release/SmartCache.SettingsRegistry.dll`\n ", "_____no_output_____" ], [ "## Starting the MLOS Agent and executing the workloads:\n\nWithin the docker container, we can now tell the agent where the configuration options are stored, by setting the `MLOS_Settings_REGISTRY_PATH`.\nThen, we can run the MLOS Agent, which will in turn run the SmartCache executable.\n```sh\nexport MLOS_SETTINGS_REGISTRY_PATH=\"target/bin/Release\"\n\ntools/bin/dotnet target/bin/Release/Mlos.Agent.Server.dll \\\n --executable target/bin/Release/SmartCache\n```", "_____no_output_____" ], [ "The main loop of ``SmartCache`` contains the following:\n\n```cpp\n for (int observations = 0; observations < 100; observations++)\n {\n // run 100 observations\n std::cout << \"observations: \" << observations << std::endl;\n\n for (int i = 0; i < 20; i++)\n {\n // run a workload 20 times\n CyclicalWorkload(2048, smartCache);\n }\n\n bool isConfigReady = false;\n std::mutex waitForConfigMutex;\n std::condition_variable waitForConfigCondVar;\n\n // Setup a callback.\n //\n // OMMITTED\n // [...]\n\n // Send a request to obtain a new configuration.\n SmartCache::RequestNewConfigurationMessage msg = { 0 };\n mlosContext.SendTelemetryMessage(msg);\n // wait for MLOS Agent so send a message with a new configuration\n std::unique_lock<std::mutex> lock(waitForConfigMutex);\n while (!isConfigReady)\n {\n waitForConfigCondVar.wait(lock);\n }\n\n config.Update();\n smartCache.Reconfigure();\n }\n```", "_____no_output_____" ], [ "After each iteration, a TelemetryMessage is sent to the MLOS Agent, and the SmartCache blocks until it receives a new configuration to run the next workload.\nBy default, the agent is not connected to any optimizer, and will not change the original configuration, so the workload will just run uninterrupted.", "_____no_output_____" ], [ "## Starting an Optimizer\nWe can now also start an Optimizer service for the MLOS Agent to connect to so that we can actually optimize the parameters for this workload.\nAs the optimizer is running in a separate process, we need to create a new shell on the running docker container using the following command:\n\n```shell\ndocker exec -it mlos-build /bin/bash\n```\n\nWithin the container, we now install the Python optimizer service:\n```shell\npip install -e source/Mlos.Python/\n```\n\nAnd run it:\n```shell\nstart_optimizer_microservice launch --port 50051\n```\n", "_____no_output_____" ], [ "## Connecting the Agent to the Optimizer\nNow we can start the agent again, this time also pointing it to the optimizer:\n```sh\ntools/bin/dotnet target/bin/Release/Mlos.Agent.Server.dll \\\n --executable target/bin/Release/SmartCache \\\n --optimizer-uri http://localhost:50051\n```", "_____no_output_____" ], [ "This will run the workload again, this time using the optimizer to suggest better configurations. You should see output both in the terminal the agent is running in and in the terminal the OptimizerMicroservice is running in.", "_____no_output_____" ], [ "## Inspecting results\nAfter (or even while) the optimization is running, we can connect to the optimizer via another GRPC channel.\nThe optimizer is running within the docker container, but when we started docker, we exposed the port 50051 as the same port 50051 on the host machine (on which this notebook is running). 
So we can now connect to the optimizer within the docker container at `127.0.0.1:50051`.\nThis assumes this notebook runs in an environment with the `mlos` Python package installed ([see the documentation](https://microsoft.github.io/MLOS/documentation/01-Prerequisites/#python-quickstart)).", "_____no_output_____" ] ], [ [ "from mlos.Grpc.OptimizerMonitor import OptimizerMonitor\nimport grpc\n# create a grpc channel and instantiate the OptimizerMonitor\nchannel = grpc.insecure_channel('127.0.0.1:50051')\noptimizer_monitor = OptimizerMonitor(grpc_channel=channel)\noptimizer_monitor", "_____no_output_____" ], [ "# There should be one optimizer running in the docker container\n# corresponding to the C++ SmartCache optimization problem\n# An OptimizerMicroservice can run multiple optimizers, which would all be listed here\noptimizers = optimizer_monitor.get_existing_optimizers()\noptimizers", "_____no_output_____" ] ], [ [ "We can now get the observations exactly the same way as for the Python example in `SmartCacheOptimization.ipynb`", "_____no_output_____" ] ], [ [ "optimizer = optimizers[0]\nfeatures_df, objectives_df = optimizer.get_all_observations()", "_____no_output_____" ], [ "import pandas as pd\nfeatures, targets = optimizer.get_all_observations()\ndata = pd.concat([features, targets], axis=1)\ndata.to_json(\"CacheLatencyMainCPP.json\")\ndata", "_____no_output_____" ], [ "lru_data, mru_data = data.groupby('cache_implementation')\n\nimport matplotlib.pyplot as plt\nline_lru = lru_data[1].plot( y='PushLatency', label='LRU', marker='o', linestyle='none', alpha=.6,figsize=(16, 6))\nmru_data[1].plot( y='PushLatency', label='MRU', marker='o', linestyle='none', alpha=.6, ax=plt.gca(),figsize=(16, 6))\nplt.ylabel(\"Cache Latency\")\nplt.xlabel(\"Observations\")\nplt.legend()\nplt.savefig(\"Cache Latency&Observations-Main.png\")", "_____no_output_____" ], [ "lru_data, mru_data = data.groupby('cache_implementation')\n\nimport matplotlib.pyplot as plt\nline_lru = lru_data[1].plot(x='lru_cache_config.cache_size', y='PushLatency', label='LRU', marker='o', linestyle='none', alpha=.6,figsize=(16, 6))\nmru_data[1].plot(x='mru_cache_config.cache_size', y='PushLatency', label='MRU', marker='o', linestyle='none', alpha=.6, ax=plt.gca(),figsize=(16, 6))\nplt.ylabel(\"Cache Latency\")\nplt.xlabel(\"Cache Size\")\nplt.legend()\nplt.savefig(\"Cache Latency & Size - Main.png\")", "_____no_output_____" ] ], [ [ "# Going Further\n1. Instead of cache hit rate, use a metric based on runtime (e.g. latency, throughput, etc) as performance metric. Environment (context) sensitive metrics can also be measured (e.g. [time](https://bduvenhage.me/performance/2019/06/22/high-performance-timer.html). How does the signal from the runtime based metric compare to the application specific one (hit rate)? How consistent are the runtime results across multiple runs?\n2. Pick another widely used [cache replacement policy](https://en.wikipedia.org/wiki/Cache_replacement_policies) such as LFU and construct a synthetic workload on which you expect this strategy to work well. Implement the policy and workload as part of the SmartCache example, and add a new option to the ``SmartCache.SettingsRegistry\\AssemblyInitializer.cs``. Run the optimization again with your new workload. Does the optimizer find that your new policy performs best?", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ] ]
d020886c0ea73756d5fc79f8e34898ec59765b30
79,879
ipynb
Jupyter Notebook
RML_example_org.ipynb
sa3036/Radio_ML_571M
fd034d4a390eea991e399882a39597b9ee36252a
[ "MIT" ]
2
2020-02-23T07:26:02.000Z
2022-01-28T07:15:33.000Z
RML_example_org.ipynb
sa3036/Radio_ML_571M
fd034d4a390eea991e399882a39597b9ee36252a
[ "MIT" ]
null
null
null
RML_example_org.ipynb
sa3036/Radio_ML_571M
fd034d4a390eea991e399882a39597b9ee36252a
[ "MIT" ]
3
2020-02-29T21:32:19.000Z
2021-04-12T01:28:23.000Z
119.222388
23,888
0.851525
[ [ [ "#Download the dataset from opensig\nimport urllib.request\nurllib.request.urlretrieve('http://opendata.deepsig.io/datasets/2016.10/RML2016.10a.tar.bz2', 'RML2016.10a.tar.bz2')", "_____no_output_____" ], [ "#decompress the .bz2 file into .tar file\nimport sys\nimport os\nimport bz2\n\nzipfile = bz2.BZ2File('./RML2016.10a.tar.bz2') # open the file\ndata = zipfile.read() # get the decompressed data", "_____no_output_____" ], [ "#write the .tar file\nopen('./RML2016.10a.tar', 'wb').write(data) # write a uncompressed file", "_____no_output_____" ], [ "#extract the .tar file\nimport tarfile\nmy_tar = tarfile.open('./RML2016.10a.tar')\nmy_tar.extractall('./') # specify which folder to extract to\nmy_tar.close()", "_____no_output_____" ], [ "#extract the pickle file\nimport pickle\nimport numpy as np\nXd = pickle.load(open(\"RML2016.10a_dict.pkl\",'rb'),encoding=\"bytes\")\nsnrs,mods = map(lambda j: sorted(list(set(map(lambda x: x[j], Xd.keys())))), [1,0])\nX = [] \nlbl = []\nfor mod in mods:\n for snr in snrs:\n X.append(Xd[(mod,snr)])\n for i in range(Xd[(mod,snr)].shape[0]): lbl.append((mod,snr))\nX = np.vstack(X)", "_____no_output_____" ], [ "# Import all the things we need ---\n%matplotlib inline\nimport random\nimport tensorflow.keras.utils\nimport tensorflow.keras.models as models\nfrom tensorflow.keras.layers import Reshape,Dense,Dropout,Activation,Flatten\nfrom tensorflow.keras.layers import GaussianNoise\nfrom tensorflow.keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D\nfrom tensorflow.keras.regularizers import *\nfrom tensorflow.keras.optimizers import *\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport tensorflow.keras", "_____no_output_____" ], [ "# Partition the data\n# into training and test sets of the form we can train/test on \nnp.random.seed(2020)\nn_examples = X.shape[0]\nn_train = n_examples // 2\ntrain_idx = np.random.choice(range(0,n_examples), size=n_train, replace=False)\ntest_idx = list(set(range(0,n_examples))-set(train_idx))\nX_train = X[train_idx]\nX_test = X[test_idx]", "_____no_output_____" ], [ "#one-hot encoding the label\nfrom sklearn import preprocessing\nlb = preprocessing.LabelBinarizer()\nlb.fit(np.asarray(lbl)[:,0])\nprint(lb.classes_)\nlbl_encoded=lb.transform(np.asarray(lbl)[:,0])\ny_train=lbl_encoded[train_idx]\ny_test=lbl_encoded[test_idx]", "[b'8PSK' b'AM-DSB' b'AM-SSB' b'BPSK' b'CPFSK' b'GFSK' b'PAM4' b'QAM16'\n b'QAM64' b'QPSK' b'WBFM']\n" ], [ "in_shp = list(X_train.shape[1:])\nprint(X_train.shape, in_shp)\nclasses = mods", "(110000, 2, 128) [2, 128]\n" ], [ "dr = 0.5 # dropout rate (%)\nmodel = models.Sequential()\nmodel.add(Reshape([1]+in_shp, input_shape=in_shp))\nmodel.add(ZeroPadding2D((0, 2)))\nmodel.add(Convolution2D(256, 1, 3, activation=\"relu\", name=\"conv1\"))\nmodel.add(Dropout(dr))\nmodel.add(ZeroPadding2D((0, 2)))\nmodel.add(Convolution2D(80, 1, 3, activation=\"relu\", name=\"conv2\"))\nmodel.add(Dropout(dr))\nmodel.add(Flatten())\nmodel.add(Dense(256, activation='relu', name=\"dense1\"))\nmodel.add(Dropout(dr))\nmodel.add(Dense( len(classes), name=\"dense2\" ))\nmodel.add(Activation('softmax'))\nmodel.add(Reshape([len(classes)]))\nmodel.compile(loss='categorical_crossentropy', optimizer='adam')\nmodel.summary()", "Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nreshape (Reshape) (None, 1, 2, 128) 0 
\n_________________________________________________________________\nzero_padding2d (ZeroPadding2 (None, 1, 6, 128) 0 \n_________________________________________________________________\nconv1 (Conv2D) (None, 1, 2, 256) 33024 \n_________________________________________________________________\ndropout (Dropout) (None, 1, 2, 256) 0 \n_________________________________________________________________\nzero_padding2d_1 (ZeroPaddin (None, 1, 6, 256) 0 \n_________________________________________________________________\nconv2 (Conv2D) (None, 1, 2, 80) 20560 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 1, 2, 80) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 160) 0 \n_________________________________________________________________\ndense1 (Dense) (None, 256) 41216 \n_________________________________________________________________\ndropout_2 (Dropout) (None, 256) 0 \n_________________________________________________________________\ndense2 (Dense) (None, 11) 2827 \n_________________________________________________________________\nactivation (Activation) (None, 11) 0 \n_________________________________________________________________\nreshape_1 (Reshape) (None, 11) 0 \n=================================================================\nTotal params: 97,627\nTrainable params: 97,627\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "# Set up some params \nnb_epoch = 100 # number of epochs to train on\nbatch_size = 1024 # training batch size", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\nX_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=0.2)", "_____no_output_____" ], [ "# perform training ...\n# - call the main training loop in keras for our network+dataset\nfilepath = 'convmodrecnets_CNN2_0.5.wts.h5'\nimport time\nt_0=time.time()\n\nhistory = model.fit(X_train,\n y_train,\n batch_size=batch_size,\n epochs=nb_epoch,\n verbose=2,\n validation_data=(X_valid, y_valid),\n callbacks = [\n tensorflow.keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=True, mode='auto'),\n tensorflow.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, verbose=0, mode='auto')\n ])\ndelta_t=time.time()-t_0\nprint(delta_t)\n# we re-load the best weights once training is finished\nmodel.load_weights(filepath)", "Train on 88000 samples, validate on 22000 samples\nEpoch 1/100\n88000/88000 - 10s - loss: 2.2975 - val_loss: 2.2294\nEpoch 2/100\n88000/88000 - 8s - loss: 2.2226 - val_loss: 2.1866\nEpoch 3/100\n88000/88000 - 8s - loss: 2.1108 - val_loss: 1.9805\nEpoch 4/100\n88000/88000 - 8s - loss: 2.0016 - val_loss: 1.9234\nEpoch 5/100\n88000/88000 - 8s - loss: 1.9478 - val_loss: 1.8697\nEpoch 6/100\n88000/88000 - 8s - loss: 1.9098 - val_loss: 1.8386\nEpoch 7/100\n88000/88000 - 8s - loss: 1.8859 - val_loss: 1.8196\nEpoch 8/100\n88000/88000 - 8s - loss: 1.8708 - val_loss: 1.8064\nEpoch 9/100\n88000/88000 - 8s - loss: 1.8560 - val_loss: 1.7905\nEpoch 10/100\n88000/88000 - 8s - loss: 1.8472 - val_loss: 1.7863\nEpoch 11/100\n88000/88000 - 9s - loss: 1.8383 - val_loss: 1.7816\nEpoch 12/100\n88000/88000 - 8s - loss: 1.8316 - val_loss: 1.7707\nEpoch 13/100\n88000/88000 - 8s - loss: 1.8249 - val_loss: 1.7675\nEpoch 14/100\n88000/88000 - 8s - loss: 1.8197 - val_loss: 1.7691\nEpoch 15/100\n88000/88000 - 8s - loss: 1.8138 - val_loss: 1.7612\nEpoch 16/100\n88000/88000 - 9s - loss: 
1.8104 - val_loss: 1.7551\nEpoch 17/100\n88000/88000 - 8s - loss: 1.8039 - val_loss: 1.7546\nEpoch 18/100\n88000/88000 - 8s - loss: 1.8013 - val_loss: 1.7510\nEpoch 19/100\n88000/88000 - 8s - loss: 1.7968 - val_loss: 1.7515\nEpoch 20/100\n88000/88000 - 8s - loss: 1.7908 - val_loss: 1.7471\nEpoch 21/100\n88000/88000 - 8s - loss: 1.7902 - val_loss: 1.7431\nEpoch 22/100\n88000/88000 - 8s - loss: 1.7853 - val_loss: 1.7404\nEpoch 23/100\n88000/88000 - 8s - loss: 1.7826 - val_loss: 1.7416\nEpoch 24/100\n88000/88000 - 8s - loss: 1.7841 - val_loss: 1.7414\nEpoch 25/100\n88000/88000 - 8s - loss: 1.7769 - val_loss: 1.7401\nEpoch 26/100\n88000/88000 - 8s - loss: 1.7731 - val_loss: 1.7434\nEpoch 27/100\n88000/88000 - 9s - loss: 1.7716 - val_loss: 1.7344\nEpoch 28/100\n88000/88000 - 9s - loss: 1.7688 - val_loss: 1.7330\nEpoch 29/100\n88000/88000 - 8s - loss: 1.7672 - val_loss: 1.7374\nEpoch 30/100\n88000/88000 - 8s - loss: 1.7651 - val_loss: 1.7340\nEpoch 31/100\n88000/88000 - 9s - loss: 1.7624 - val_loss: 1.7307\nEpoch 32/100\n88000/88000 - 8s - loss: 1.7593 - val_loss: 1.7306\nEpoch 33/100\n88000/88000 - 8s - loss: 1.7577 - val_loss: 1.7369\nEpoch 34/100\n88000/88000 - 9s - loss: 1.7569 - val_loss: 1.7303\nEpoch 35/100\n88000/88000 - 8s - loss: 1.7545 - val_loss: 1.7290\nEpoch 36/100\n88000/88000 - 8s - loss: 1.7518 - val_loss: 1.7303\nEpoch 37/100\n88000/88000 - 9s - loss: 1.7510 - val_loss: 1.7272\nEpoch 38/100\n88000/88000 - 8s - loss: 1.7495 - val_loss: 1.7271\nEpoch 39/100\n88000/88000 - 10s - loss: 1.7473 - val_loss: 1.7276\nEpoch 40/100\n88000/88000 - 9s - loss: 1.7461 - val_loss: 1.7314\nEpoch 41/100\n88000/88000 - 9s - loss: 1.7447 - val_loss: 1.7254\nEpoch 42/100\n88000/88000 - 9s - loss: 1.7418 - val_loss: 1.7256\nEpoch 43/100\n88000/88000 - 9s - loss: 1.7427 - val_loss: 1.7256\nEpoch 44/100\n88000/88000 - 9s - loss: 1.7420 - val_loss: 1.7231\nEpoch 45/100\n88000/88000 - 8s - loss: 1.7395 - val_loss: 1.7232\nEpoch 46/100\n88000/88000 - 9s - loss: 1.7379 - val_loss: 1.7245\nEpoch 47/100\n88000/88000 - 9s - loss: 1.7356 - val_loss: 1.7216\nEpoch 48/100\n88000/88000 - 8s - loss: 1.7340 - val_loss: 1.7278\nEpoch 49/100\n88000/88000 - 9s - loss: 1.7310 - val_loss: 1.7207\nEpoch 50/100\n88000/88000 - 8s - loss: 1.7328 - val_loss: 1.7227\nEpoch 51/100\n88000/88000 - 9s - loss: 1.7307 - val_loss: 1.7244\nEpoch 52/100\n88000/88000 - 9s - loss: 1.7297 - val_loss: 1.7208\nEpoch 53/100\n88000/88000 - 8s - loss: 1.7288 - val_loss: 1.7248\nEpoch 54/100\n88000/88000 - 8s - loss: 1.7242 - val_loss: 1.7253\n453.65301418304443\n" ], [ "# Show simple version of performance\nscore = model.evaluate(X_test, y_test, verbose=0, batch_size=batch_size)\nprint(score)", "1.7183428375937722\n" ], [ "# Show loss curves \nplt.figure()\nplt.title('Training performance')\nplt.plot(history.epoch, history.history['loss'], label='train loss+error')\nplt.plot(history.epoch, history.history['val_loss'], label='val_error')\nplt.legend()", "_____no_output_____" ], [ "def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues, labels=[]):\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(labels))\n plt.xticks(tick_marks, labels, rotation=45)\n plt.yticks(tick_marks, labels)\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')", "_____no_output_____" ], [ "# Plot confusion matrix\ntest_Y_hat = model.predict(X_test, batch_size=batch_size)\nconf = np.zeros([len(classes),len(classes)])\nconfnorm = 
np.zeros([len(classes),len(classes)])\nfor i in range(0,X_test.shape[0]):\n j = list(y_test[i,:]).index(1)\n k = int(np.argmax(test_Y_hat[i,:]))\n conf[j,k] = conf[j,k] + 1\nfor i in range(0,len(classes)):\n confnorm[i,:] = conf[i,:] / np.sum(conf[i,:])\nplot_confusion_matrix(confnorm, labels=classes)", "_____no_output_____" ], [ "# Get the test accuracy for different SNRs\nacc = {}\nacc_array=[]\n\nsnr_array=np.asarray(lbl)[:,1]\nlb_temp = preprocessing.LabelBinarizer()\nlb_temp.fit(snr_array)\ntemp_array=lb_temp.classes_\nsnr_label_array = []\n\n\nsnr_label_array.append(temp_array[6])\nsnr_label_array.append(temp_array[4])\nsnr_label_array.append(temp_array[3])\nsnr_label_array.append(temp_array[2])\nsnr_label_array.append(temp_array[1])\nsnr_label_array.append(temp_array[0])\nsnr_label_array.append(temp_array[9])\nsnr_label_array.append(temp_array[8])\nsnr_label_array.append(temp_array[7])\nsnr_label_array.append(temp_array[5])\nsnr_label_array.append(temp_array[10])\nsnr_label_array.append(temp_array[16])\nsnr_label_array.append(temp_array[17])\nsnr_label_array.append(temp_array[18])\nsnr_label_array.append(temp_array[19])\nsnr_label_array.append(temp_array[11])\nsnr_label_array.append(temp_array[12])\nsnr_label_array.append(temp_array[13])\nsnr_label_array.append(temp_array[14])\nsnr_label_array.append(temp_array[15])\n\n\n#print(snr_label_array)\ny_test_snr=snr_array[test_idx]\n\n\n\nfor snr in snr_label_array:\n test_X_i = X_test[np.where(y_test_snr==snr)]\n test_Y_i = y_test[np.where(y_test_snr==snr)]\n \n test_Y_i_hat = model.predict(test_X_i)\n conf = np.zeros([len(classes),len(classes)])\n confnorm = np.zeros([len(classes),len(classes)])\n for i in range(0,test_X_i.shape[0]):\n j = list(test_Y_i[i,:]).index(1)\n k = int(np.argmax(test_Y_i_hat[i,:]))\n conf[j,k] = conf[j,k] + 1\n for i in range(0,len(classes)):\n confnorm[i,:] = conf[i,:] / np.sum(conf[i,:])\n \n #plt.figure()\n #plot_confusion_matrix(confnorm, labels=classes, title=\"ConvNet Confusion Matrix (SNR=%d)\"%(snr))\n \n cor = np.sum(np.diag(conf))\n ncor = np.sum(conf) - cor\n print(\"Overall Accuracy: \", cor / (cor+ncor),\"for SNR\",snr)\n acc[snr] = 1.0*cor/(cor+ncor)\n acc_array.append(1.0*cor/(cor+ncor))\n\nprint(\"Random Guess Accuracy:\",1/11)", "Overall Accuracy: 0.08926005747126436 for SNR b'-20'\nOverall Accuracy: 0.09472531483847417 for SNR b'-18'\nOverall Accuracy: 0.09718716191849983 for SNR b'-16'\nOverall Accuracy: 0.10534016093635698 for SNR b'-14'\nOverall Accuracy: 0.13100983020554066 for SNR b'-12'\nOverall Accuracy: 0.19256943167187787 for SNR b'-10'\nOverall Accuracy: 0.24149408284023668 for SNR b'-8'\nOverall Accuracy: 0.3078750228393934 for SNR b'-6'\nOverall Accuracy: 0.42667622803872357 for SNR b'-4'\nOverall Accuracy: 0.46278140885984026 for SNR b'-2'\nOverall Accuracy: 0.4788680632120544 for SNR b'0'\nOverall Accuracy: 0.4858490566037736 for SNR b'2'\nOverall Accuracy: 0.4775846294602013 for SNR b'4'\nOverall Accuracy: 0.49110254999082736 for SNR b'6'\nOverall Accuracy: 0.48256023013304566 for SNR b'8'\nOverall Accuracy: 0.4739277451503826 for SNR b'10'\nOverall Accuracy: 0.4846167849990898 for SNR b'12'\nOverall Accuracy: 0.494631483166515 for SNR b'14'\nOverall Accuracy: 0.4829535095715588 for SNR b'16'\nOverall Accuracy: 0.4878138847858198 for SNR b'18'\nRandom Guess Accuracy: 0.09090909090909091\n" ], [ "# Show loss curves \nplt.figure()\nplt.title('Accuracy vs SNRs')\nplt.plot(np.arange(-20,20,2), acc_array)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d0208ea794343d2d6cac264eb0aadfe16c420f37
2,952
ipynb
Jupyter Notebook
100days/day 03 - next permutation.ipynb
gopala-kr/ds-notebooks
bc35430ecdd851f2ceab8f2437eec4d77cb59423
[ "MIT" ]
5
2018-05-09T04:02:04.000Z
2021-02-21T19:27:56.000Z
100days/day 03 - next permutation.ipynb
gopala-kr/ds-notebooks
bc35430ecdd851f2ceab8f2437eec4d77cb59423
[ "MIT" ]
null
null
null
100days/day 03 - next permutation.ipynb
gopala-kr/ds-notebooks
bc35430ecdd851f2ceab8f2437eec4d77cb59423
[ "MIT" ]
5
2018-02-23T22:08:28.000Z
2020-08-19T08:31:47.000Z
20.081633
64
0.392954
[ [ [ "## algorithm", "_____no_output_____" ] ], [ [ "def permute(values):\n n = len(values)\n \n # i: position of pivot\n for i in reversed(range(n - 1)):\n if values[i] < values[i + 1]:\n break\n else:\n # very last permutation\n values[:] = reversed(values[:])\n return values\n \n # j: position of the next candidate\n for j in reversed(range(i, n)):\n if values[i] < values[j]:\n # swap pivot and reverse the tail\n values[i], values[j] = values[j], values[i]\n values[i + 1:] = reversed(values[i + 1:])\n break\n \n return values", "_____no_output_____" ] ], [ [ "## run", "_____no_output_____" ] ], [ [ "x = [4, 3, 2, 1]\nfor i in range(25):\n print(permute(x))", "[1, 2, 3, 4]\n[1, 2, 4, 3]\n[1, 3, 2, 4]\n[1, 3, 4, 2]\n[1, 4, 2, 3]\n[1, 4, 3, 2]\n[2, 1, 3, 4]\n[2, 1, 4, 3]\n[2, 3, 1, 4]\n[2, 3, 4, 1]\n[2, 4, 1, 3]\n[2, 4, 3, 1]\n[3, 1, 2, 4]\n[3, 1, 4, 2]\n[3, 2, 1, 4]\n[3, 2, 4, 1]\n[3, 4, 1, 2]\n[3, 4, 2, 1]\n[4, 1, 2, 3]\n[4, 1, 3, 2]\n[4, 2, 1, 3]\n[4, 2, 3, 1]\n[4, 3, 1, 2]\n[4, 3, 2, 1]\n[1, 2, 3, 4]\n" ], [ "permute(list('FADE'))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
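For the next-permutation record above, one cheap correctness check is to walk through all permutations and compare against `itertools.permutations`, which emits permutations of a sorted input in lexicographic order. The sketch below re-implements the step as a standalone helper (`next_perm` is my name, not the notebook's):

```python
from itertools import permutations

def next_perm(values):
    """Return the next lexicographic permutation (wrapping to the smallest at the end)."""
    a = list(values)
    n = len(a)
    # rightmost pivot with a[i] < a[i + 1]; -1 means we are at the last permutation
    i = next((k for k in reversed(range(n - 1)) if a[k] < a[k + 1]), -1)
    if i < 0:
        return list(reversed(a))
    # rightmost element larger than the pivot, then swap and reverse the tail
    j = next(k for k in reversed(range(i, n)) if a[i] < a[k])
    a[i], a[j] = a[j], a[i]
    a[i + 1:] = reversed(a[i + 1:])
    return a

# verify against itertools on a small example
expected = [list(p) for p in permutations([1, 2, 3, 4])]  # already lexicographic
current = expected[0]
for want in expected[1:]:
    current = next_perm(current)
    assert current == want
print("matches itertools ordering")
```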
d020a55eca1e01c7f22e441e5ad6124ad968bc14
14,429
ipynb
Jupyter Notebook
fairness.ipynb
ravikirankb/machine-learning-tutorial
064937059ab7945d2c08ccdc839ca799f61bd1aa
[ "MIT" ]
null
null
null
fairness.ipynb
ravikirankb/machine-learning-tutorial
064937059ab7945d2c08ccdc839ca799f61bd1aa
[ "MIT" ]
null
null
null
fairness.ipynb
ravikirankb/machine-learning-tutorial
064937059ab7945d2c08ccdc839ca799f61bd1aa
[ "MIT" ]
null
null
null
14,429
14,429
0.722157
[ [ [ "### Fairness ###\n", "_____no_output_____" ], [ "##### This exercise we explore the concepts and techniques in fairness in machine learning #####\n<b> Through this exercise one can \n * Increase awareness of different types of biases that can occur\n * Explore feature data to identify potential sources of biases before training the model.\n * Evaluate model performance in subgroup rather than aggregate\n \n Dataset:\n We use the Adult census Income dataset commonly used in machine learning.\n \n Task is to predict if the person makes over $50,000 a year while performing different methodologies to ensure fairness\n</b>", "_____no_output_____" ] ], [ [ "### setup\n\n%tensorflow_version 2.x\nfrom __future__ import absolute_import, division, print_function, unicode_literals", "_____no_output_____" ], [ "## title Import revelant modules and install Facets\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\nfrom tensorflow.keras import layers\nfrom matplotlib import pyplot as plt\nfrom matplotlib import rcParams\nimport seaborn as sns\n\n# adjust the granularity of reporting. \npd.options.display.max_rows = 10\npd.options.display.float_format = \"{:.1f}\".format\n\nfrom google.colab import widgets\n# code for facets\nfrom IPython.core.display import display, HTML\nimport base64\n!pip install facets-overview==1.0.0\nfrom facets_overview.feature_statistics_generator import FeatureStatisticsGenerator", "_____no_output_____" ], [ "## load the adult data set.\n\nCOLUMNS = [\"age\", \"workclass\", \"fnlwgt\", \"education\", \"education_num\",\n \"marital_status\", \"occupation\", \"relationship\", \"race\", \"gender\",\n \"capital_gain\", \"capital_loss\", \"hours_per_week\", \"native_country\",\n \"income_bracket\"]\n\ntrain_csv = tf.keras.utils.get_file('adult.data', \n 'https://download.mlcc.google.com/mledu-datasets/adult_census_train.csv')\ntest_csv = tf.keras.utils.get_file('adult.data', \n 'https://download.mlcc.google.com/mledu-datasets/adult_census_test.csv')\n\ntrain_df = pd.read_csv(train_csv, names=COLUMNS, sep=r'\\s*,\\s*', \n engine='python', na_values=\"?\")\ntest_df = pd.read_csv(test_csv, names=COLUMNS, sep=r'\\s*,\\s*', skiprows=[0],\n engine='python', na_values=\"?\")", "_____no_output_____" ] ], [ [ "<b> Analysing the dataset with facets \n We analyse the dataset to identify any peculiarities before we train the model\n \n Here are some of the questions to ask before we can go ahead with the training\n * Are there missing feature values for a large number of observations?\n * Are there features that are missing that might affect other features?\n * Are there any unexpected feature values?\n * What signs of data skew do you see?\n</b>", "_____no_output_____" ], [ "<b> We use the Facets overview to analyze the distribution of values across the Adult dataset </b> ", "_____no_output_____" ] ], [ [ "## title Visualize the Data in Facets\nfsg = FeatureStatisticsGenerator()\ndataframes = [{'table': train_df, 'name': 'trainData'}]\ncensusProto = fsg.ProtoFromDataFrames(dataframes)\nprotostr = base64.b64encode(censusProto.SerializeToString()).decode(\"utf-8\")\n\nHTML_TEMPLATE = \"\"\"<script src=\"https://cdnjs.cloudflare.com/ajax/libs/webcomponentsjs/1.3.3/webcomponents-lite.js\"></script>\n <link rel=\"import\" href=\"https://raw.githubusercontent.com/PAIR-code/facets/1.0.0/facets-dist/facets-jupyter.html\">\n <facets-overview id=\"elem\"></facets-overview>\n <script>\n document.querySelector(\"#elem\").protoInput = \"{protostr}\";\n </script>\"\"\"\nhtml = 
HTML_TEMPLATE.format(protostr=protostr)\ndisplay(HTML(html))", "_____no_output_____" ] ], [ [ "<b> Task #1\n We can perform the fairness analysis on the visualization dataset in the faucet, click on the Show Raw Data button on the histograms and categorical features to see the distribution of values, and from that try to find if there are any missing features?, features missing that can affect other features? are there any unexpected feature values? are there any skews in the dataset?\n </b>", "_____no_output_____" ], [ "<b> Going further, using the knowledge of the Adult datset we can now construct a neural network to predict income by using the Tensor\nflow's Keras API.</b>", "_____no_output_____" ] ], [ [ "## first convert the pandas data frame of the adult datset to tensor flow arrays.\n\ndef pandas_to_numpy(data):\n # Drop empty rows.\n data = data.dropna(how=\"any\", axis=0)\n\n # Separate DataFrame into two Numpy arrays\n labels = np.array(data['income_bracket'] == \">50K\")\n features = data.drop('income_bracket', axis=1)\n features = {name:np.array(value) for name, value in features.items()}\n \n return features, labels", "_____no_output_____" ], [ "## map the data to columns that maps to the tensor flow using tf.feature_columns\n\n##title Create categorical feature columns\n\n# we use categorical_column_with_hash_bucket() for the occupation and native_country columns to help map\n# each feature string into an integer ID.\n# since we dont know the full range of values for this columns.\noccupation = tf.feature_column.categorical_column_with_hash_bucket(\n \"occupation\", hash_bucket_size=1000)\nnative_country = tf.feature_column.categorical_column_with_hash_bucket(\n \"native_country\", hash_bucket_size=1000)\n\n# since we know what the possible values for the other columns\n# we can be more explicit and use categorical_column_with_vocabulary_list()\ngender = tf.feature_column.categorical_column_with_vocabulary_list(\n \"gender\", [\"Female\", \"Male\"])\nrace = tf.feature_column.categorical_column_with_vocabulary_list(\n \"race\", [\n \"White\", \"Asian-Pac-Islander\", \"Amer-Indian-Eskimo\", \"Other\", \"Black\"\n ])\neducation = tf.feature_column.categorical_column_with_vocabulary_list(\n \"education\", [\n \"Bachelors\", \"HS-grad\", \"11th\", \"Masters\", \"9th\",\n \"Some-college\", \"Assoc-acdm\", \"Assoc-voc\", \"7th-8th\",\n \"Doctorate\", \"Prof-school\", \"5th-6th\", \"10th\", \"1st-4th\",\n \"Preschool\", \"12th\"\n ])\nmarital_status = tf.feature_column.categorical_column_with_vocabulary_list(\n \"marital_status\", [\n \"Married-civ-spouse\", \"Divorced\", \"Married-spouse-absent\",\n \"Never-married\", \"Separated\", \"Married-AF-spouse\", \"Widowed\"\n ])\nrelationship = tf.feature_column.categorical_column_with_vocabulary_list(\n \"relationship\", [\n \"Husband\", \"Not-in-family\", \"Wife\", \"Own-child\", \"Unmarried\",\n \"Other-relative\"\n ])\nworkclass = tf.feature_column.categorical_column_with_vocabulary_list(\n \"workclass\", [\n \"Self-emp-not-inc\", \"Private\", \"State-gov\", \"Federal-gov\",\n \"Local-gov\", \"?\", \"Self-emp-inc\", \"Without-pay\", \"Never-worked\"\n ])", "_____no_output_____" ], [ "# title Create numeric feature columns\n# For Numeric features, we can just call on feature_column.numeric_column()\n# to use its raw value instead of having to create a map between value and ID.\nage = tf.feature_column.numeric_column(\"age\")\nfnlwgt = tf.feature_column.numeric_column(\"fnlwgt\")\neducation_num = 
tf.feature_column.numeric_column(\"education_num\")\ncapital_gain = tf.feature_column.numeric_column(\"capital_gain\")\ncapital_loss = tf.feature_column.numeric_column(\"capital_loss\")\nhours_per_week = tf.feature_column.numeric_column(\"hours_per_week\")", "_____no_output_____" ], [ "## make age a categorical feature\nage_buckets = tf.feature_column.bucketized_column(\n age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])", "_____no_output_____" ], [ "# Define the model features.\n\n# we define the gender as a subgroup and can be used later for special handling.\n# subgroup is a group of individuals who share a common set of characteristics.\n\n# List of variables, with special handling for gender subgroup.\nvariables = [native_country, education, occupation, workclass, \n relationship, age_buckets]\nsubgroup_variables = [gender]\nfeature_columns = variables + subgroup_variables", "_____no_output_____" ] ], [ [ "<b> We can now train a neural network based on the features which we derived earlier, we use a feed-forward neural network with\ntwo hidden layers.\nWe first convert our high dimensional categorical features into a real-valued vector, which we call an embedded vector.\nWe use 'gender' for filtering the test for subgroup evaluations.\n</b>", "_____no_output_____" ] ], [ [ "deep_columns = [\n tf.feature_column.indicator_column(workclass),\n tf.feature_column.indicator_column(education),\n tf.feature_column.indicator_column(age_buckets),\n tf.feature_column.indicator_column(relationship),\n tf.feature_column.embedding_column(native_country, dimension=8),\n tf.feature_column.embedding_column(occupation, dimension=8),\n]", "_____no_output_____" ], [ "## define Deep Neural Net Model\n\n# Parameters from form fill-ins\nHIDDEN_UNITS_LAYER_01 = 128 #@param\nHIDDEN_UNITS_LAYER_02 = 64 #@param\nLEARNING_RATE = 0.1 #@param\nL1_REGULARIZATION_STRENGTH = 0.001 #@param\nL2_REGULARIZATION_STRENGTH = 0.001 #@param\n\nRANDOM_SEED = 512\ntf.random.set_seed(RANDOM_SEED)\n\n# List of built-in metrics that we'll need to evaluate performance.\nMETRICS = [\n tf.keras.metrics.TruePositives(name='tp'),\n tf.keras.metrics.FalsePositives(name='fp'),\n tf.keras.metrics.TrueNegatives(name='tn'),\n tf.keras.metrics.FalseNegatives(name='fn'), \n tf.keras.metrics.BinaryAccuracy(name='accuracy'),\n tf.keras.metrics.Precision(name='precision'),\n tf.keras.metrics.Recall(name='recall'),\n tf.keras.metrics.AUC(name='auc'),\n]\n\nregularizer = tf.keras.regularizers.l1_l2(\n l1=L1_REGULARIZATION_STRENGTH, l2=L2_REGULARIZATION_STRENGTH)\n\nmodel = tf.keras.Sequential([\n layers.DenseFeatures(deep_columns),\n layers.Dense(\n HIDDEN_UNITS_LAYER_01, activation='relu', kernel_regularizer=regularizer),\n layers.Dense(\n HIDDEN_UNITS_LAYER_02, activation='relu', kernel_regularizer=regularizer),\n layers.Dense(\n 1, activation='sigmoid', kernel_regularizer=regularizer)\n])\n\nmodel.compile(optimizer=tf.keras.optimizers.Adagrad(LEARNING_RATE), \n loss=tf.keras.losses.BinaryCrossentropy(),\n metrics=METRICS)", "_____no_output_____" ], [ "## title Fit Deep Neural Net Model to the Adult Training Dataset\n\nEPOCHS = 10\nBATCH_SIZE = 1000\n\nfeatures, labels = pandas_to_numpy(train_df)\nmodel.fit(x=features, y=labels, epochs=EPOCHS, batch_size=BATCH_SIZE)\n\n## Evaluate Deep Neural Net Performance\n\nfeatures, labels = pandas_to_numpy(test_df)\nmodel.evaluate(x=features, y=labels);", "_____no_output_____" ] ], [ [ "#### Confusion Matrix ####\n<b> A confusion matrix is a gird which evaluates a models performance with 
predictions vs ground truth for your model and summarizes how often the model made the correct prediction and how often it made the wrong prediction. \n \n Let's start by creating a binary confusion matrix for our income-prediction model—binary because our label (income_bracket) has only two possible values (<50K or >50K). We'll define an income of >50K as our positive label, and an income of <50k as our negative label.\n \n The matrix represents four possible states\n * true positive: Model predicts >50K, and that is the ground truth.\n * true negative: Model predicts <50K, and that is the ground truth.\n * false positive: Model predicts >50K, and that contradicts reality.\n * false negative: Model predicts <50K, and that contradicts reality.", "_____no_output_____" ] ], [ [ "## Function to Visualize and plot the Binary Confusion Matrix\ndef plot_confusion_matrix(\n confusion_matrix, class_names, subgroup, figsize = (8,6)):\n \n df_cm = pd.DataFrame(\n confusion_matrix, index=class_names, columns=class_names, \n )\n\n rcParams.update({\n 'font.family':'sans-serif',\n 'font.sans-serif':['Liberation Sans'],\n })\n \n sns.set_context(\"notebook\", font_scale=1.25)\n\n fig = plt.figure(figsize=figsize)\n\n plt.title('Confusion Matrix for Performance Across ' + subgroup)\n\n # Combine the instance (numercial value) with its description\n strings = np.asarray([['True Positives', 'False Negatives'],\n ['False Positives', 'True Negatives']])\n labels = (np.asarray(\n [\"{0:g}\\n{1}\".format(value, string) for string, value in zip(\n strings.flatten(), confusion_matrix.flatten())])).reshape(2, 2)\n\n heatmap = sns.heatmap(df_cm, annot=labels, fmt=\"\", \n linewidths=2.0, cmap=sns.color_palette(\"GnBu_d\"));\n heatmap.yaxis.set_ticklabels(\n heatmap.yaxis.get_ticklabels(), rotation=0, ha='right')\n heatmap.xaxis.set_ticklabels(\n heatmap.xaxis.get_ticklabels(), rotation=45, ha='right')\n plt.ylabel('References')\n plt.xlabel('Predictions')\n return fig", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
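The fairness record above evaluates one confusion matrix per subgroup; the sketch below shows the same idea reduced to per-group false-positive and false-negative rates with scikit-learn. The arrays are synthetic stand-ins, not the actual Adult-census predictions:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                 # ground truth: income > 50K?
y_pred = (y_true + (rng.random(1000) < 0.15)) % 2      # predictions with roughly 15% flips
group = rng.choice(["Female", "Male"], size=1000)      # subgroup feature

for g in np.unique(group):
    mask = group == g
    tn, fp, fn, tp = confusion_matrix(y_true[mask], y_pred[mask], labels=[0, 1]).ravel()
    fpr = fp / (fp + tn)
    fnr = fn / (fn + tp)
    print(f"{g:>6}: n={mask.sum():4d}  FPR={fpr:.3f}  FNR={fnr:.3f}")
```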
d020bc2497c70729421b2fcb9fde0abe570c4d96
155,193
ipynb
Jupyter Notebook
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
bf201914392f3e0bb786f6c2724eff17df7e78f8
[ "Apache-2.0" ]
2
2019-08-18T02:43:25.000Z
2020-12-23T07:38:22.000Z
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
bf201914392f3e0bb786f6c2724eff17df7e78f8
[ "Apache-2.0" ]
null
null
null
object_detection_face_detector.ipynb
lvisdd/object_detection_tutorial
bf201914392f3e0bb786f6c2724eff17df7e78f8
[ "Apache-2.0" ]
1
2019-08-27T09:57:13.000Z
2019-08-27T09:57:13.000Z
204.201316
121,306
0.864672
[ [ [ "<a href=\"https://colab.research.google.com/github/lvisdd/object_detection_tutorial/blob/master/object_detection_face_detector.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "# restart (or reset) your virtual machine\n#!kill -9 -1", "_____no_output_____" ] ], [ [ "# [Tensorflow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection)", "_____no_output_____" ] ], [ [ "!git clone https://github.com/tensorflow/models.git", "Cloning into 'models'...\nremote: Enumerating objects: 18, done.\u001b[K\nremote: Counting objects: 100% (18/18), done.\u001b[K\nremote: Compressing objects: 100% (17/17), done.\u001b[K\nremote: Total 30176 (delta 7), reused 11 (delta 1), pack-reused 30158\u001b[K\nReceiving objects: 100% (30176/30176), 510.33 MiB | 15.16 MiB/s, done.\nResolving deltas: 100% (18883/18883), done.\nChecking out files: 100% (3061/3061), done.\n" ] ], [ [ "# COCO API installation", "_____no_output_____" ] ], [ [ "!git clone https://github.com/cocodataset/cocoapi.git\n%cd cocoapi/PythonAPI\n!make\n!cp -r pycocotools /content/models/research/", "Cloning into 'cocoapi'...\nremote: Enumerating objects: 959, done.\u001b[K\nremote: Total 959 (delta 0), reused 0 (delta 0), pack-reused 959\u001b[K\nReceiving objects: 100% (959/959), 11.69 MiB | 6.35 MiB/s, done.\nResolving deltas: 100% (571/571), done.\n/content/cocoapi/PythonAPI\npython setup.py build_ext --inplace\nrunning build_ext\ncythoning pycocotools/_mask.pyx to pycocotools/_mask.c\n/usr/local/lib/python3.6/dist-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /content/cocoapi/PythonAPI/pycocotools/_mask.pyx\n tree = Parsing.p_module(s, pxd, full_module_name)\nbuilding 'pycocotools._mask' extension\ncreating build\ncreating build/common\ncreating build/temp.linux-x86_64-3.6\ncreating build/temp.linux-x86_64-3.6/pycocotools\nx86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/numpy/core/include -I../common -I/usr/include/python3.6m -c ../common/maskApi.c -o build/temp.linux-x86_64-3.6/../common/maskApi.o -Wno-cpp -Wno-unused-function -std=c99\n\u001b[01m\u001b[K../common/maskApi.c:\u001b[m\u001b[K In function ‘\u001b[01m\u001b[KrleDecode\u001b[m\u001b[K’:\n\u001b[01m\u001b[K../common/maskApi.c:46:7:\u001b[m\u001b[K \u001b[01;35m\u001b[Kwarning: \u001b[m\u001b[Kthis ‘\u001b[01m\u001b[Kfor\u001b[m\u001b[K’ clause does not guard... 
[\u001b[01;35m\u001b[K-Wmisleading-indentation\u001b[m\u001b[K]\n \u001b[01;35m\u001b[Kfor\u001b[m\u001b[K( k=0; k<R[i].cnts[j]; k++ ) *(M++)=v; v=!v; }}\n \u001b[01;35m\u001b[K^~~\u001b[m\u001b[K\n\u001b[01m\u001b[K../common/maskApi.c:46:49:\u001b[m\u001b[K \u001b[01;36m\u001b[Knote: \u001b[m\u001b[K...this statement, but the latter is misleadingly indented as if it were guarded by the ‘\u001b[01m\u001b[Kfor\u001b[m\u001b[K’\n for( k=0; k<R[i].cnts[j]; k++ ) *(M++)=v; \u001b[01;36m\u001b[Kv\u001b[m\u001b[K=!v; }}\n \u001b[01;36m\u001b[K^\u001b[m\u001b[K\n\u001b[01m\u001b[K../common/maskApi.c:\u001b[m\u001b[K In function ‘\u001b[01m\u001b[KrleFrPoly\u001b[m\u001b[K’:\n\u001b[01m\u001b[K../common/maskApi.c:166:3:\u001b[m\u001b[K \u001b[01;35m\u001b[Kwarning: \u001b[m\u001b[Kthis ‘\u001b[01m\u001b[Kfor\u001b[m\u001b[K’ clause does not guard... [\u001b[01;35m\u001b[K-Wmisleading-indentation\u001b[m\u001b[K]\n \u001b[01;35m\u001b[Kfor\u001b[m\u001b[K(j=0; j<k; j++) x[j]=(int)(scale*xy[j*2+0]+.5); x[k]=x[0];\n \u001b[01;35m\u001b[K^~~\u001b[m\u001b[K\n\u001b[01m\u001b[K../common/maskApi.c:166:54:\u001b[m\u001b[K \u001b[01;36m\u001b[Knote: \u001b[m\u001b[K...this statement, but the latter is misleadingly indented as if it were guarded by the ‘\u001b[01m\u001b[Kfor\u001b[m\u001b[K’\n for(j=0; j<k; j++) x[j]=(int)(scale*xy[j*2+0]+.5); \u001b[01;36m\u001b[Kx\u001b[m\u001b[K[k]=x[0];\n \u001b[01;36m\u001b[K^\u001b[m\u001b[K\n\u001b[01m\u001b[K../common/maskApi.c:167:3:\u001b[m\u001b[K \u001b[01;35m\u001b[Kwarning: \u001b[m\u001b[Kthis ‘\u001b[01m\u001b[Kfor\u001b[m\u001b[K’ clause does not guard... [\u001b[01;35m\u001b[K-Wmisleading-indentation\u001b[m\u001b[K]\n \u001b[01;35m\u001b[Kfor\u001b[m\u001b[K(j=0; j<k; j++) y[j]=(int)(scale*xy[j*2+1]+.5); y[k]=y[0];\n \u001b[01;35m\u001b[K^~~\u001b[m\u001b[K\n\u001b[01m\u001b[K../common/maskApi.c:167:54:\u001b[m\u001b[K \u001b[01;36m\u001b[Knote: \u001b[m\u001b[K...this statement, but the latter is misleadingly indented as if it were guarded by the ‘\u001b[01m\u001b[Kfor\u001b[m\u001b[K’\n for(j=0; j<k; j++) y[j]=(int)(scale*xy[j*2+1]+.5); \u001b[01;36m\u001b[Ky\u001b[m\u001b[K[k]=y[0];\n \u001b[01;36m\u001b[K^\u001b[m\u001b[K\n\u001b[01m\u001b[K../common/maskApi.c:\u001b[m\u001b[K In function ‘\u001b[01m\u001b[KrleToString\u001b[m\u001b[K’:\n\u001b[01m\u001b[K../common/maskApi.c:212:7:\u001b[m\u001b[K \u001b[01;35m\u001b[Kwarning: \u001b[m\u001b[Kthis ‘\u001b[01m\u001b[Kif\u001b[m\u001b[K’ clause does not guard... [\u001b[01;35m\u001b[K-Wmisleading-indentation\u001b[m\u001b[K]\n \u001b[01;35m\u001b[Kif\u001b[m\u001b[K(more) c |= 0x20; c+=48; s[p++]=c;\n \u001b[01;35m\u001b[K^~\u001b[m\u001b[K\n\u001b[01m\u001b[K../common/maskApi.c:212:27:\u001b[m\u001b[K \u001b[01;36m\u001b[Knote: \u001b[m\u001b[K...this statement, but the latter is misleadingly indented as if it were guarded by the ‘\u001b[01m\u001b[Kif\u001b[m\u001b[K’\n if(more) c |= 0x20; \u001b[01;36m\u001b[Kc\u001b[m\u001b[K+=48; s[p++]=c;\n \u001b[01;36m\u001b[K^\u001b[m\u001b[K\n\u001b[01m\u001b[K../common/maskApi.c:\u001b[m\u001b[K In function ‘\u001b[01m\u001b[KrleFrString\u001b[m\u001b[K’:\n\u001b[01m\u001b[K../common/maskApi.c:220:3:\u001b[m\u001b[K \u001b[01;35m\u001b[Kwarning: \u001b[m\u001b[Kthis ‘\u001b[01m\u001b[Kwhile\u001b[m\u001b[K’ clause does not guard... 
[\u001b[01;35m\u001b[K-Wmisleading-indentation\u001b[m\u001b[K]\n \u001b[01;35m\u001b[Kwhile\u001b[m\u001b[K( s[m] ) m++; cnts=malloc(sizeof(uint)*m); m=0;\n \u001b[01;35m\u001b[K^~~~~\u001b[m\u001b[K\n\u001b[01m\u001b[K../common/maskApi.c:220:22:\u001b[m\u001b[K \u001b[01;36m\u001b[Knote: \u001b[m\u001b[K...this statement, but the latter is misleadingly indented as if it were guarded by the ‘\u001b[01m\u001b[Kwhile\u001b[m\u001b[K’\n while( s[m] ) m++; \u001b[01;36m\u001b[Kcnts\u001b[m\u001b[K=malloc(sizeof(uint)*m); m=0;\n \u001b[01;36m\u001b[K^~~~\u001b[m\u001b[K\n\u001b[01m\u001b[K../common/maskApi.c:228:5:\u001b[m\u001b[K \u001b[01;35m\u001b[Kwarning: \u001b[m\u001b[Kthis ‘\u001b[01m\u001b[Kif\u001b[m\u001b[K’ clause does not guard... [\u001b[01;35m\u001b[K-Wmisleading-indentation\u001b[m\u001b[K]\n \u001b[01;35m\u001b[Kif\u001b[m\u001b[K(m>2) x+=(long) cnts[m-2]; cnts[m++]=(uint) x;\n \u001b[01;35m\u001b[K^~\u001b[m\u001b[K\n\u001b[01m\u001b[K../common/maskApi.c:228:34:\u001b[m\u001b[K \u001b[01;36m\u001b[Knote: \u001b[m\u001b[K...this statement, but the latter is misleadingly indented as if it were guarded by the ‘\u001b[01m\u001b[Kif\u001b[m\u001b[K’\n if(m>2) x+=(long) cnts[m-2]; \u001b[01;36m\u001b[Kcnts\u001b[m\u001b[K[m++]=(uint) x;\n \u001b[01;36m\u001b[K^~~~\u001b[m\u001b[K\n\u001b[01m\u001b[K../common/maskApi.c:\u001b[m\u001b[K In function ‘\u001b[01m\u001b[KrleToBbox\u001b[m\u001b[K’:\n\u001b[01m\u001b[K../common/maskApi.c:141:31:\u001b[m\u001b[K \u001b[01;35m\u001b[Kwarning: \u001b[m\u001b[K‘\u001b[01m\u001b[Kxp\u001b[m\u001b[K’ may be used uninitialized in this function [\u001b[01;35m\u001b[K-Wmaybe-uninitialized\u001b[m\u001b[K]\n if(j%2==0) xp=x; else if\u001b[01;35m\u001b[K(\u001b[m\u001b[Kxp<x) { ys=0; ye=h-1; }\n \u001b[01;35m\u001b[K^\u001b[m\u001b[K\nx86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/local/lib/python3.6/dist-packages/numpy/core/include -I../common -I/usr/include/python3.6m -c pycocotools/_mask.c -o build/temp.linux-x86_64-3.6/pycocotools/_mask.o -Wno-cpp -Wno-unused-function -std=c99\ncreating build/lib.linux-x86_64-3.6\ncreating build/lib.linux-x86_64-3.6/pycocotools\nx86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/../common/maskApi.o build/temp.linux-x86_64-3.6/pycocotools/_mask.o -o build/lib.linux-x86_64-3.6/pycocotools/_mask.cpython-36m-x86_64-linux-gnu.so\ncopying build/lib.linux-x86_64-3.6/pycocotools/_mask.cpython-36m-x86_64-linux-gnu.so -> pycocotools\nrm -rf build\n" ] ], [ [ "# Protobuf Compilation", "_____no_output_____" ] ], [ [ "%cd /content/models/research/\n!protoc object_detection/protos/*.proto --python_out=.", "/content/models/research\n" ] ], [ [ "# Add Libraries to PYTHONPATH", "_____no_output_____" ] ], [ [ "%cd /content/models/research/\n%env PYTHONPATH=/env/python:/content/models/research:/content/models/research/slim:/content/models/research/object_detection\n%env", "/content/models/research\nenv: PYTHONPATH=/env/python:/content/models/research:/content/models/research/slim:/content/models/research/object_detection\n" ] ], [ [ "# Testing the Installation", "_____no_output_____" ] ], [ [ "!python object_detection/builders/model_builder_test.py", "WARNING: Logging before flag parsing goes to 
stderr.\nW0827 16:47:24.121168 140720291608448 lazy_loader.py:50] \nThe TensorFlow contrib module will not be included in TensorFlow 2.0.\nFor more information, please see:\n * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md\n * https://github.com/tensorflow/addons\n * https://github.com/tensorflow/io (for I/O related ops)\nIf you depend on functionality not listed there, please file an issue.\n\nW0827 16:47:24.430399 140720291608448 deprecation_wrapper.py:119] From /content/models/research/slim/nets/inception_resnet_v2.py:373: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.\n\nW0827 16:47:24.483575 140720291608448 deprecation_wrapper.py:119] From /content/models/research/slim/nets/mobilenet/mobilenet.py:397: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.\n\nRunning tests under Python 3.6.8: /usr/bin/python3\n[ RUN ] ModelBuilderTest.test_create_faster_rcnn_model_from_config_with_example_miner\n[ OK ] ModelBuilderTest.test_create_faster_rcnn_model_from_config_with_example_miner\n[ RUN ] ModelBuilderTest.test_create_faster_rcnn_models_from_config_faster_rcnn_with_matmul\n[ OK ] ModelBuilderTest.test_create_faster_rcnn_models_from_config_faster_rcnn_with_matmul\n[ RUN ] ModelBuilderTest.test_create_faster_rcnn_models_from_config_faster_rcnn_without_matmul\n[ OK ] ModelBuilderTest.test_create_faster_rcnn_models_from_config_faster_rcnn_without_matmul\n[ RUN ] ModelBuilderTest.test_create_faster_rcnn_models_from_config_mask_rcnn_with_matmul\n[ OK ] ModelBuilderTest.test_create_faster_rcnn_models_from_config_mask_rcnn_with_matmul\n[ RUN ] ModelBuilderTest.test_create_faster_rcnn_models_from_config_mask_rcnn_without_matmul\n[ OK ] ModelBuilderTest.test_create_faster_rcnn_models_from_config_mask_rcnn_without_matmul\n[ RUN ] ModelBuilderTest.test_create_rfcn_model_from_config\n[ OK ] ModelBuilderTest.test_create_rfcn_model_from_config\n[ RUN ] ModelBuilderTest.test_create_ssd_fpn_model_from_config\n[ OK ] ModelBuilderTest.test_create_ssd_fpn_model_from_config\n[ RUN ] ModelBuilderTest.test_create_ssd_models_from_config\n[ OK ] ModelBuilderTest.test_create_ssd_models_from_config\n[ RUN ] ModelBuilderTest.test_invalid_faster_rcnn_batchnorm_update\n[ OK ] ModelBuilderTest.test_invalid_faster_rcnn_batchnorm_update\n[ RUN ] ModelBuilderTest.test_invalid_first_stage_nms_iou_threshold\n[ OK ] ModelBuilderTest.test_invalid_first_stage_nms_iou_threshold\n[ RUN ] ModelBuilderTest.test_invalid_model_config_proto\n[ OK ] ModelBuilderTest.test_invalid_model_config_proto\n[ RUN ] ModelBuilderTest.test_invalid_second_stage_batch_size\n[ OK ] ModelBuilderTest.test_invalid_second_stage_batch_size\n[ RUN ] ModelBuilderTest.test_session\n[ SKIPPED ] ModelBuilderTest.test_session\n[ RUN ] ModelBuilderTest.test_unknown_faster_rcnn_feature_extractor\n[ OK ] ModelBuilderTest.test_unknown_faster_rcnn_feature_extractor\n[ RUN ] ModelBuilderTest.test_unknown_meta_architecture\n[ OK ] ModelBuilderTest.test_unknown_meta_architecture\n[ RUN ] ModelBuilderTest.test_unknown_ssd_feature_extractor\n[ OK ] ModelBuilderTest.test_unknown_ssd_feature_extractor\n----------------------------------------------------------------------\nRan 16 tests in 0.152s\n\nOK (skipped=1)\n" ], [ "%cd /content/models/research/object_detection", "/content/models/research/object_detection\n" ] ], [ [ "## [Tensorflow Face Detector](https://github.com/yeephycho/tensorflow-face-detection)", "_____no_output_____" ] ], [ [ "%cd /content", "/content\n" ], [ "!git 
clone https://github.com/yeephycho/tensorflow-face-detection.git", "Cloning into 'tensorflow-face-detection'...\nremote: Enumerating objects: 118, done.\u001b[K\nremote: Total 118 (delta 0), reused 0 (delta 0), pack-reused 118\nReceiving objects: 100% (118/118), 20.31 MiB | 7.90 MiB/s, done.\nResolving deltas: 100% (59/59), done.\n" ], [ "%cd tensorflow-face-detection", "/content/tensorflow-face-detection\n" ], [ "!wget https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg", "--2019-08-27 16:47:32-- https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg\nResolving storage.googleapis.com (storage.googleapis.com)... 74.125.23.128, 2404:6800:4008:c02::80\nConnecting to storage.googleapis.com (storage.googleapis.com)|74.125.23.128|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 61306 (60K) [image/jpeg]\nSaving to: ‘grace_hopper.jpg’\n\n\rgrace_hopper.jpg 0%[ ] 0 --.-KB/s \rgrace_hopper.jpg 100%[===================>] 59.87K --.-KB/s in 0s \n\n2019-08-27 16:47:32 (124 MB/s) - ‘grace_hopper.jpg’ saved [61306/61306]\n\n" ], [ "filename = 'grace_hopper.jpg'", "_____no_output_____" ], [ "#!python inference_usbCam_face.py grace_hopper.jpg", "_____no_output_____" ], [ "import sys\nimport time\nimport numpy as np\nimport tensorflow as tf\nimport cv2\n\nfrom utils import label_map_util\nfrom utils import visualization_utils_color as vis_util\n\n# Path to frozen detection graph. This is the actual model that is used for the object detection.\nPATH_TO_CKPT = './model/frozen_inference_graph_face.pb'\n\n# List of the strings that is used to add correct label for each box.\nPATH_TO_LABELS = './protos/face_label_map.pbtxt'\n\nNUM_CLASSES = 2\n\nlabel_map = label_map_util.load_labelmap(PATH_TO_LABELS)\ncategories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)\ncategory_index = label_map_util.create_category_index(categories)", "WARNING: Logging before flag parsing goes to stderr.\nW0827 16:47:35.285600 140077768431488 deprecation_wrapper.py:119] From /content/tensorflow-face-detection/utils/label_map_util.py:116: The name tf.gfile.GFile is deprecated. 
Please use tf.io.gfile.GFile instead.\n\n" ], [ "class TensoflowFaceDector(object):\n def __init__(self, PATH_TO_CKPT):\n \"\"\"Tensorflow detector\n \"\"\"\n\n self.detection_graph = tf.Graph()\n with self.detection_graph.as_default():\n od_graph_def = tf.GraphDef()\n with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:\n serialized_graph = fid.read()\n od_graph_def.ParseFromString(serialized_graph)\n tf.import_graph_def(od_graph_def, name='')\n\n\n with self.detection_graph.as_default():\n config = tf.ConfigProto()\n config.gpu_options.allow_growth = True\n self.sess = tf.Session(graph=self.detection_graph, config=config)\n self.windowNotSet = True\n\n\n def run(self, image):\n \"\"\"image: bgr image\n return (boxes, scores, classes, num_detections)\n \"\"\"\n\n image_np = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n\n # the array based representation of the image will be used later in order to prepare the\n # result image with boxes and labels on it.\n # Expand dimensions since the model expects images to have shape: [1, None, None, 3]\n image_np_expanded = np.expand_dims(image_np, axis=0)\n image_tensor = self.detection_graph.get_tensor_by_name('image_tensor:0')\n # Each box represents a part of the image where a particular object was detected.\n boxes = self.detection_graph.get_tensor_by_name('detection_boxes:0')\n # Each score represent how level of confidence for each of the objects.\n # Score is shown on the result image, together with the class label.\n scores = self.detection_graph.get_tensor_by_name('detection_scores:0')\n classes = self.detection_graph.get_tensor_by_name('detection_classes:0')\n num_detections = self.detection_graph.get_tensor_by_name('num_detections:0')\n # Actual detection.\n start_time = time.time()\n (boxes, scores, classes, num_detections) = self.sess.run(\n [boxes, scores, classes, num_detections],\n feed_dict={image_tensor: image_np_expanded})\n elapsed_time = time.time() - start_time\n print('inference time cost: {}'.format(elapsed_time))\n\n return (boxes, scores, classes, num_detections)", "_____no_output_____" ], [ "# This is needed to display the images.\n%matplotlib inline", "_____no_output_____" ], [ "tDetector = TensoflowFaceDector(PATH_TO_CKPT)\n\noriginal = cv2.imread(filename)\nimage = cv2.cvtColor(original, cv2.COLOR_BGR2RGB)\n\n(boxes, scores, classes, num_detections) = tDetector.run(image)\n\nvis_util.visualize_boxes_and_labels_on_image_array(\n image,\n np.squeeze(boxes),\n np.squeeze(classes).astype(np.int32),\n np.squeeze(scores),\n category_index,\n use_normalized_coordinates=True,\n line_thickness=4)\n\nfrom matplotlib import pyplot as plt\nplt.imshow(image)", "inference time cost: 2.3050696849823\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
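The face-detection record above returns boxes in the TF Object Detection convention of normalized [ymin, xmin, ymax, xmax]. As an illustration only (synthetic image and boxes, nothing taken from the repo), converting those to pixel rectangles with OpenCV looks roughly like this:

```python
import numpy as np
import cv2

def draw_detections(image, boxes, scores, min_score=0.5):
    """Draw boxes given in normalized [ymin, xmin, ymax, xmax] order."""
    h, w = image.shape[:2]
    out = image.copy()
    for (ymin, xmin, ymax, xmax), score in zip(boxes, scores):
        if score < min_score:
            continue
        p1 = (int(xmin * w), int(ymin * h))        # top-left corner in pixels
        p2 = (int(xmax * w), int(ymax * h))        # bottom-right corner in pixels
        cv2.rectangle(out, p1, p2, (0, 255, 0), 2)
        cv2.putText(out, f"{score:.2f}", (p1[0], max(p1[1] - 5, 10)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return out

# synthetic example: one confident box on a blank image
img = np.zeros((480, 640, 3), dtype=np.uint8)
boxes = np.array([[0.2, 0.3, 0.6, 0.5], [0.1, 0.1, 0.2, 0.2]])
scores = np.array([0.93, 0.12])
annotated = draw_detections(img, boxes, scores)
print(annotated.shape)
```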
d020c008258d93f9003722b2f6464169e94f20b5
16,517
ipynb
Jupyter Notebook
day3.ipynb
msse-2021-bootcamp/team2-project
3915fd811be09e79d7ea5c9a368d7849ef5b629b
[ "BSD-3-Clause" ]
null
null
null
day3.ipynb
msse-2021-bootcamp/team2-project
3915fd811be09e79d7ea5c9a368d7849ef5b629b
[ "BSD-3-Clause" ]
22
2021-08-10T20:36:55.000Z
2021-08-20T02:35:02.000Z
day3.ipynb
msse-2021-bootcamp/team2-project
3915fd811be09e79d7ea5c9a368d7849ef5b629b
[ "BSD-3-Clause" ]
null
null
null
30.250916
162
0.516256
[ [ [ "# Writing a Molecular Monte Carlo Simulation\n\nStarting today, make sure you have the functions\n\n1. `calculate_LJ` - written in class\n1. `read_xyz` - provided in class\n1. `calculate_total_energy` - modified version provided in this notebook written for homework which has cutoff\n1. `calculate_distance` - should be the version written for homework which accounts for periodic boundaries.\n1. `calculate_tail_correction` - written for homework \n", "_____no_output_____" ] ], [ [ "# add imports here\nimport math\nimport random\n", "_____no_output_____" ], [ "def calculate_total_energy(coordinates, box_length, cutoff):\n \"\"\"\n Calculate the total energy of a set of particles using the Lennard Jones potential.\n \n Parameters\n ----------\n coordinates : list\n A nested list containing the x, y,z coordinate for each particle\n box_length : float\n The length of the box. Assumes cubic box.\n cutoff : float\n The cutoff length\n \n Returns\n -------\n total_energy : float\n The total energy of the set of coordinates.\n \"\"\"\n \n total_energy = 0\n num_atoms = len(coordinates)\n\n for i in range(num_atoms):\n for j in range(i+1, num_atoms):\n # Calculate the distance between the particles - exercise.\n dist_ij = calculate_distance(coordinates[i], coordinates[j], box_length)\n\n if dist_ij < cutoff:\n # Calculate the pairwise LJ energy\n LJ_ij = calculate_LJ(dist_ij)\n\n # Add to total energy.\n total_energy += LJ_ij\n return total_energy\n\ndef read_xyz(filepath):\n \"\"\"\n Reads coordinates from an xyz file.\n \n Parameters\n ----------\n filepath : str\n The path to the xyz file to be processed.\n \n Returns\n -------\n atomic_coordinates : list\n A two dimensional list containing atomic coordinates\n \"\"\"\n \n with open(filepath) as f:\n box_length = float(f.readline().split()[0])\n num_atoms = float(f.readline())\n coordinates = f.readlines()\n \n atomic_coordinates = []\n \n for atom in coordinates:\n split_atoms = atom.split()\n \n float_coords = []\n \n # We split this way to get rid of the atom label.\n for coord in split_atoms[1:]:\n float_coords.append(float(coord))\n \n atomic_coordinates.append(float_coords)\n \n return atomic_coordinates, box_length\n\ndef calculate_LJ(r_ij):\n \"\"\"\n The LJ interaction energy between two particles.\n\n Computes the pairwise Lennard Jones interaction energy based on the separation distance in reduced units.\n\n Parameters\n ----------\n r_ij : float\n The distance between the particles in reduced units.\n \n Returns\n -------\n pairwise_energy : float\n The pairwise Lennard Jones interaction energy in reduced units.\n\n Examples\n --------\n >>> calculate_LJ(1)\n 0\n\n \"\"\"\n \n r6_term = math.pow(1/r_ij, 6)\n r12_term = math.pow(r6_term, 2)\n \n pairwise_energy = 4 * (r12_term - r6_term)\n \n return pairwise_energy\n\n\ndef calculate_distance(coord1, coord2, box_length=None):\n \"\"\"\n Calculate the distance between two points. 
When box_length is set, the minimum image convention is used to calculate the distance between the points.\n\n Parameters\n ----------\n coord1, coord2 : list\n The coordinates of the points, [x, y, z]\n \n box_length : float, optional\n The box length\n\n Returns\n -------\n distance : float\n The distance between the two points accounting for periodic boundaries\n \"\"\"\n distance = 0\n \n for i in range(3):\n hold_dist = abs(coord2[i] - coord1[i])\n \n if (box_length): \n if hold_dist > box_length/2:\n hold_dist = hold_dist - (box_length * round(hold_dist/box_length))\n distance += math.pow(hold_dist, 2)\n\n return math.sqrt(distance)\n\n## Add your group's tail correction function\n\ndef calculate_tail_correction(num_particles, box_length, cutoff):\n \"\"\"\n The tail correction associated with using a cutoff radius.\n \n Computes the tail correction based on a cutoff radius used in the LJ energy calculation in reduced units.\n \n Parameters\n ----------\n num_particles : int\n The number of particles in the system.\n \n box_length : int\n Size of the box length of the system, used to calculate volume.\n \n cutoff : int\n Cutoff distance.\n \n Returns\n -------\n tail_correction : float\n The tail correction associated with using the cutoff.\n \"\"\"\n \n brackets = (1/3*math.pow(1/cutoff,9)) - math.pow(1/cutoff,3)\n volume = box_length**3\n \n constant = ((8*math.pi*(num_particles**2))/(3*volume))\n \n tail_correction = constant * brackets\n \n return tail_correction\n ", "_____no_output_____" ] ], [ [ "The Metropolis Criterion\n$$ P_{acc}(m \\rightarrow n) = \\text{min} \\left[\n\t\t1,e^{-\\beta \\Delta U}\n\t\\right] $$", "_____no_output_____" ] ], [ [ "def accept_or_reject(delta_U, beta):\n \"\"\"\n Accept or reject a move based on the Metropolis criterion.\n \n Parameters\n ----------\n detlta_U : float\n The change in energy for moving system from state m to n.\n beta : float\n 1/temperature\n \n Returns\n -------\n boolean\n Whether the move is accepted.\n \"\"\"\n if delta_U <= 0.0:\n accept = True\n else:\n #Generate a random number on (0,1)\n random_number = random.random()\n p_acc = math.exp(-beta*delta_U)\n \n if random_number < p_acc:\n accept = True\n else:\n accept = False\n return accept", "_____no_output_____" ], [ "# Sanity checks - test cases\ndelta_energy = -1\nbeta = 1\naccepted = accept_or_reject(delta_energy, beta)\nassert accepted", "_____no_output_____" ], [ "# Sanity checks - test cases\ndelta_energy = 0\nbeta = 1\naccepted = accept_or_reject(delta_energy, beta)\nassert accepted", "_____no_output_____" ], [ "# To test function with random numbers\n# can set random seed\n\n#To set seed\nrandom.seed(0)\nrandom.random()", "_____no_output_____" ], [ "delta_energy = 1\nbeta = 1\nrandom.seed(0)\naccepted = accept_or_reject(delta_energy, beta)\nassert accepted is False", "_____no_output_____" ], [ "#Clear seed\nrandom.seed()", "_____no_output_____" ], [ "def calculate_pair_energy(coordinates, i_particle, box_length, cutoff):\n \"\"\"\n Calculate the interaction energy of a particle with its environment (all other particles in the system)\n \n Parameters\n ----------\n coordinates : list\n The coordinates for all the particles in the system.\n \n i_particle : int\n The particle number for which to calculate the energy.\n \n cutoff : float\n The simulation cutoff. 
Beyond this distance, interactions are not calculated.\n \n box_length : float\n The length of the box for periodic bounds\n \n Returns\n -------\n e_total : float\n The pairwise interaction energy of the ith particles with all other particles in the system\n \"\"\"\n \n e_total = 0.0\n #creates a list of the coordinates for the i_particle\n i_position = coordinates[i_particle]\n \n num_atoms = len(coordinates)\n \n for j_particle in range(num_atoms):\n \n if i_particle != j_particle:\n #creates a list of coordinates for the j_particle\n j_position = coordinates[j_particle]\n rij = calculate_distance(i_position, j_position, box_length)\n \n if rij < cutoff:\n e_pair = calculate_LJ(rij)\n e_total += e_pair\n \n return e_total\n ", "_____no_output_____" ], [ "## Sanity checks\ntest_coords = [[0, 0, 0], [0, 0, 2**(1/6)], [0, 0, 2*2**(1/6)]]\n\n# What do you expect the result to be for particle index 1 (use cutoff of 3)?\nassert calculate_pair_energy(test_coords, 1, 10, 3) == -2\n# What do you expect the result to be for particle index 0 (use cutoff of 2)?\nassert calculate_pair_energy(test_coords, 0, 10, 2) == -1\n\nassert calculate_pair_energy(test_coords, 0, 10, 3) == calculate_pair_energy(test_coords, 2, 10, 3)\n", "_____no_output_____" ] ], [ [ "# Monte Carlo Loop", "_____no_output_____" ] ], [ [ "# Read or generate initial coordinates\ncoordinates, box_length = read_xyz('lj_sample_configurations/lj_sample_config_periodic1.txt')\n\n# Set simulation parameters\nreduced_temperature = 0.9\nnum_steps = 5000\nmax_displacement = 0.1\ncutoff = 3\n #how often to print an update\nfreq = 1000\n\n# Calculated quantities\nbeta = 1 / reduced_temperature\nnum_particles = len(coordinates)\n\n# Energy calculations\ntotal_energy = calculate_total_energy(coordinates, box_length, cutoff)\nprint(total_energy)\ntotal_correction = calculate_tail_correction(num_particles, box_length, cutoff)\nprint(total_correction)\ntotal_energy += total_correction\n\n\nfor step in range(num_steps):\n # 1. Randomly pick one of the particles.\n random_particle = random.randrange(num_particles)\n \n # 2. Calculate the interaction energy of the selected particle with the system.\n current_energy = calculate_pair_energy(coordinates, random_particle, box_length, cutoff)\n \n # 3. Generate a random x, y, z displacement.\n x_rand = random.uniform(-max_displacement, max_displacement)\n y_rand = random.uniform(-max_displacement, max_displacement)\n z_rand = random.uniform(-max_displacement, max_displacement)\n \n # 4. Modify the coordinate of Nth particle by generated displacements.\n coordinates[random_particle][0] += x_rand\n coordinates[random_particle][1] += y_rand\n coordinates[random_particle][2] += z_rand\n \n # 5. Calculate the interaction energy of the moved particle with the system and store this value.\n proposed_energy = calculate_pair_energy(coordinates, random_particle, box_length, cutoff)\n delta_energy = proposed_energy - current_energy\n \n # 6. Calculate if we accept the move based on energy difference.\n accept = accept_or_reject(delta_energy, beta)\n \n # 7. If accepted, move the particle.\n if accept:\n total_energy += delta_energy\n else:\n #Move not accepted, roll back coordinates\n coordinates[random_particle][0] -= x_rand\n coordinates[random_particle][1] -= y_rand\n coordinates[random_particle][2] -= z_rand\n \n # 8. 
Print the energy if step is a multiple of freq.\n if step % freq == 0:\n print(step, total_energy/num_particles)", "-4351.540194543858\n-198.4888837441566\n0 -5.6871567358709845\n1000 -5.651180182170634\n2000 -5.637020769853117\n3000 -5.63623029990943\n4000 -5.62463482708468\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
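A quick empirical check of the Metropolis rule used in the Monte Carlo record above: for a fixed energy increase, the accepted fraction should approach exp(-beta * delta_U). The sketch re-implements the acceptance test locally rather than importing the notebook's function, and uses the same reduced temperature of 0.9:

```python
import math
import random

def metropolis_accept(delta_U, beta):
    """Accept a trial move with probability min(1, exp(-beta * delta_U))."""
    if delta_U <= 0.0:
        return True
    return random.random() < math.exp(-beta * delta_U)

random.seed(42)
beta = 1.0 / 0.9
for delta_U in (0.5, 1.0, 2.0):
    n_trials = 200_000
    accepted = sum(metropolis_accept(delta_U, beta) for _ in range(n_trials))
    print(f"dU={delta_U}: empirical {accepted / n_trials:.4f}  "
          f"theoretical {math.exp(-beta * delta_U):.4f}")
```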
d020d48211309c57de15216ab18f4dbbb2e32500
18,357
ipynb
Jupyter Notebook
playground/eda.ipynb
tukai21/arxiv-ranking
5b54c1049c3012bec8f30b9e1ff20a1caa024911
[ "MIT" ]
null
null
null
playground/eda.ipynb
tukai21/arxiv-ranking
5b54c1049c3012bec8f30b9e1ff20a1caa024911
[ "MIT" ]
null
null
null
playground/eda.ipynb
tukai21/arxiv-ranking
5b54c1049c3012bec8f30b9e1ff20a1caa024911
[ "MIT" ]
null
null
null
31.219388
960
0.467015
[ [ [ "%load_ext autoreload\n%autoreload 2\n\nimport os\nimport re\nfrom glob import glob\nimport json\nimport numpy as np\nimport pandas as pd\nfrom difflib import SequenceMatcher\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns", "_____no_output_____" ] ], [ [ "## Data Acquisition", "_____no_output_____" ] ], [ [ "arxiv_files = sorted(glob('../data/arxiv/*'))\nscirate_files = sorted(glob('../data/scirate/*'))", "_____no_output_____" ], [ "arxiv_data = []\nfor file in arxiv_files:\n with open(file, 'r') as f:\n arxiv_data.append(json.load(f))\n \nprint(len(arxiv_data))\n\nscirate_data = []\nfor file in scirate_files:\n with open(file, 'r') as f:\n scirate_data.append(json.load(f))\n \nprint(len(scirate_data))", "62\n62\n" ], [ "arxiv_data[-1]['date']", "_____no_output_____" ], [ "# 2018-03-30 Arxiv top\narxiv_data[-1]['papers'][0]", "_____no_output_____" ], [ "# 2018-03-30 Scirate top\nscirate_data[-1]['papers'][0]", "_____no_output_____" ] ], [ [ "## EDA", "_____no_output_____" ], [ "Entry ID: paper name (DOI?)\nWe can create an arbitrary paper id that corresponds to each paper title, authors, and DOI.\n\nPossible features:\n\n- Arxiv order\n- Scirate order\n- Paper length (pages)\n- Title length (words)\n- Number of authors\n- Total # of citations of the authors (or first author? last author?)\n- Bag of Words of title\n- Bag of Words of abstract", "_____no_output_____" ] ], [ [ "# obtain features from both Arxiv and Scirate paper lists\n\nindex = []\ntitle = []\nauthors = []\nnum_authors = []\ntitle_length = []\narxiv_order = []\nsubmit_time = []\nsubmit_weekday = []\npaper_size = []\nnum_versions = []\n\nfor res in arxiv_data:\n date = res['date']\n papers = res['papers']\n for paper in papers:\n # create arbitrary paper id - currently, it is \"date + Arxiv order\"\n if paper['order'] < 10:\n idx = '_000' + str(paper['order'])\n elif 10 <= paper['order'] < 100:\n idx = '_00' + str(paper['order'])\n elif 100 <= paper['order'] < 1000:\n idx = '_0' + str(paper['order'])\n else:\n idx = '_' + str(paper['order'])\n index.append(date + idx)\n \n title.append(paper['title'])\n authors.append(paper['authors'])\n num_authors.append(len(paper['authors']))\n title_length.append(len(paper['title']))\n arxiv_order.append(paper['order'])\n submit_time.append(paper['submit_time'])\n submit_weekday.append(paper['submit_weekday'])\n paper_size.append(int(re.findall('\\d+', paper['size'])[0]))\n num_versions.append(paper['num_versions'])", "_____no_output_____" ], [ "len(index)", "_____no_output_____" ], [ "# Scirate rank - string matching to find index of each paper in Arxiv list\n### This process is pretty slow - needs to be refactored ###\n\nscirate_rank = [-1 for _ in range(len(index))]\nscite_score = [-1 for _ in range(len(index))]\n\nfor res in scirate_data:\n papers = res['papers']\n for paper in papers:\n title_sci = paper['title']\n try:\n idx = title.index(title_sci)\n except:\n # if there is no just match, use difflib SequenceMatcher for title matching\n str_match = np.array([SequenceMatcher(a=title_sci, b=title_arx).ratio() for title_arx in title])\n idx = np.argmax(str_match)\n scirate_rank[idx] = paper['rank']\n scite_score[idx] = paper['scite_count']", "_____no_output_____" ], [ "# columns for pandas DataFrame\ncolumns = ['title', 'authors', 'num_authors', 'title_length', 'arxiv_order', 'submit_time', 'submit_weekday',\n 'paper_size', 'num_versions', 'scirate_rank', 'scite_score']", "_____no_output_____" ], [ "# this is too dirty...\ntitle = np.array(title).reshape(-1, 
1)\nauthors = np.array(authors).reshape(-1, 1)\nnum_authors = np.array(num_authors).reshape(-1, 1)\ntitle_length = np.array(title_length).reshape(-1, 1)\narxiv_order = np.array(arxiv_order).reshape(-1, 1)\nsubmit_time = np.array(submit_time).reshape(-1, 1)\nsubmit_weekday = np.array(submit_weekday).reshape(-1, 1)\npaper_size = np.array(paper_size).reshape(-1, 1)\nnum_versions = np.array(num_versions).reshape(-1, 1)\nscirate_rank = np.array(scirate_rank).reshape(-1, 1)\nscite_score = np.array(scite_score).reshape(-1, 1)\n\ndata = np.concatenate([\n title,\n authors,\n num_authors,\n title_length,\n arxiv_order,\n submit_time,\n submit_weekday,\n paper_size,\n num_versions,\n scirate_rank,\n scite_score\n], axis=1)\n\ndf = pd.DataFrame(data=data, columns=columns, index=index)", "_____no_output_____" ], [ "len(df)", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df[['arxiv_order', 'scite_score', 'scirate_rank']].astype(float).corr(method='spearman')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
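The EDA record above flags its SequenceMatcher matching loop as slow. A common workaround is an exact dictionary lookup first, with difflib's get_close_matches as the fuzzy fallback; the titles below are toy examples, not the real Arxiv/Scirate lists:

```python
from difflib import get_close_matches

arxiv_titles = [
    "Quantum error correction with surface codes",
    "Variational quantum eigensolvers revisited",
    "A survey of tensor network methods",
]
title_to_idx = {t: i for i, t in enumerate(arxiv_titles)}   # O(1) exact lookups

def find_index(scirate_title):
    # fast path: exact match
    if scirate_title in title_to_idx:
        return title_to_idx[scirate_title]
    # slow path: best fuzzy candidate above a similarity cutoff
    close = get_close_matches(scirate_title, arxiv_titles, n=1, cutoff=0.6)
    return title_to_idx[close[0]] if close else None

print(find_index("A survey of tensor network methods"))      # exact hit -> 2
print(find_index("A survey of tensor-network methods"))      # fuzzy hit -> 2
print(find_index("1234567890"))                              # no close match -> None
```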
d020f5b1bd9f0c29260cc53abfe09763db1a4ae0
11,492
ipynb
Jupyter Notebook
Project/Starbucks/.ipynb_checkpoints/Starbucks-checkpoint.ipynb
kundan7kumar/Machine-Learning
8b62b68324713007c967a6120a0f48498992ce2f
[ "MIT" ]
null
null
null
Project/Starbucks/.ipynb_checkpoints/Starbucks-checkpoint.ipynb
kundan7kumar/Machine-Learning
8b62b68324713007c967a6120a0f48498992ce2f
[ "MIT" ]
null
null
null
Project/Starbucks/.ipynb_checkpoints/Starbucks-checkpoint.ipynb
kundan7kumar/Machine-Learning
8b62b68324713007c967a6120a0f48498992ce2f
[ "MIT" ]
null
null
null
33.8
918
0.505221
[ [ [ "## Portfolio Exercise: Starbucks\n<br>\n\n<img src=\"https://opj.ca/wp-content/uploads/2018/02/New-Starbucks-Logo-1200x969.jpg\" width=\"200\" height=\"200\">\n<br>\n<br>\n \n#### Background Information\n\nThe dataset you will be provided in this portfolio exercise was originally used as a take-home assignment provided by Starbucks for their job candidates. The data for this exercise consists of about 120,000 data points split in a 2:1 ratio among training and test files. In the experiment simulated by the data, an advertising promotion was tested to see if it would bring more customers to purchase a specific product priced at $10. Since it costs the company 0.15 to send out each promotion, it would be best to limit that promotion only to those that are most receptive to the promotion. Each data point includes one column indicating whether or not an individual was sent a promotion for the product, and one column indicating whether or not that individual eventually purchased that product. Each individual also has seven additional features associated with them, which are provided abstractly as V1-V7.\n\n#### Optimization Strategy\n\nYour task is to use the training data to understand what patterns in V1-V7 to indicate that a promotion should be provided to a user. Specifically, your goal is to maximize the following metrics:\n\n* **Incremental Response Rate (IRR)** \n\nIRR depicts how many more customers purchased the product with the promotion, as compared to if they didn't receive the promotion. Mathematically, it's the ratio of the number of purchasers in the promotion group to the total number of customers in the purchasers group (_treatment_) minus the ratio of the number of purchasers in the non-promotional group to the total number of customers in the non-promotional group (_control_).\n\n$$ IRR = \\frac{purch_{treat}}{cust_{treat}} - \\frac{purch_{ctrl}}{cust_{ctrl}} $$\n\n\n* **Net Incremental Revenue (NIR)**\n\nNIR depicts how much is made (or lost) by sending out the promotion. Mathematically, this is 10 times the total number of purchasers that received the promotion minus 0.15 times the number of promotions sent out, minus 10 times the number of purchasers who were not given the promotion.\n\n$$ NIR = (10\\cdot purch_{treat} - 0.15 \\cdot cust_{treat}) - 10 \\cdot purch_{ctrl}$$\n\nFor a full description of what Starbucks provides to candidates see the [instructions available here](https://drive.google.com/open?id=18klca9Sef1Rs6q8DW4l7o349r8B70qXM).\n\nBelow you can find the training data provided. Explore the data and different optimization strategies.\n\n#### How To Test Your Strategy?\n\nWhen you feel like you have an optimization strategy, complete the `promotion_strategy` function to pass to the `test_results` function. \nFrom past data, we know there are four possible outomes:\n\nTable of actual promotion vs. predicted promotion customers: \n\n<table>\n<tr><th></th><th colspan = '2'>Actual</th></tr>\n<tr><th>Predicted</th><th>Yes</th><th>No</th></tr>\n<tr><th>Yes</th><td>I</td><td>II</td></tr>\n<tr><th>No</th><td>III</td><td>IV</td></tr>\n</table>\n\nThe metrics are only being compared for the individuals we predict should obtain the promotion – that is, quadrants I and II. Since the first set of individuals that receive the promotion (in the training set) receive it randomly, we can expect that quadrants I and II will have approximately equivalent participants. 
\n\nComparing quadrant I to II then gives an idea of how well your promotion strategy will work in the future. \n\nGet started by reading in the data below. See how each variable or combination of variables along with a promotion influences the chance of purchasing. When you feel like you have a strategy for who should receive a promotion, test your strategy against the test dataset used in the final `test_results` function.", "_____no_output_____" ] ], [ [ "# load in packages\nfrom itertools import combinations\n\nfrom test_results import test_results, score\nimport numpy as np\nimport pandas as pd\nimport scipy as sp\nimport sklearn as sk\n\nimport matplotlib.pyplot as plt\nimport seaborn as sb\n%matplotlib inline\n\n# load in the data\ntrain_data = pd.read_csv('./training.csv')\ntrain_data.head()", "_____no_output_____" ], [ "# Cells for you to work and document as necessary - \n# definitely feel free to add more cells as you need", "_____no_output_____" ], [ "def promotion_strategy(df):\n '''\n INPUT \n df - a dataframe with *only* the columns V1 - V7 (same as train_data)\n\n OUTPUT\n promotion_df - np.array with the values\n 'Yes' or 'No' related to whether or not an \n individual should recieve a promotion \n should be the length of df.shape[0]\n \n Ex:\n INPUT: df\n \n V1\tV2\t V3\tV4\tV5\tV6\tV7\n 2\t30\t-1.1\t1\t1\t3\t2\n 3\t32\t-0.6\t2\t3\t2\t2\n 2\t30\t0.13\t1\t1\t4\t2\n \n OUTPUT: promotion\n \n array(['Yes', 'Yes', 'No'])\n indicating the first two users would recieve the promotion and \n the last should not.\n '''\n \n \n \n \n return promotion", "_____no_output_____" ], [ "# This will test your results, and provide you back some information \n# on how well your promotion_strategy will work in practice\n\ntest_results(promotion_strategy)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ] ]
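The IRR and NIR formulas in the Starbucks record above map directly onto a few lines of pandas. The column names 'Promotion' and 'purchase' are an assumption (the record never prints the frame's columns), so treat this as a sketch rather than the exercise's reference solution:

```python
import pandas as pd

def irr_nir(df, promo_col="Promotion", purchase_col="purchase"):
    """Incremental response rate and net incremental revenue, per the formulas above."""
    treat = df[df[promo_col] == "Yes"]   # customers who were sent the promotion
    ctrl = df[df[promo_col] == "No"]     # customers who were not

    irr = treat[purchase_col].mean() - ctrl[purchase_col].mean()
    nir = (10 * treat[purchase_col].sum()
           - 0.15 * len(treat)
           - 10 * ctrl[purchase_col].sum())
    return irr, nir

# tiny synthetic example
toy = pd.DataFrame({
    "Promotion": ["Yes", "Yes", "Yes", "No", "No", "No"],
    "purchase":  [1, 0, 1, 0, 0, 1],
})
print(irr_nir(toy))
```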
d020fa3c92bb830b6bc8b09c8376d3ab37f1afb0
78,731
ipynb
Jupyter Notebook
0702_ML19_clustering_kmeans.ipynb
msio900/minsung_machinelearning
0ef5185ed460167686dfc6555115f28f27b5f2f3
[ "Apache-2.0" ]
null
null
null
0702_ML19_clustering_kmeans.ipynb
msio900/minsung_machinelearning
0ef5185ed460167686dfc6555115f28f27b5f2f3
[ "Apache-2.0" ]
null
null
null
0702_ML19_clustering_kmeans.ipynb
msio900/minsung_machinelearning
0ef5185ed460167686dfc6555115f28f27b5f2f3
[ "Apache-2.0" ]
null
null
null
115.272328
36,262
0.814825
[ [ [ "## 리눅스 명령어\n", "_____no_output_____" ] ], [ [ "!ls", "sample_data Wholesale_customers_data.csv\n" ], [ "!ls -l", "total 20\ndrwxr-xr-x 1 root root 4096 Jun 15 13:37 sample_data\n-rw-r--r-- 1 root root 15021 Jul 2 05:28 Wholesale_customers_data.csv\n" ], [ "!pwd # 현재 위치", "/content\n" ], [ "!ls -l ./sample_data", "total 55504\n-rwxr-xr-x 1 root root 1697 Jan 1 2000 anscombe.json\n-rw-r--r-- 1 root root 301141 Jun 15 13:37 california_housing_test.csv\n-rw-r--r-- 1 root root 1706430 Jun 15 13:37 california_housing_train.csv\n-rw-r--r-- 1 root root 18289443 Jun 15 13:37 mnist_test.csv\n-rw-r--r-- 1 root root 36523880 Jun 15 13:37 mnist_train_small.csv\n-rwxr-xr-x 1 root root 930 Jan 1 2000 README.md\n" ], [ "!ls -l ./", "total 20\ndrwxr-xr-x 1 root root 4096 Jun 15 13:37 sample_data\n-rw-r--r-- 1 root root 15021 Jul 2 05:28 Wholesale_customers_data.csv\n" ], [ "!ls -l ./Wholesale_customers_data.csv", "-rw-r--r-- 1 root root 15021 Jul 2 05:28 ./Wholesale_customers_data.csv\n" ], [ "import pandas as pd\ndf = pd.read_csv('./Wholesale_customers_data.csv')\ndf.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 440 entries, 0 to 439\nData columns (total 8 columns):\n # Column Non-Null Count Dtype\n--- ------ -------------- -----\n 0 Channel 440 non-null int64\n 1 Region 440 non-null int64\n 2 Fresh 440 non-null int64\n 3 Milk 440 non-null int64\n 4 Grocery 440 non-null int64\n 5 Frozen 440 non-null int64\n 6 Detergents_Paper 440 non-null int64\n 7 Delicassen 440 non-null int64\ndtypes: int64(8)\nmemory usage: 27.6 KB\n" ], [ "X = df.iloc[:,:]", "_____no_output_____" ], [ "X.shape", "_____no_output_____" ], [ "from sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()\nscaler.fit(X)\nX = scaler.transform(X)", "_____no_output_____" ] ], [ [ "# K-means 클러스터\n> max_iterint, default=300 /\n> Maximum number of iterations of the k-means algorithm for a single run.", "_____no_output_____" ] ], [ [ "from sklearn import cluster\nkmeans =cluster.KMeans(n_clusters=5)", "_____no_output_____" ], [ "kmeans.fit(X)", "_____no_output_____" ], [ "kmeans.labels_", "_____no_output_____" ] ], [ [ "#### 첫번재 라인은 무엇으로 label , 두번째 라인은 무엇으로 label 해줌.\n> 이친구들을 df에 label을 붙여줌.", "_____no_output_____" ] ], [ [ "df['label'] = kmeans.labels_", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "##### 보고서 작성에는 2차원으로 보는게 젤 좋음.시각화의 시점은 무조건 XY로", "_____no_output_____" ] ], [ [ "df.plot(kind='scatter', x='Grocery',y='Frozen',c='label', cmap='Set1', figsize=(10,10))", "_____no_output_____" ] ], [ [ "### 위의 그림에서 0~4는 레이블 컬럼에 들어가 있는 것!!\n마스크 방식은 `for문` + `if문`\n```python\nfor ...:\n if ...: \nfor ...:\n if (~(df['label'] ==0) | (df['label'] == 4)) : # 0도 아니고 4도 아니고\n\n```", "_____no_output_____" ] ], [ [ "dfx = df[(~(df['label'] ==0) | (df['label'] == 4))]\ndf.shape, dfx.shape", "_____no_output_____" ], [ "dfx.plot(kind='scatter', x='Grocery',y='Frozen',c='label', cmap='Set1', figsize=(7,7))", "_____no_output_____" ], [ "df.to_excel('./wholesale.xls')", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
d020fdc9f7e8f6b6a7b5309d973617e62cceaf2a
8,662
ipynb
Jupyter Notebook
docs/source/examples/Widget Basics.ipynb
akhand1111/ipywidgets
a6228df8a24079bd4f8b6c1645b31e1c00218535
[ "BSD-3-Clause" ]
1
2019-09-08T18:11:03.000Z
2019-09-08T18:11:03.000Z
docs/source/examples/Widget Basics.ipynb
akhand1111/ipywidgets
a6228df8a24079bd4f8b6c1645b31e1c00218535
[ "BSD-3-Clause" ]
3
2020-01-18T12:26:26.000Z
2020-01-20T13:17:32.000Z
docs/source/examples/Widget Basics.ipynb
akhand1111/ipywidgets
a6228df8a24079bd4f8b6c1645b31e1c00218535
[ "BSD-3-Clause" ]
1
2021-01-28T05:58:42.000Z
2021-01-28T05:58:42.000Z
20.673031
400
0.545255
[ [ [ "[Index](Index.ipynb) - [Next](Widget List.ipynb)", "_____no_output_____" ], [ "# Simple Widget Introduction", "_____no_output_____" ], [ "## What are widgets?", "_____no_output_____" ], [ "Widgets are eventful python objects that have a representation in the browser, often as a control like a slider, textbox, etc.", "_____no_output_____" ], [ "## What can they be used for?", "_____no_output_____" ], [ "You can use widgets to build **interactive GUIs** for your notebooks. \nYou can also use widgets to **synchronize stateful and stateless information** between Python and JavaScript.", "_____no_output_____" ], [ "## Using widgets ", "_____no_output_____" ], [ "To use the widget framework, you need to import `ipywidgets`.", "_____no_output_____" ] ], [ [ "import ipywidgets as widgets", "_____no_output_____" ] ], [ [ "### repr", "_____no_output_____" ], [ "Widgets have their own display `repr` which allows them to be displayed using IPython's display framework. Constructing and returning an `IntSlider` automatically displays the widget (as seen below). Widgets are displayed inside the output area below the code cell. Clearing cell output will also remove the widget.", "_____no_output_____" ] ], [ [ "widgets.IntSlider()", "_____no_output_____" ] ], [ [ "### display()", "_____no_output_____" ], [ "You can also explicitly display the widget using `display(...)`.", "_____no_output_____" ] ], [ [ "from IPython.display import display\nw = widgets.IntSlider()\ndisplay(w)", "_____no_output_____" ] ], [ [ "### Multiple display() calls", "_____no_output_____" ], [ "If you display the same widget twice, the displayed instances in the front-end will remain in sync with each other. Try dragging the slider below and watch the slider above.", "_____no_output_____" ] ], [ [ "display(w)", "_____no_output_____" ] ], [ [ "## Why does displaying the same widget twice work?", "_____no_output_____" ], [ "Widgets are represented in the back-end by a single object. Each time a widget is displayed, a new representation of that same object is created in the front-end. These representations are called views.\n\n![Kernel & front-end diagram](images/WidgetModelView.png)", "_____no_output_____" ], [ "### Closing widgets", "_____no_output_____" ], [ "You can close a widget by calling its `close()` method.", "_____no_output_____" ] ], [ [ "display(w)", "_____no_output_____" ], [ "w.close()", "_____no_output_____" ] ], [ [ "## Widget properties", "_____no_output_____" ], [ "All of the IPython widgets share a similar naming scheme. To read the value of a widget, you can query its `value` property.", "_____no_output_____" ] ], [ [ "w = widgets.IntSlider()\ndisplay(w)", "_____no_output_____" ], [ "w.value", "_____no_output_____" ] ], [ [ "Similarly, to set a widget's value, you can set its `value` property.", "_____no_output_____" ] ], [ [ "w.value = 100", "_____no_output_____" ] ], [ [ "### Keys", "_____no_output_____" ], [ "In addition to `value`, most widgets share `keys`, `description`, and `disabled`. 
To see the entire list of synchronized, stateful properties of any specific widget, you can query the `keys` property.", "_____no_output_____" ] ], [ [ "w.keys", "_____no_output_____" ] ], [ [ "### Shorthand for setting the initial values of widget properties", "_____no_output_____" ], [ "While creating a widget, you can set some or all of the initial values of that widget by defining them as keyword arguments in the widget's constructor (as seen below).", "_____no_output_____" ] ], [ [ "widgets.Text(value='Hello World!', disabled=True)", "_____no_output_____" ] ], [ [ "## Linking two similar widgets", "_____no_output_____" ], [ "If you need to display the same value two different ways, you'll have to use two different widgets. Instead of attempting to manually synchronize the values of the two widgets, you can use the `link` or `jslink` function to link two properties together (the difference between these is discussed in [Widget Events](Widget Events.ipynb)). Below, the values of two widgets are linked together.", "_____no_output_____" ] ], [ [ "a = widgets.FloatText()\nb = widgets.FloatSlider()\ndisplay(a,b)\n\nmylink = widgets.jslink((a, 'value'), (b, 'value'))", "_____no_output_____" ] ], [ [ "### Unlinking widgets", "_____no_output_____" ], [ "Unlinking the widgets is simple. All you have to do is call `.unlink` on the link object. Try changing one of the widgets above after unlinking to see that they can be independently changed.", "_____no_output_____" ] ], [ [ "# mylink.unlink()", "_____no_output_____" ] ], [ [ "[Index](Index.ipynb) - [Next](Widget List.ipynb)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
d0210559e23b9ee3cab2c9074aa8e96f45814670
1,648
ipynb
Jupyter Notebook
Baselines/mtsc_weasel_muse/.ipynb_checkpoints/Preprocess_weasel_muse-checkpoint.ipynb
JingweiZuo/SMATE
d3e847038d9b7fb2bc08b3720b93f80b934e538d
[ "MIT" ]
11
2021-04-21T08:32:21.000Z
2022-02-28T06:12:10.000Z
Baselines/mtsc_weasel_muse/.ipynb_checkpoints/Preprocess_weasel_muse-checkpoint.ipynb
SMATE2021/SMATE
d3e847038d9b7fb2bc08b3720b93f80b934e538d
[ "MIT" ]
1
2022-02-24T10:38:46.000Z
2022-02-24T10:38:46.000Z
Baselines/mtsc_weasel_muse/.ipynb_checkpoints/Preprocess_weasel_muse-checkpoint.ipynb
SMATE2021/SMATE
d3e847038d9b7fb2bc08b3720b93f80b934e538d
[ "MIT" ]
null
null
null
22.575342
121
0.510316
[ [ [ "Data format conversion for WEASEL_MUSE\n===\n\n\n---\nInput\n---\n\nTwo file types, each **data file** represents a single sample; the **label file** contains labels of all samples\n\n***Note:*** *both training and testing data should do the conversion*\n\n**data files**: \n- file name: \"sample_id.csv\"\n- file contents: L * D, L is the MTS length, D is the dimension size\n\n**label file**: \n- file name: \"meta_data.csv\"\n- file contents: L * 2, L is the MTS length, each row is like \"sample_id, label\"\n\n---\nOutput\n---\n\nA single file contains all samples and their labels: ***L * (3 + D)***\n\n\n\n- 1st col: sample_id\n- 2nd col: timestamps\n- 3rd col: label\n- after the 4th col: mts vector with D dimensions \n", "_____no_output_____" ] ], [ [ "import numpy as np\n", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code" ] ]
d021093cdd43dfa1411099f7c9b3cfe8e5f5dd11
16,994
ipynb
Jupyter Notebook
08_AfterAcceptance/06_KNN/knn.ipynb
yazdipour/DM17
bcde44df990938723c843801c1333cbcf4e5bd76
[ "MIT" ]
2
2018-04-25T09:44:31.000Z
2018-07-28T20:20:39.000Z
08_AfterAcceptance/06_KNN/knn.ipynb
yazdipour/DM17
bcde44df990938723c843801c1333cbcf4e5bd76
[ "MIT" ]
1
2019-07-24T21:16:18.000Z
2020-03-11T11:43:32.000Z
08_AfterAcceptance/06_KNN/knn.ipynb
yazdipour/DM17
bcde44df990938723c843801c1333cbcf4e5bd76
[ "MIT" ]
null
null
null
31.354244
386
0.498764
[ [ [ "import numpy as np\nfrom pandas import Series, DataFrame\nimport pandas as pd\nfrom sklearn import preprocessing, tree\nfrom sklearn.metrics import accuracy_score\n# from sklearn.model_selection import train_test_split, KFold\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.cross_validation import KFold", "C:\\ProgramData\\Anaconda2\\lib\\site-packages\\sklearn\\cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.\n \"This module will be removed in 0.20.\", DeprecationWarning)\n" ], [ "df=pd.read_json('../01_Preprocessing/First.json').sort_index()", "_____no_output_____" ], [ "df.head(2)", "_____no_output_____" ], [ "def mydist(x, y):\n return np.sum((x-y)**2)\ndef jaccard(a, b):\n intersection = float(len(set(a) & set(b)))\n union = float(len(set(a) | set(b)))\n return 1.0 - (intersection/union)", "_____no_output_____" ], [ "# http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html\n \ndist=['braycurtis','canberra','chebyshev','cityblock','correlation','cosine','euclidean','dice','hamming','jaccard','kulsinski','matching','rogerstanimoto','russellrao','sokalsneath','yule']\nalgorithm=['ball_tree', 'kd_tree', 'brute']", "_____no_output_____" ], [ "len(dist)", "_____no_output_____" ] ], [ [ "## On country (only MS)", "_____no_output_____" ] ], [ [ "df.fund= df.fund=='TRUE'\ndf.gre= df.gre=='TRUE'\ndf.highLevelBachUni= df.highLevelBachUni=='TRUE'\ndf.highLevelMasterUni= df.highLevelMasterUni=='TRUE'\ndf.uniRank.fillna(294,inplace=True)", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ], [ "oldDf=df.copy()\ndf=df[['countryCoded','degreeCoded','engCoded', 'fieldGroup','fund','gpaBachelors','gre', 'highLevelBachUni', 'paper','uniRank']]\ndf=df[df.degreeCoded==0]\ndel df['degreeCoded']\nbestAvg=[]\nfor alg in algorithm:\n for dis in dist:\n k_fold = KFold(n=len(df), n_folds=5)\n scores = []\n try:\n clf = KNeighborsClassifier(n_neighbors=3, weights='distance',algorithm=alg, metric=dis)\n except Exception as err:\n# print(alg,dis,'err')\n continue\n for train_indices, test_indices in k_fold:\n xtr = df.iloc[train_indices,(df.columns != 'countryCoded')]\n ytr = df.iloc[train_indices]['countryCoded']\n xte = df.iloc[test_indices, (df.columns != 'countryCoded')]\n yte = df.iloc[test_indices]['countryCoded']\n clf.fit(xtr, ytr)\n ypred = clf.predict(xte)\n acc=accuracy_score(list(yte),list(ypred))\n scores.append(acc*100)\n print(alg,dis,np.average(scores))\n bestAvg.append(np.average(scores))\nprint('>>>>>>>Best: ',np.max(bestAvg))", "('ball_tree', 'braycurtis', 55.507529507529512)\n('ball_tree', 'canberra', 44.839072039072036)\n('ball_tree', 'chebyshev', 53.738054538054541)\n('ball_tree', 'cityblock', 55.735775335775337)\n('ball_tree', 'euclidean', 55.793080993080991)\n('ball_tree', 'dice', 46.14798534798534)\n('ball_tree', 'hamming', 47.408547008547011)\n('ball_tree', 'jaccard', 46.14798534798534)\n('ball_tree', 'kulsinski', 46.319413919413918)\n('ball_tree', 'matching', 46.14798534798534)\n('ball_tree', 'rogerstanimoto', 46.14798534798534)\n('ball_tree', 'russellrao', 48.896052096052095)\n('ball_tree', 'sokalsneath', 46.14798534798534)\n('kd_tree', 'chebyshev', 53.909483109483105)\n('kd_tree', 'cityblock', 55.67863247863248)\n('kd_tree', 'euclidean', 
55.793080993080991)\n('brute', 'braycurtis', 55.393080993081)\n('brute', 'canberra', 45.066829466829468)\n('brute', 'chebyshev', 53.738217338217339)\n('brute', 'cityblock', 55.735449735449741)\n('brute', 'correlation', 42.444444444444443)\n('brute', 'cosine', 44.841025641025645)\n('brute', 'euclidean', 55.792755392755396)\n" ] ], [ [ "## On Fund (only MS)", "_____no_output_____" ] ], [ [ "bestAvg=[]\nfor alg in algorithm:\n for dis in dist:\n k_fold = KFold(n=len(df), n_folds=5)\n scores = []\n try:\n clf = KNeighborsClassifier(n_neighbors=3, weights='distance',algorithm=alg, metric=dis)\n except Exception as err:\n continue\n for train_indices, test_indices in k_fold:\n xtr = df.iloc[train_indices, (df.columns != 'fund')]\n ytr = df.iloc[train_indices]['fund']\n xte = df.iloc[test_indices, (df.columns != 'fund')]\n yte = df.iloc[test_indices]['fund']\n clf.fit(xtr, ytr)\n ypred = clf.predict(xte)\n acc=accuracy_score(list(yte),list(ypred))\n score=acc*100\n scores.append(score)\n if (len(bestAvg)>1) :\n if(score > np.max(bestAvg)) :\n bestClf=clf\n bestAvg.append(np.average(scores))\n print (alg,dis,np.average(scores))\nprint('>>>>>>>Best: ',np.max(bestAvg))", "('ball_tree', 'braycurtis', 76.495400895400905)\n('ball_tree', 'canberra', 75.354008954008961)\n('ball_tree', 'chebyshev', 75.584533984533977)\n('ball_tree', 'cityblock', 77.293935693935694)\n('ball_tree', 'euclidean', 76.496703296703302)\n('ball_tree', 'dice', 74.383557183557173)\n('ball_tree', 'hamming', 76.152706552706562)\n('ball_tree', 'jaccard', 74.383557183557173)\n('ball_tree', 'kulsinski', 74.497842897842901)\n('ball_tree', 'matching', 74.383557183557173)\n('ball_tree', 'rogerstanimoto', 74.383557183557173)\n('ball_tree', 'russellrao', 75.409361009360993)\n('ball_tree', 'sokalsneath', 74.383557183557173)\n('kd_tree', 'chebyshev', 75.641676841676855)\n('kd_tree', 'cityblock', 77.293935693935694)\n('kd_tree', 'euclidean', 76.553683353683354)\n('brute', 'braycurtis', 76.495563695563703)\n('brute', 'canberra', 75.411151811151825)\n('brute', 'chebyshev', 75.754008954008967)\n('brute', 'cityblock', 77.008547008547012)\n('brute', 'correlation', 73.528367928367928)\n('brute', 'cosine', 72.901912901912894)\n('brute', 'euclidean', 76.61066341066342)\n('brute', 'dice', 73.983882783882777)\n('brute', 'hamming', 76.0954008954009)\n('brute', 'jaccard', 73.983882783882777)\n('brute', 'kulsinski', 74.098168498168491)\n('brute', 'matching', 73.983882783882777)\n('brute', 'rogerstanimoto', 73.983882783882777)\n('brute', 'russellrao', 72.670411070411063)\n('brute', 'sokalsneath', 73.983882783882777)\n('brute', 'yule', 58.807651607651607)\n('>>>>>>>Best: ', 77.293935693935694)\n" ] ], [ [ "### Best : ('kd_tree', 'cityblock', 77.692144892144896)", "_____no_output_____" ] ], [ [ "me=[0,2,0,2.5,False,False,1.5,400]", "_____no_output_____" ], [ "n=bestClf.kneighbors([me])\nn", "_____no_output_____" ], [ "for i in n[1]:\n print(xtr.iloc[i])", " countryCoded engCoded fieldGroup gpaBachelors gre highLevelBachUni \\\n664 0 2 0 2.5 False False \n767 0 2 0 3.0 False False \n911 0 2 0 3.0 False False \n\n paper uniRank \n664 1.000000 72.0 \n767 1.333333 294.0 \n911 3.000000 294.0 \n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
d02110aa42fb891c670b0a513db0e717f63b2f08
166,371
ipynb
Jupyter Notebook
deep_learning/models/combine_processes/Data_Cleaning_NLP.ipynb
Claudio9701/mailbot
affa37f027ff09eb04bb5b65ed1a7eef9d43eed7
[ "MIT" ]
null
null
null
deep_learning/models/combine_processes/Data_Cleaning_NLP.ipynb
Claudio9701/mailbot
affa37f027ff09eb04bb5b65ed1a7eef9d43eed7
[ "MIT" ]
null
null
null
deep_learning/models/combine_processes/Data_Cleaning_NLP.ipynb
Claudio9701/mailbot
affa37f027ff09eb04bb5b65ed1a7eef9d43eed7
[ "MIT" ]
null
null
null
137.724338
31,908
0.865884
[ [ [ "import warnings\nimport collections\nimport os\nimport pandas as pd # manage data\nimport pickle as pk # load and save python objects\nimport numpy as np # matrix operations\nimport matplotlib.pyplot as plt\nimport unidecode # Deal with codifications\nimport regex # use regular expresions\nfrom email.header import Header, decode_header # e-mails helper functions\nfrom nltk.tokenize import word_tokenize # Natural Language Toolkit\nfrom selectolax.parser import HTMLParser # Optimized html library\nfrom tqdm import tqdm # For loops decorator", "_____no_output_____" ], [ "warnings.filterwarnings('ignore')", "_____no_output_____" ], [ "%matplotlib inline", "_____no_output_____" ], [ "# Helper functions\ndef get_text_from _html(html):\n '''\n Extracted from https://rushter.com/blog/python-fast-html-parser/ to eliminate html tags from email body\n \n Parameters\n html: html file\n Return\n text: html text content\n '''\n tree = HTMLParser(html)\n\n if tree.body is None:\n return html\n\n for tag in tree.css('script'):\n tag.decompose()\n for tag in tree.css('style'):\n tag.decompose()\n\n text = unidecode.unidecode(tree.body.text(separator=' '))\n \n return text\n\ndef clean_mail_subject(mail_header):\n '''\n Clean mail subject\n Parameters\n mail_header: email.Header object or string or None.\n Return\n decoded_header: string containing mail subject\n '''\n \n if type(mail_header) == Header:\n decoded_header = decode_header(mail_header.encode())[0][0].decode('utf-8')\n else:\n decoded_header = mail_header\n \n if decoded_header[:5] == 'Fwd: ':\n decoded_header = decoded_header[5:]\n elif decoded_header[:4] == 'Re: ':\n decoded_header = decoded_header[4:]\n \n decoded_header = re.sub(r\"[^a-zA-Z?.!,¿]+\", \" \", decoded_header)\n \n return decoded_header\n\ndef clean_mail_body(str_):\n '''Clean mail body'''\n str_ = str_.replace('\\t', '')\n new_str = regex.split(r'(\\bEl \\d{1,2} de [a-z]+ de 2\\d{3},)', str_)[0]\n new_str = regex.split(r'(\\bOn \\d{1,2} de [a-z]+. de 2\\d{3},)', new_str)[0]\n \n if len(new_str) > 0:\n return new_str\n else:\n return str_\n \ndef filter_firm(str_):\n '''Clean mail firm'''\n new_str = regex.split(r'(Adela C. Santillana Figueroa)|(Claudia Alarcon Burga)|(Miguel Koch Zavaleta)|(Rocio Villavicencio Ripas)|(Maria Alejandra Alba S.)|(Fiorella.)|(Fiorella Romero Cardenas)|(Directora de Servicios Academicos y Registro)|(Asistente Administrativ[a|o])|(Servicios Academicos y Registro)|(FORMAMOS LIDERES RESPONSABLES PARA EL MUNDO)|(up.edu.pe)|(Jr. Sanchez Cerro 2141 Jesus Maria, Lima 11)|(T. 511-219-0100 Ext. 
[0-9]{4})|([a-zA-z0-9-.]+@up.edu.pe)|(Pensemos en el AMBIENTE antes de imprimir este mensaje)', str_)[0]\n \n if len(new_str) > 0:\n return new_str\n else:\n return str_", "_____no_output_____" ], [ "# # Define output dir\n# outDir = 'output/'\n# actualDir = 'data_cleaning_nlp'\n\n# print()\n# if not(actualDir in os.listdir(outDir)):\n# os.mkdir(os.path.join(outDir, actualDir))\n# print('output dir created')\n# else:\n# print('output dir already created')\n# print()", "_____no_output_____" ], [ "ROOT = \"~/Documents/TF_chatbot\"", "_____no_output_____" ], [ "input_file = \"../../../text_data/mails.txt\"\n\nwith open(input_file, \"r\") as input_f:\n for lines in input_f:\n s = lines.readline()\n# for mail in mails:\n# for item in mail:\n# output.write(str(item) + '\\t')\n# output.write('\\n')", "_____no_output_____" ], [ "s", "_____no_output_____" ], [ "# Load complete email data\nmails = pk.load(open(\"../../../text_data/mails.txt\", 'rb'))\ndf = pd.DataFrame(mails, columns=['id','subject','date','sender','recipient','body'])", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 107484 entries, 0 to 107483\nData columns (total 6 columns):\nid 107484 non-null object\nsubject 107384 non-null object\ndate 107484 non-null object\nsender 107484 non-null object\nrecipient 107339 non-null object\nbody 107484 non-null object\ndtypes: object(6)\nmemory usage: 4.9+ MB\n" ], [ "print()\nprint(df.isna().sum())\nprint()", "\nid 0\nsubject 100\ndate 0\nsender 0\nrecipient 145\nbody 0\ndtype: int64\n\n" ], [ "df['date'] = pd.to_datetime(df['date'], infer_datetime_format=True) # transform dates to datetime format", "_____no_output_____" ], [ "df['date'].describe()", "_____no_output_____" ] ], [ [ "#### Periodo de los datos: 6 ciclos y medio (+ 3 ciclos-0)", "_____no_output_____" ] ], [ [ "df['date'].hist(bins=51, figsize=(10,5))\nplt.xlim(df['date'].min(), df['date'].max())\nplt.title('Histograma de la Fecha de Envío del Mensaje')\nplt.ylabel('Número de Mensajes')\nplt.xlabel('Año-Mes')\nplt.show()\n#plt.savefig('hist_fecha.svg', format='svg')", "_____no_output_____" ], [ "df['month'] = df['date'].dt.month\ndf['dayofweek'] = df['date'].dt.dayofweek\n\n# Plot for month and day of week variables\nday_value_counts = (df['dayofweek'].value_counts()/df.shape[0])*100\nmonth_value_counts = (df['month'].value_counts()/df.shape[0])*100\n\nmonthnames_ES = ['Enero','Febrero','Marzo','Abril','Mayo','Junio','Julio','Agosto','Septiembre','Octubre','Noviembre','Diciembre']\ndaynames_ES = ['Lunes','Martes','Miércoles','Jueves','Viernes','Sábado','Domingo']\n\nfig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, figsize=(10,10))\n\nmonth_value_counts.plot(ax=ax1, rot=0, kind='bar', title='Mensajes Enviados según el Mes (%)', color='b')\nax1.set_xticklabels(monthnames_ES)\nax1.set_ylabel('% de Mensajes')\nax1.set_xlabel('Mes')\n\nday_value_counts.plot(ax=ax2, rot=0, kind='bar', title='Mensajes Enviados según el Día de la Semana (%)', color='b')\nax2.set_xticklabels(daynames_ES)\nax2.set_ylabel('% de Mensajes')\nax2.set_xlabel('Día de la Semana')\n\nplt.tight_layout()\nplt.show()\n#fig.savefig('grafico_barras_dia_mes.svg', format='svg')\n#fig.savefig('grafico_barras_dia_mes.png', format='png')", "_____no_output_____" ], [ "%%time\ndf['body'] = df['body'].apply(get_text_selectolax) # filter text from hltm emails", "CPU times: user 45 s, sys: 2.35 s, total: 47.3 s\nWall time: 47.3 s\n" ], [ "# Extract sender and recipient email only\ndf['sender_email'] = 
df.sender.str.extract(\"([a-zA-z0-9-.]+@[a-zA-z0-9-.]+)\")[0].str.lower()\ndf['recipient_email'] = df.recipient.str.extract(\"([a-zA-z0-9-.]+@[a-zA-z0-9-.]+)\")[0].str.lower()", "_____no_output_____" ], [ "print()\nprint(df.isna().sum())\nprint()", "\nid 0\nsubject 100\ndate 0\nsender 0\nrecipient 145\nbody 0\nmonth 0\ndayofweek 0\nsender_email 1\nrecipient_email 422\ndtype: int64\n\n" ], [ "# eliminate 'no reply' and 'automatic' msgs\ndf_noreply = df[~df.sender.str.contains('noreply@google.com').fillna(False)]\ndf_noautom = df_noreply[~df_noreply.subject.str.contains('Respuesta automática').fillna(False)]\n\n# Separate msgs by type of sender\nsend_by_alumns = df_noautom[df_noautom.sender.str.contains('@alum.up.edu.pe').fillna(False)]\nsend_by_no_alumns = df_noautom[~df_noautom.sender.str.contains('@alum.up.edu.pe').fillna(False)]\nsend_by_internals = df_noautom[df_noautom.sender.str.contains('@up.edu.pe').fillna(False)]\n\nprint('# msgs send by alumns:', len(send_by_alumns))\nprint('# of alumns that send msgs:', len(send_by_alumns.sender_email.unique()))", "# msgs send by alumns: 14021\n# of alumns that send msgs: 3781\n" ], [ "len(send_by_internals)", "_____no_output_____" ], [ "# Clean mails subject\nsend_by_internals['subject'] = send_by_internals['subject'].apply(filterResponses)", "_____no_output_____" ] ], [ [ "## Email pairing algorithm\n\n1. Extrae los mensajes enviados por alumno y los mensajes enviados por usuarios internos a cada alumno, respectivamente\n2. Extrae el asunto de cada mensaje del punto 1. Si el asunto del mensaje es igual al asunto enviado en el mensaje anterior aumenta el contador de mensajes con el mismo asunto.\n3. Utilizando en contador de mensajes con el mismo asunto, busca el asunto extraido en el punto 2 entre los emails enviados por usuarios internos a ese alumno.\n4. 
Genera una lista con el asunto, los datos del mail enviado por el alumno y la respuesta que recibió.", "_____no_output_____" ] ], [ [ "# Separate mails sended to each alumn\ndfs = [send_by_internals[send_by_internals.recipient_email == alumn] for alumn in send_by_alumns.sender_email.unique()]", "_____no_output_____" ], [ "unique_alumns = send_by_alumns.sender_email.unique()\nn = len(unique_alumns)\n\n# Count causes to not being able to process a text\nresp_date_bigger_than_input_date = 0\nresponses_with_same_subject_lower_than_counter = 0\nsubject_equal_none = 0\nn_obs_less_than_0 = 0\nrepited_id = 0\n\nfor i, alumn in tqdm(enumerate(unique_alumns), total=n):\n if len(dfs[i]) > 0:\n temp_ = send_by_alumns[send_by_alumns.sender_email == alumn]\n indexes = temp_.index\n counter_subject = 0\n subject_pre = 'initial_value'\n \n for index in indexes:\n subject = filterResponses(temp_.subject[index])\n if subject != None:\n if subject_pre == subject:\n counter_subject += 1\n else:\n counter_subject = 0\n subject_pre = subject\n \n if len(dfs[i][dfs[i]['subject'] == subject]) > counter_subject: \n input_date = temp_.loc[index, 'date']\n resp_date = dfs[i]['date'][dfs[i]['subject'] == subject].iloc[counter_subject]\n if input_date < resp_date:\n input_id, sender, recipient, input_body = temp_.loc[index, ['id','sender','recipient','body']]\n resp_id, resp_body = dfs[i][['id','body']][dfs[i]['subject'] == subject].iloc[counter_subject]\n pair = np.array([[subject, sender, recipient, input_id, input_date, input_body, resp_id, resp_date, resp_body]],dtype=object)\n if i == 0:\n pairs = np.array(pair)\n elif all([not(pair[0,3] in pairs[:,3]), not(pair[0,6] in pairs[:,6])]):\n pairs = np.append(pairs, pair, axis=0)\n else:\n repited_id += 1\n else:\n resp_date_bigger_than_input_date += 1\n else:\n responses_with_same_subject_lower_than_counter += 1\n else:\n subject_equal_none += 1\n else:\n n_obs_less_than_0 += 1", "100%|██████████| 3781/3781 [00:57<00:00, 65.81it/s]\n" ] ], [ [ "# Format data", "_____no_output_____" ] ], [ [ "total_unpaired_mails = repited_id+resp_date_bigger_than_input_date+responses_with_same_subject_lower_than_counter+subject_equal_none+n_obs_less_than_0", "_____no_output_____" ], [ "print()\nprint('Filtros del algoritmo de emparejamiento')\nprint('resp_date_bigger_than_input_date:',resp_date_bigger_than_input_date)\nprint('subject_equal_none:',subject_equal_none)\nprint('repited_id:', repited_id)\nprint('no hay motivo pero no lo empareje:',len(send_by_alumns) - total_unpaired_mails - len(pairs) )\nprint('-'*50)\nprint('motivos de sar:')\nprint('el ultimo mensaje de la cadena del asunto no tuvo respuesta:',responses_with_same_subject_lower_than_counter)\nprint('no le respondieron ni el primer mensaje:',n_obs_less_than_0)\nprint('-'*50)\nprint('# of mails in total:', len(mails))\nprint('# msgs send by alumns:', len(send_by_alumns))\nprint('# of paired emails:', len(pairs))\nprint('% de paired mails:', round((len(pairs)/len(send_by_alumns))*100,2),'%')\nprint('total of unpaired mails: ', total_unpaired_mails)\nprint('% de unpaired mails:', round((total_unpaired_mails/len(send_by_alumns))*100,2),'%')\nprint()", "\nFiltros del algoritmo de emparejamiento\nresp_date_bigger_than_input_date: 711\nsubject_equal_none: 49\nrepited_id: 28\nno hay motivo pero no lo empareje: 180\n--------------------------------------------------\nmotivos de sar:\nel ultimo mensaje de la cadena del asunto no tuvo respuesta: 3906\nno le respondieron ni el primer mensaje: 
398\n--------------------------------------------------\n# of mails in total: 107484\n# msgs send by alumns: 14021\n# of paired emails: 8749\n% de paired mails: 62.4 %\ntotal of unpaired mails: 5092\n% de unpaired mails: 36.32 %\n\n" ], [ "# Load paired mails in a DataFrame\ncolumns_names = ['subject', 'sender', 'recipient', 'input_id', 'input_date', 'input_body', 'resp_id', 'resp_date', 'resp_body']\npaired_mails = pd.DataFrame(data=pairs, columns=columns_names)", "_____no_output_____" ], [ "paired_mails['input_date'] = pd.to_datetime(paired_mails['input_date'], infer_datetime_format=True)\npaired_mails['resp_date'] = pd.to_datetime(paired_mails['resp_date'], infer_datetime_format=True)", "_____no_output_____" ], [ "paired_mails['input_month'] = paired_mails['input_date'].dt.month\npaired_mails['input_dayofweek'] = paired_mails['input_date'].dt.dayofweek\n\n# Plot for month and day of week variables\nday_value_counts = (paired_mails['input_dayofweek'].value_counts()/df.shape[0])*100\nmonth_value_counts = (paired_mails['input_month'].value_counts()/df.shape[0])*100\n\nmonthnames_ES = ['Enero','Febrero','Marzo','Abril','Mayo','Junio','Julio','Agosto','Septiembre','Octubre','Noviembre','Diciembre']\ndaynames_ES = ['Lunes','Martes','Miércoles','Jueves','Viernes','Sábado','Domingo']\n\nfig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, figsize=(5,5))\n\nmonth_value_counts.plot(ax=ax1, rot=45, kind='bar', title='Mensajes Enviados según el Mes (%)', color='b')\nax1.set_xticklabels(monthnames_ES)\nax1.set_ylabel('% de Mensajes')\nax1.set_xlabel('Mes')\n\nday_value_counts.plot(ax=ax2, rot=45, kind='bar', title='Mensajes Enviados según el Día de la Semana (%)', color='b')\nax2.set_xticklabels(daynames_ES)\nax2.set_ylabel('% de Mensajes')\nax2.set_xlabel('Día de la Semana')\n\nplt.tight_layout()\nplt.show()\n#fig.savefig('grafico_barras_dia_mes.svg', format='svg')\n#fig.savefig('grafico_barras_dia_mes.png', format='png')", "_____no_output_____" ], [ "paired_mails['input_date'].hist(bins=51, figsize=(10*1.5,5*1.5), color='blue')\n\nplt.xlim(df['date'].min(), df['date'].max())\nplt.title('Histograma de la Fecha de Envío del Mensaje de Alumnos',fontsize=20)\nplt.ylabel('Número de Mensajes',fontsize=15)\nplt.xlabel('Año-Mes',fontsize=15)\nplt.yticks(fontsize=12.5)\nplt.xticks(fontsize=12.5)\nplt.savefig('hist_fecha_inputs.svg', dpi=300, format='svg')\nplt.show()", "_____no_output_____" ], [ "fig, (ax1, ax2) = plt.subplots(1,2, figsize=(15*1.25,5*1.25))\n\nfor historyDir in historyDirs:\n params = historyDir.replace('.pk','').split('_')[-4:]\n try:\n history = pickle.load(open(historyDir,'rb'))\n ax1.plot(range(len(history['loss'])), history['loss'], linewidth=5)\n ax1.grid(True)\n ax1.set_ylabel('Entropía Cruzada (Error)',fontsize=20)\n ax1.set_xlabel('Época',fontsize=20)\n ax1.set_title('Entrenamiento',fontsize=20)\n ax1.set_xlim(-0.5, 100)\n ax1.set\n ax2.plot(range(len(history['val_loss'])), history['val_loss'], linewidth=5)\n ax2.grid(True)\n ax2.set_xlabel('Época',fontsize=20)\n ax2.set_title('Validación',fontsize=20)\n plt.suptitle('Curvas de Error',fontsize=25)\n ax2.set_xlim(-0.5, 100)\n except:\n pass\nfig.savefig('curvas_error.svg', dpi=300, format='svg')", "_____no_output_____" ], [ "paired_mails['resp_date'].hist(bins=51, figsize=(10,5), color='blue')\nplt.xlim(df['date'].min(), df['date'].max())\nplt.title('Histograma de la Fecha de Envío del Mensaje hacia Alumnos')\nplt.ylabel('Número de Mensajes')\nplt.xlabel('Año-Mes')\nplt.show()\n#fig.savefig('hist_fecha_resps.svg', format='svg')", 
"_____no_output_____" ], [ "# Create features to detect possible errors\npaired_mails['resp_time'] = paired_mails['resp_date'] - paired_mails['input_date']\npaired_mails['input_body_len'] = paired_mails['input_body'].apply(len)\npaired_mails['resp_body_len'] = paired_mails['resp_body'].apply(len)", "_____no_output_____" ], [ "# Calculate input messages lenghts\ninput_len_stats = paired_mails['input_body_len'].describe([0.01, 0.05, 0.1, 0.25, 0.5, 0.75, 0.8, 0.85, 0.9, 0.98, 0.99]).round()\nprint()\nprint(input_len_stats)\nprint()", "\ncount 8749.0\nmean 1551.0\nstd 62454.0\nmin 0.0\n1% 75.0\n5% 122.0\n10% 153.0\n25% 225.0\n50% 353.0\n75% 618.0\n80% 742.0\n85% 982.0\n90% 1361.0\n98% 3187.0\n99% 4055.0\nmax 4638334.0\nName: input_body_len, dtype: float64\n\n" ], [ "# Calculate response messages lenghts\nresp_len_stats = paired_mails['resp_body_len'].describe([0.05, 0.1, 0.25, 0.5, 0.75, 0.8, 0.85, 0.9, 0.98, 0.99]).round()\nprint()\nprint(resp_len_stats)\nprint()", "\ncount 8749.0\nmean 1406.0\nstd 1029.0\nmin 12.0\n5% 608.0\n10% 701.0\n25% 845.0\n50% 1076.0\n75% 1615.0\n80% 1826.0\n85% 2053.0\n90% 2427.0\n98% 4592.0\n99% 5584.0\nmax 16156.0\nName: resp_body_len, dtype: float64\n\n" ], [ "# Response time analysis\nresp_time_stats = paired_mails['resp_time'].describe([0.05, 0.1, 0.25, 0.5, 0.75, 0.8, 0.85, 0.9, 0.98, 0.99])\nprint()\nprint(resp_time_stats)\nprint()", "\ncount 8749\nmean 3 days 11:04:02.944565\nstd 26 days 00:35:37.054634\nmin 0 days 00:00:25\n5% 0 days 00:03:57\n10% 0 days 00:07:04\n25% 0 days 00:27:33\n50% 0 days 05:49:28\n75% 1 days 01:20:34\n80% 1 days 20:49:16\n85% 2 days 17:37:08.800000\n90% 4 days 05:23:37.799999\n98% 20 days 19:11:46.879999\n99% 36 days 14:59:36.480000\nmax 982 days 18:54:12\nName: resp_time, dtype: object\n\n" ], [ "# Filter errors using response time\npaired_mails = paired_mails[paired_mails['resp_time'] <= paired_mails['resp_time'].sort_values().iloc[-65]]", "_____no_output_____" ], [ "# Filter errors using messages body lenghts\npaired_mails = paired_mails[paired_mails['input_body_len'] <= paired_mails['input_body_len'].sort_values().iloc[-3]]\n# not errors caught using resp_body_len", "_____no_output_____" ], [ "# Response time analysis\nresp_time_stats = paired_mails['resp_time'].describe([0.05, 0.1, 0.25, 0.5, 0.75, 0.8, 0.85, 0.9, 0.98, 0.99])\nprint()\nprint(resp_time_stats)\nprint()", "\ncount 8683\nmean 1 days 16:19:46.557065\nstd 4 days 13:22:09.718730\nmin 0 days 00:00:25\n5% 0 days 00:03:55\n10% 0 days 00:06:59.200000\n25% 0 days 00:26:52.500000\n50% 0 days 05:31:06\n75% 1 days 00:37:58\n80% 1 days 19:33:16\n85% 2 days 16:04:25.199999\n90% 3 days 22:25:13.800000\n98% 16 days 22:32:56.960000\n99% 23 days 17:43:41.480000\nmax 65 days 22:04:33\nName: resp_time, dtype: object\n\n" ], [ "paired_mails['input_body'] = paired_mails['input_body'].apply(filterMail)\npaired_mails['resp_body'] = paired_mails['resp_body'].apply(filterMail)", "_____no_output_____" ], [ "paired_mails['resp_body'] = paired_mails['resp_body'].apply(filterFirm)", "_____no_output_____" ], [ "sentence_pairs = paired_mails[['input_body','resp_body']]", "_____no_output_____" ], [ "sentence_pairs.to_csv('output/data_cleaning_nlp/q_and_a.txt', sep='\\t', index=False, header=False)", "_____no_output_____" ], [ "paired_mails['input_body'] = paired_mails['input_body'].apply(lambda x: regex.sub(pattern='[`<@!*>-]', repl='', string=x))\npaired_mails['resp_body'] = paired_mails['resp_body'].apply(lambda x: regex.sub(pattern='[`<@!*>-]', repl='', string=x))", "_____no_output_____" ], [ 
"paired_mails.to_csv('output/data_cleaning_nlp/paired_emails.csv', encoding='utf-8', index=False)", "_____no_output_____" ] ], [ [ "## NLP", "_____no_output_____" ] ], [ [ "## Tokenization using NLTK\n# Define input (x) and target (y) sequences variables\nx = [word_tokenize(msg, language='spanish') for msg in paired_mails['input_body'].values]\ny = [word_tokenize(msg, language='spanish') for msg in paired_mails['resp_body'].values]", "_____no_output_____" ], [ "# Variables to store lenghts \nhist_len_inp = []\nhist_len_out = []\n\nmaxlen_inp = 0\nmaxlen_out = 0\n\n# Define word counter\nword_freqs_inp = collections.Counter()\nword_freqs_out = collections.Counter()\nnum_recs = 0\n\nfor inp, out in zip(x, y):\n # Get input and target sequence lenght\n hist_len_inp.append(len(inp))\n hist_len_out.append(len(out))\n \n # Calculate max sequence lenght\n if len(inp) > maxlen_inp: maxlen_inp = len(inp)\n if len(out) > maxlen_out: maxlen_out = len(out)\n \n # Count unique words\n for words in inp:\n word_freqs_inp[words] += 1\n \n for words in out:\n word_freqs_out[words] += 1\n\n num_recs += 1\n\nprint()\nprint(\"maxlen input:\", maxlen_inp)\nprint(\"maxlen output:\", maxlen_out)\nprint(\"features (words) - input:\", len(word_freqs_inp))\nprint(\"features (words) - output:\", len(word_freqs_out))\nprint(\"number of records:\", num_recs)\nprint()", "_____no_output_____" ], [ "plt.hist(hist_len_inp, bins =100)\nplt.xlim((0,850))\nplt.xticks(range(0,800,100))\nplt.title('input_len')\nplt.show()", "_____no_output_____" ], [ "plt.hist(hist_len_out, bins=100)\nplt.xlim((0,850))\nplt.xticks(range(0,800,100))\nplt.title('output_len')\nplt.show()", "_____no_output_____" ], [ "pk.dump(word_freqs_inp, open('output/data_cleaning_nlp/word_freqs_input.pk', 'wb'))\npk.dump(word_freqs_out, open('output/data_cleaning_nlp/word_freqs_output.pk', 'wb'))\npk.dump(x, open('output/data_cleaning_nlp/input_data.pk', 'wb'))\npk.dump(y, open('output/data_cleaning_nlp/target_data.pk', 'wb'))", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
d0211edcd40b7d9750abbdfb8861341442f657cc
565,492
ipynb
Jupyter Notebook
Run Project Models - Census Data.ipynb
SandyGuru/TeamFunFinalProject
0bcbdb32e7212423f9f94df489026041f00c8bbd
[ "Apache-2.0" ]
null
null
null
Run Project Models - Census Data.ipynb
SandyGuru/TeamFunFinalProject
0bcbdb32e7212423f9f94df489026041f00c8bbd
[ "Apache-2.0" ]
null
null
null
Run Project Models - Census Data.ipynb
SandyGuru/TeamFunFinalProject
0bcbdb32e7212423f9f94df489026041f00c8bbd
[ "Apache-2.0" ]
null
null
null
101.79874
44,012
0.737347
[ [ [ "from sklearn import *\nfrom sklearn import datasets\nfrom sklearn import linear_model\nfrom sklearn import metrics\nfrom sklearn import cross_validation\nfrom sklearn import tree\nfrom sklearn import neighbors\nfrom sklearn import svm\nfrom sklearn import ensemble\nfrom sklearn import cluster\nfrom sklearn import model_selection", "C:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.\n \"This module will be removed in 0.20.\", DeprecationWarning)\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\grid_search.py:42: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. This module will be removed in 0.20.\n DeprecationWarning)\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\learning_curve.py:22: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the functions are moved. This module will be removed in 0.20\n DeprecationWarning)\n" ], [ "import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns #for graphics and figure styling\nimport pandas as pd", "_____no_output_____" ], [ "data = pd.read_csv('adult.data.txt', sep=\", \", encoding='latin1', header=None)", "C:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support regex separators (separators > 1 char and different from '\\s+' are interpreted as regex); you can avoid this warning by specifying engine='python'.\n \"\"\"Entry point for launching an IPython kernel.\n" ], [ "data.columns = ['Age', 'Status', 'Weight', 'Degree', 'Education', 'Married', 'Occupation', 'Relationship', 'Race', 'Sex', 'Gain', 'Loss', 'Hours', 'Country', 'Income']", "_____no_output_____" ], [ "data.head()", "_____no_output_____" ], [ "data.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 32561 entries, 0 to 32560\nData columns (total 15 columns):\nAge 32561 non-null int64\nStatus 32561 non-null object\nWeight 32561 non-null int64\nDegree 32561 non-null object\nEducation 32561 non-null int64\nMarried 32561 non-null object\nOccupation 32561 non-null object\nRelationship 32561 non-null object\nRace 32561 non-null object\nSex 32561 non-null object\nGain 32561 non-null int64\nLoss 32561 non-null int64\nHours 32561 non-null int64\nCountry 32561 non-null object\nIncome 32561 non-null object\ndtypes: int64(6), object(9)\nmemory usage: 3.7+ MB\n" ], [ "from sklearn.preprocessing import LabelEncoder", "_____no_output_____" ], [ "data = data.apply(LabelEncoder().fit_transform)", "_____no_output_____" ], [ "dataIncomeColumn = data.Income", "_____no_output_____" ], [ "dataIncomeColumn.head()", "_____no_output_____" ], [ "data= data.drop('Income', axis=1)", "_____no_output_____" ], [ "from sklearn.preprocessing import StandardScaler\nscaler = StandardScaler().fit(data)", "_____no_output_____" ], [ "data", "_____no_output_____" ], [ "standardized_data = scaler.transform(data)", "_____no_output_____" ], [ "data_Test = pd.read_csv('adult.test.txt', sep=\", \", encoding='latin1', 
header=None)\ndata_Test.columns = ['Age', 'Status', 'Weight', 'Degree', 'Education', 'Married', 'Occupation', 'Relationship', 'Race', 'Sex', 'Gain', 'Loss', 'Hours', 'Country', 'Income']\nenc = LabelEncoder()\ndata_Test = data_Test.apply(LabelEncoder().fit_transform)\ndata_TestIncomeColumn = data_Test.Income\ndata_Test=data_Test.drop('Income', axis=1)", "C:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support regex separators (separators > 1 char and different from '\\s+' are interpreted as regex); you can avoid this warning by specifying engine='python'.\n \"\"\"Entry point for launching an IPython kernel.\n" ], [ "data_Test", "_____no_output_____" ], [ "data_TestIncomeColumn.head()", "_____no_output_____" ], [ "standardized_test_data = scaler.transform(data_Test)", "_____no_output_____" ], [ "standardized_test_data", "_____no_output_____" ], [ "a=0\nb=0\nfor col in data:\n for i in data[col].isnull():\n if i:\n a+=1\n b+=1\n print('Missing data in',col,'is',a/b*100,'%')\n a=0\n b=0\n##check for missing data \n##so now, we have standardized_data and standardized_test_data that we can run our models on", "Missing data in Age is 0.0 %\nMissing data in Status is 0.0 %\nMissing data in Weight is 0.0 %\nMissing data in Degree is 0.0 %\nMissing data in Education is 0.0 %\nMissing data in Married is 0.0 %\nMissing data in Occupation is 0.0 %\nMissing data in Relationship is 0.0 %\nMissing data in Race is 0.0 %\nMissing data in Sex is 0.0 %\nMissing data in Gain is 0.0 %\nMissing data in Loss is 0.0 %\nMissing data in Hours is 0.0 %\nMissing data in Country is 0.0 %\n" ], [ "from sklearn.ensemble import RandomForestClassifier\nfrom sklearn.datasets import make_classification\ncensusIDM = RandomForestClassifier(max_depth=3, random_state=0)\nfrom sklearn.feature_selection import RFE\nrfe = RFE(censusIDM, n_features_to_select=6)\nrfe.fit(standardized_data, dataIncomeColumn)", "_____no_output_____" ], [ "rfe.ranking_", "_____no_output_____" ], [ "predict_TestOutput=rfe.predict(standardized_test_data)\npredictOutput=rfe.predict(standardized_data)\n#standardized_data for the training", "_____no_output_____" ], [ "goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)\ngood=(predictOutput==dataIncomeColumn).sum();print(good)\n#good=(predictOutput==dataIncomeColumn).sum();good - for training error#", "13606\n27347\n" ], [ "badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)\nbad=(predictOutput!=dataIncomeColumn).sum();print(bad)", "2675\n5214\n" ], [ "good/(good+bad)*100", "_____no_output_____" ], [ "goodTest/(goodTest+badTest)*100", "_____no_output_____" ], [ "#Using the Random Forest Classifier on our Data, with depth 3.\ncensusIDM = RandomForestClassifier(max_depth=3, random_state=0)\nfrfe = RFE(censusIDM, n_features_to_select=3)\nfrfe.fit(standardized_data, dataIncomeColumn)\nprint(frfe.ranking_)\npredict_TestOutput=frfe.predict(standardized_test_data)\npredictOutput=frfe.predict(standardized_data)\ngoodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)\ngood=(predictOutput==dataIncomeColumn).sum();print(good)\nbadTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)\nbad=(predictOutput!=dataIncomeColumn).sum();print(bad)", "[ 3 9 12 5 1 2 8 1 11 6 1 7 4 10]\n13623\n27354\n2658\n5207\n" ], [ "#Using the Random Forest Classifier on our Data, with depth 7.\ncensusIDM = RandomForestClassifier(max_depth=7, 
random_state=0)\nfrfe = RFE(censusIDM, n_features_to_select=3)\nfrfe.fit(standardized_data, dataIncomeColumn)\nprint(frfe.ranking_)\npredict_TestOutput=frfe.predict(standardized_test_data)\npredictOutput=frfe.predict(standardized_data)\ngoodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)\ngood=(predictOutput==dataIncomeColumn).sum();print(good)\nbadTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)\nbad=(predictOutput!=dataIncomeColumn).sum();print(bad)", "[ 4 10 9 5 1 2 7 1 12 8 1 3 6 11]\n13670\n27656\n2611\n4905\n" ], [ "#Testing the Linear Regression Model on a large numer of different features to select to see if the accuracy changes significantly or not\nfrom sklearn.linear_model import LinearRegression\nbeerIDM = linear_model.LogisticRegression()\nrfe2 = RFE(beerIDM, n_features_to_select=4)\nrfe2.fit(standardized_data, dataIncomeColumn)\nprint(rfe2.ranking_)\npredict_TestOutput=rfe2.predict(standardized_test_data)\npredictOutput=rfe2.predict(standardized_data)\ngoodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)\ngood=(predictOutput==dataIncomeColumn).sum();print(good)\nbadTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)\nbad=(predictOutput!=dataIncomeColumn).sum();print(bad)\ngood/(good+bad)", "[ 1 10 8 7 1 3 9 5 6 1 1 4 2 11]\n13258\n26575\n3023\n5986\n" ], [ "n=50\nprecision=[0]*n\nfor i in range(1,n+1):\n censusIDM = RandomForestClassifier(max_depth=i, random_state=0)\n rfe = RFE(censusIDM, n_features_to_select=4)\n rfe.fit(standardized_data, dataIncomeColumn)\n predict_TestOutput=rfe.predict(standardized_test_data)\n predictOutput=rfe.predict(standardized_data)\n #Predictive Accuracy\n goodTest=(predict_TestOutput==data_TestIncomeColumn).sum();\n good=(predictOutput==dataIncomeColumn).sum();\n badTest=(predict_TestOutput!=data_TestIncomeColumn).sum();\n bad=(predictOutput!=dataIncomeColumn).sum();\n precision[i-1]=good/(good+bad);\n", "_____no_output_____" ], [ "fig=plt.figure(figsize=[20,10])\nplt.plot(range(1,n+1),precision)\nplt.xlabel('Depth', fontsize=20)\nplt.ylabel('Precision', fontsize=20)\nplt.title('RandomForestClassifier', fontsize=20)\nfig.savefig('RandomForest2.pdf',dpi=200)", "_____no_output_____" ], [ "#Linear Model Lasso curently not working.\nfrom sklearn import linear_model\nclf = linear_model.Lasso(alpha=0.1)\nrfe = RFE(clf, n_features_to_select=4)\nrfe.fit(standardized_data, dataIncomeColumn)\nprint(rfe.ranking_)\npredict_TestOutput=rfe.predict(standardized_test_data)\npredictOutput=rfe.predict(standardized_data)\n#Predictive Accuracy\ngoodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)\ngood=(predictOutput==dataIncomeColumn).sum();print(good)\nbadTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)\nbad=(predictOutput!=dataIncomeColumn).sum();print(bad)\n\n", "[11 10 9 8 7 6 5 4 3 2 1 1 1 1]\n0\n0\n16281\n32561\n" ], [ "#Running the Perceptron Model on our data\nfrom sklearn.linear_model import Perceptron\nclf = linear_model.Perceptron()\nrfe = RFE(clf, n_features_to_select=4)\nrfe.fit(standardized_data, dataIncomeColumn)\nprint(rfe.ranking_)\npredict_TestOutput=rfe.predict(standardized_test_data)\npredictOutput=rfe.predict(standardized_data)\n#Predictive Accuracy\ngoodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)\ngood=(predictOutput==dataIncomeColumn).sum();print(good)\nbadTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)\nbad=(predictOutput!=dataIncomeColumn).sum();print(bad)", 
"C:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.perceptron.Perceptron'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.\n \"and default tol will be 1e-3.\" % type(self), FutureWarning)\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.perceptron.Perceptron'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.\n \"and default tol will be 1e-3.\" % type(self), FutureWarning)\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.perceptron.Perceptron'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.\n \"and default tol will be 1e-3.\" % type(self), FutureWarning)\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.perceptron.Perceptron'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.\n \"and default tol will be 1e-3.\" % type(self), FutureWarning)\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.perceptron.Perceptron'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.\n \"and default tol will be 1e-3.\" % type(self), FutureWarning)\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.perceptron.Perceptron'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.\n \"and default tol will be 1e-3.\" % type(self), FutureWarning)\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.perceptron.Perceptron'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. 
From 0.21, default max_iter will be 1000, and default tol will be 1e-3.\n \"and default tol will be 1e-3.\" % type(self), FutureWarning)\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.perceptron.Perceptron'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.\n \"and default tol will be 1e-3.\" % type(self), FutureWarning)\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\stochastic_gradient.py:128: FutureWarning: max_iter and tol parameters have been added in <class 'sklearn.linear_model.perceptron.Perceptron'> in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.\n \"and default tol will be 1e-3.\" % type(self), FutureWarning)\n" ], [ "standardized_data2 = pd.DataFrame(standardized_data)\nstandardized_test_data2 = pd.DataFrame(standardized_test_data)\nstandardizedFrames = [standardized_data2, standardized_test_data2]\nstandardizedResult = pd.concat(standardizedFrames)\ndataIncomeColumn2 = pd.DataFrame(dataIncomeColumn)\ndata_TestIncomeColumn2 = pd.DataFrame(data_TestIncomeColumn)\ncombinedIncomeColumn = [dataIncomeColumn2, data_TestIncomeColumn2]\ncombinedResult = pd.concat(combinedIncomeColumn) ", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import SGDClassifier\n#where L is in the loop\nrng = np.random.RandomState(42)\nyy = []\n\nheldout = [0.95, 0.90, .85, .8, 0.75, .7, .65, 0.6, .55, 0.5, 0.45, 0.4, 0.35, .3, .25, .2, .15, .1, .05, 0.01]\nxx = 1. - np.array(heldout)\nrounds = 20\nfor i in heldout:\n yy_ = []\n for r in range(rounds):\n #clf = SGDClassifier()\n clf = SVR(kernel=\"linear\")\n standardized_dataL, standardized_test_dataL, dataIncomeColumnL, data_TestIncomeColumnL = \\\n train_test_split(standardizedResult, combinedResult, test_size=i, random_state=rng)\n clf.fit(standardized_dataL, dataIncomeColumnL)\n y_pred = clf.predict(standardized_test_dataL)\n yy_.append(1 - sum(y_pred == data_TestIncomeColumnL.Income)/len(y_pred))\n yy.append(np.mean(yy_))\nplt.plot(xx, yy, label='Linear Regression')\n\nplt.legend(loc=\"upper right\")\nplt.xlabel(\"Proportion train\")\nplt.ylabel(\"Test Error Rate\")\nplt.show()", "C:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\utils\\validation.py:578: DataConversionWarning: A column-vector y was passed when a 1d array was expected. 
Please change the shape of y to (n_samples, ), for example using ravel().\n y = column_or_1d(y, warn=True)\n" ], [ "\nfig=plt.figure(figsize=[20,10])\nplt.plot(xx, yy, label='Linear Regression')\nplt.legend(loc=\"upper right\")\nplt.xlabel(\"Proportion train\")\nplt.ylabel(\"Test Error Rate\")\nplt.show()\nfig.savefig('test2png.pdf', dpi=100)", "_____no_output_____" ], [ "xx,yy", "_____no_output_____" ], [ "#k-nearest Neighbors\nfrom sklearn.neighbors import NearestNeighbors\nclf = NearestNeighbors(n_neighbors=2, algorithm='ball_tree').fit(standardized_data)\npredict_TestOutput=clf.predict(standardized_test_data)\npredictOutput=clf.predict(standardized_data)\n#Predictive Accuracy\ngoodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)\ngood=(predictOutput==dataIncomeColumn).sum();print(good)\nbadTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)\nbad=(predictOutput!=dataIncomeColumn).sum();print(bad)", "_____no_output_____" ], [ "from sklearn.neural_network import MLPClassifier\nmlp = MLPClassifier(verbose=0, random_state=0)\nmlp.fit(standardized_data, dataIncomeColumn)\npredict_TestOutput=mlp.predict(standardized_test_data)\npredictOutput=mlp.predict(standardized_data)\n#Predictive Accuracy\ngoodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)\ngood=(predictOutput==dataIncomeColumn).sum();print(good)\nbadTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)\nbad=(predictOutput!=dataIncomeColumn).sum();print(bad)", "13867\n28048\n2414\n4513\n" ], [ "#SVM\nfrom sklearn.svm import SVR\nclf = SVR(kernel=\"linear\")\nrfe4 = RFE(clf, n_features_to_select=5)\nrfe4.fit(standardized_data, dataIncomeColumn)\npredict_TestOutput=rfe.predict(standardized_test_data)\npredictOutput=rfe.predict(standardized_data)\n#Predictive Accuracy\ngoodTest=(predict_TestOutput==data_TestIncomeColumn).sum();print(goodTest)\ngood=(predictOutput==dataIncomeColumn).sum();print(good)\nbadTest=(predict_TestOutput!=data_TestIncomeColumn).sum();print(badTest)\nbad=(predictOutput!=dataIncomeColumn).sum();print(bad)", "13606\n27347\n2675\n5214\n" ], [ "#Running The Random Forest OOB Error Rate Chart\nimport matplotlib.pyplot as plt\n\nfrom collections import OrderedDict\nfrom sklearn.datasets import make_classification\nfrom sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier\nRANDOM_STATE = 123\nensemble_clfs = [\n (\"RandomForestClassifier, max_features='sqrt'\",\n RandomForestClassifier(warm_start=True, max_features='sqrt',\n oob_score=True,\n random_state=RANDOM_STATE)),\n (\"RandomForestClassifier, max_features='log2'\",\n RandomForestClassifier(warm_start=True, max_features='log2',\n oob_score=True,\n random_state=RANDOM_STATE)),\n (\"RandomForestClassifier, max_features=None\",\n RandomForestClassifier(warm_start=True, max_features=None,\n oob_score=True,\n random_state=RANDOM_STATE))\n]\n\n# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.\nerror_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)\n\n# Range of `n_estimators` values to explore.\nmin_estimators = 5\nmax_estimators = 300\n\nfor label, clf in ensemble_clfs:\n for i in range(min_estimators, max_estimators + 1):\n clf.set_params(n_estimators=i)\n clf.fit(standardized_data, dataIncomeColumn)\n\n # Record the OOB error for each `n_estimators=i` setting.\n oob_error = 1 - clf.oob_score_\n error_rate[label].append((i, oob_error))\n\n# Generate the \"OOB error rate\" vs. 
\"n_estimators\" plot.\nfor label, clf_err in error_rate.items():\n xs, ys = zip(*clf_err)\n plt.plot(xs, ys, label=label)\n\nplt.xlim(min_estimators, max_estimators)\nplt.xlabel(\"n_estimators\")\nplt.ylabel(\"OOB error rate\")\nplt.legend(loc=\"upper right\")\nplt.show()", "C:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. 
\"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\n" ], [ "#Running The Extra Trees OOB Error Rate Chart\nimport matplotlib.pyplot as plt\n\nfrom collections import OrderedDict\nfrom sklearn.datasets import make_classification\nfrom sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier\nRANDOM_STATE = 123\nensemble_clfs = [\n (\"RandomForestClassifier, max_features='sqrt'\",\n RandomForestClassifier(warm_start=True, max_features='sqrt',\n oob_score=True,\n random_state=RANDOM_STATE)),\n (\"RandomForestClassifier, max_features='log2'\",\n RandomForestClassifier(warm_start=True, max_features='log2',\n oob_score=True,\n random_state=RANDOM_STATE)),\n (\"RandomForestClassifier, max_features=None\",\n RandomForestClassifier(warm_start=True, max_features=None,\n oob_score=True,\n random_state=RANDOM_STATE))\n]\n\n# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.\nerror_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)\n\n# Range of `n_estimators` values to explore.\nmin_estimators = 1\nmax_estimators = 25\n\nfor label, clf in ensemble_clfs:\n for i in range(min_estimators, max_estimators + 1):\n clf.set_params(n_estimators=i)\n clf.fit(standardized_data, dataIncomeColumn)\n\n # Record the OOB error for each `n_estimators=i` setting.\n oob_error = 1 - clf.oob_score_\n error_rate[label].append((i, oob_error))\n\n# Generate the \"OOB error rate\" vs. \"n_estimators\" plot.\nxss=[0]*3\nyss=[0]*3\ni=0\nfor label, clf_err in error_rate.items():\n xs, ys = zip(*clf_err)\n xss[i]=xs\n yss[i]=ys\n i=i+1\n plt.plot(xs, ys, label=label)\n\nplt.xlim(min_estimators, max_estimators)\nplt.xlabel(\"n_estimators\")\nplt.ylabel(\"OOB error rate\")\nplt.legend(loc=\"upper right\")\nplt.show()", "C:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. 
\"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\n" ], [ "#Running The Extra Trees OOB Error Rate Chart\nimport matplotlib.pyplot as plt\n\nfrom collections import OrderedDict\nfrom sklearn.datasets import make_classification\nfrom sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier\nRANDOM_STATE = 123\nensemble_clfs = [\n (\"ExtraTreesClassifier, max_features='sqrt'\",\n ExtraTreesClassifier(warm_start=True, max_features='sqrt',\n oob_score=True, bootstrap=True,\n random_state=RANDOM_STATE)),\n (\"ExtraTreesClassifier, max_features='log2'\",\n ExtraTreesClassifier(warm_start=True, max_features='log2',\n oob_score=True, bootstrap=True,\n random_state=RANDOM_STATE)),\n (\"ExtraTreesClassifier, max_features=None\",\n ExtraTreesClassifier(warm_start=True, max_features=None,\n oob_score=True, bootstrap=True,\n random_state=RANDOM_STATE))\n]\n\n# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.\nerror_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)\n\n# Range of `n_estimators` values to explore.\nmin_estimators = 5\nmax_estimators = 300\n\nfor label, clf in ensemble_clfs:\n for i in range(min_estimators, max_estimators + 1):\n clf.set_params(n_estimators=i)\n clf.fit(standardized_data, dataIncomeColumn)\n\n # Record the OOB error for each `n_estimators=i` setting.\n oob_error = 1 - clf.oob_score_\n error_rate[label].append((i, oob_error))\n\n# Generate the \"OOB error rate\" vs. \"n_estimators\" plot.\nxss=[0]*3\nyss=[0]*3\ni=0\nfor label, clf_err in error_rate.items():\n xs, ys = zip(*clf_err)\n xss[i]=xs\n yss[i]=ys\n i=i+1\n plt.plot(xs, ys, label=label)\n\nplt.xlim(min_estimators, max_estimators)\nplt.xlabel(\"n_estimators\")\nplt.ylabel(\"OOB error rate\")\nplt.legend(loc=\"upper right\")\nplt.show()", "C:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. 
This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\n" ], [ "plt.plot(xss[0],yss[0],'v');\nplt.plot(xss[2],yss[2],'o');\nplt.plot(xss[1],yss[1],'-')\n", "_____no_output_____" ], [ "yss=np.asarray(yss)\nxss=np.asarray(xss)", "_____no_output_____" ], [ "help(plt.plot\n )", "Help on function plot in module matplotlib.pyplot:\n\nplot(*args, **kwargs)\n Plot lines and/or markers to the\n :class:`~matplotlib.axes.Axes`. *args* is a variable length\n argument, allowing for multiple *x*, *y* pairs with an\n optional format string. For example, each of the following is\n legal::\n \n plot(x, y) # plot x and y using default line style and color\n plot(x, y, 'bo') # plot x and y using blue circle markers\n plot(y) # plot y using x as index array 0..N-1\n plot(y, 'r+') # ditto, but with red plusses\n \n If *x* and/or *y* is 2-dimensional, then the corresponding columns\n will be plotted.\n \n If used with labeled data, make sure that the color spec is not\n included as an element in data, as otherwise the last case\n ``plot(\"v\",\"r\", data={\"v\":..., \"r\":...)``\n can be interpreted as the first case which would do ``plot(v, r)``\n using the default line style and color.\n \n If not used with labeled data (i.e., without a data argument),\n an arbitrary number of *x*, *y*, *fmt* groups can be specified, as in::\n \n a.plot(x1, y1, 'g^', x2, y2, 'g-')\n \n Return value is a list of lines that were added.\n \n By default, each line is assigned a different style specified by a\n 'style cycle'. 
If such a **data** argument is given, the\n following arguments are replaced by **data[<arg>]**:\n \n * All arguments with the following names: 'x', 'y'.\n\n" ], [ "#Running The Random Forest OOB Error Rate Chart\nimport matplotlib.pyplot as plt\n\nfrom collections import OrderedDict\nfrom sklearn.datasets import make_classification\nfrom sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier\nRANDOM_STATE = 123\nensemble_clfs = [\n (\"RandomForestClassifier, max_features='sqrt'\",\n RandomForestClassifier(warm_start=True, max_features='sqrt',\n oob_score=True,\n random_state=RANDOM_STATE)),\n (\"RandomForestClassifier, max_features='log2'\",\n RandomForestClassifier(warm_start=True, max_features='log2',\n oob_score=True,\n random_state=RANDOM_STATE)),\n (\"RandomForestClassifier, max_features=None\",\n RandomForestClassifier(warm_start=True, max_features=None,\n oob_score=True,\n random_state=RANDOM_STATE))\n]\n\n# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.\nerror_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)\n\n# Range of `n_estimators` values to explore.\nmin_estimators = 5\nmax_estimators = 300\n\nfor label, clf in ensemble_clfs:\n for i in range(min_estimators, max_estimators + 1):\n clf.set_params(n_estimators=i)\n clf.fit(standardized_data, dataIncomeColumn)\n\n # Record the OOB error for each `n_estimators=i` setting.\n y_pred = clf.predict(standardized_test_data)\n test_errorCLF = (1 - sum(y_pred == data_TestIncomeColumn)/len(y_pred))\n error_rate[label].append((i, test_errorCLF))\n\n# Generate the \"OOB error rate\" vs. \"n_estimators\" plot.\nfor label, clf_err in error_rate.items():\n xs, ys = zip(*clf_err)\n plt.plot(xs, ys, label=label)\n\nplt.xlim(min_estimators, max_estimators)\nplt.xlabel(\"n_estimators\")\nplt.ylabel(\"OOB error rate\")\nplt.legend(loc=\"upper right\")\nplt.show()", "C:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. 
\"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\n" ], [ "#Running The Extra Trees Test Error Plot\nimport matplotlib.pyplot as plt\n\nfrom collections import OrderedDict\nfrom sklearn.datasets import make_classification\nfrom sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier\nRANDOM_STATE = 123\nensemble_clfs = [\n (\"ExtraTreesClassifier, max_features='sqrt'\",\n ExtraTreesClassifier(warm_start=True, max_features='sqrt',\n oob_score=True, bootstrap=True,\n random_state=RANDOM_STATE)),\n (\"ExtraTreesClassifier, max_features='log2'\",\n ExtraTreesClassifier(warm_start=True, max_features='log2',\n oob_score=True, bootstrap=True,\n random_state=RANDOM_STATE)),\n (\"ExtraTrees, max_features=None\",\n ExtraTreesClassifier(warm_start=True, max_features=None,\n oob_score=True, bootstrap=True,\n random_state=RANDOM_STATE))\n]\n\n# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.\nerror_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)\n\n# Range of `n_estimators` values to explore.\nmin_estimators = 5\nmax_estimators = 300\n\nfor label, clf in ensemble_clfs:\n for i in range(min_estimators, max_estimators + 1):\n clf.set_params(n_estimators=i)\n clf.fit(standardized_data, dataIncomeColumn)\n\n # Record the OOB error for each `n_estimators=i` setting.\n y_pred = clf.predict(standardized_test_data)\n test_errorCLF = (1 - sum(y_pred == data_TestIncomeColumn)/len(y_pred))\n error_rate[label].append((i, test_errorCLF))\n\n# Generate the \"OOB error rate\" vs. \"n_estimators\" plot.\nfor label, clf_err in error_rate.items():\n xs, ys = zip(*clf_err)\n plt.plot(xs, ys, label=label)\n\nplt.xlim(min_estimators, max_estimators)\nplt.xlabel(\"n_estimators\")\nplt.ylabel(\"Test Error Rate\")\nplt.legend(loc=\"upper right\")\nplt.show()", "C:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. 
This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\n" ], [ "import matplotlib.pyplot as plt\n\nfrom collections import OrderedDict\nfrom sklearn.datasets import make_classification\nfrom sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier\nRANDOM_STATE = 123\nensemble_clfs = [\n (\"RandomForestClassifier, max_features='sqrt'\",\n RandomForestClassifier(warm_start=True, max_features='sqrt',\n oob_score=True,\n random_state=RANDOM_STATE)),\n (\"RandomForestClassifier, max_features='log2'\",\n RandomForestClassifier(warm_start=True, max_features='log2',\n oob_score=True,\n random_state=RANDOM_STATE)),\n (\"RandomForestClassifier, max_features=None\",\n RandomForestClassifier(warm_start=True, max_features=None,\n oob_score=True,\n random_state=RANDOM_STATE))\n]\n\n# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.\nerror_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)\n\n# Range of `n_estimators` values to explore.\nmin_estimators = 20\nmax_estimators = 30\n\nfor label, clf in ensemble_clfs:\n for i in range(min_estimators, max_estimators + 1):\n clf.set_params(n_estimators=i)\n clf.fit(standardized_data, dataIncomeColumn)\n\n # Record the OOB error for each `n_estimators=i` setting.\n y_pred = clf.predict(standardized_test_data)\n test_errorCLF = (1 - sum(y_pred == data_TestIncomeColumn)/len(y_pred))\n error_rate[label].append((i, test_errorCLF))\n\n# Generate the \"OOB error rate\" vs. \"n_estimators\" plot.\nfor label, clf_err in error_rate.items():\n xs, ys = zip(*clf_err)\n plt.plot(xs, ys, label=label)\n\nplt.xlim(min_estimators, max_estimators)\nplt.xlabel(\"n_estimators\")\nplt.ylabel(\"Test error rate\")\nplt.legend(loc=\"upper right\")\nplt.show()", "C:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:453: UserWarning: Some inputs do not have OOB scores. 
This probably means too few trees were used to compute any reliable oob estimates.\n warn(\"Some inputs do not have OOB scores. \"\nC:\\Users\\Victoria\\Anaconda3\\lib\\site-packages\\sklearn\\ensemble\\forest.py:458: RuntimeWarning: invalid value encountered in true_divide\n predictions[k].sum(axis=1)[:, np.newaxis])\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d0215074d489d9993a951a338a520c482f933fee
700,505
ipynb
Jupyter Notebook
Matplotlib-BEst.ipynb
imamol555/Machine-Learning
aa0a4914db85ae7e1b38774425f2bcc1468a7e4e
[ "MIT" ]
null
null
null
Matplotlib-BEst.ipynb
imamol555/Machine-Learning
aa0a4914db85ae7e1b38774425f2bcc1468a7e4e
[ "MIT" ]
null
null
null
Matplotlib-BEst.ipynb
imamol555/Machine-Learning
aa0a4914db85ae7e1b38774425f2bcc1468a7e4e
[ "MIT" ]
null
null
null
1,209.853195
151,836
0.945875
[ [ [ "# Matplotlib", "_____no_output_____" ], [ "Matplotlib is a python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms. matplotlib can be used in python scripts, the python and ipython shell, web application servers, and six graphical user interface toolkits.\n\nMatplotlib tries to make easy things easy and hard things possible. You can generate plots, histograms, power spectra, bar charts, errorcharts, scatterplots, etc, with just a few lines of code.\n\nLibrary documentation: <a>http://matplotlib.org/</a>", "_____no_output_____" ] ], [ [ "# needed to display the graphs\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "x = np.linspace(0, 5, 10)\ny = x ** 2\n\nfig = plt.figure()\n\n# left, bottom, width, height (range 0 to 1)\naxes = fig.add_axes([0.1, 0.1, 0.8, 0.8])\n\naxes.plot(x, y, 'r')\n\naxes.set_xlabel('x')\naxes.set_ylabel('y')\naxes.set_title('title');", "_____no_output_____" ], [ "fig = plt.figure()\n\naxes1 = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # main axes\naxes2 = fig.add_axes([0.2, 0.5, 0.4, 0.3]) # inset axes\n\n# main figure\naxes1.plot(x, y, 'r')\naxes1.set_xlabel('x')\naxes1.set_ylabel('y')\naxes1.set_title('title')\n\n# insert\naxes2.plot(y, x, 'g')\naxes2.set_xlabel('y')\naxes2.set_ylabel('x')\naxes2.set_title('insert title');", "_____no_output_____" ], [ "fig, axes = plt.subplots(nrows=1, ncols=2)\n\nfor ax in axes:\n ax.plot(x, y, 'r')\n ax.set_xlabel('x')\n ax.set_ylabel('y')\n ax.set_title('title')\n \nfig.tight_layout()", "_____no_output_____" ], [ "# example with a legend and latex symbols\nfig, ax = plt.subplots()\n\nax.plot(x, x**2, label=r\"$y = \\alpha^2$\")\nax.plot(x, x**3, label=r\"$y = \\alpha^3$\")\nax.legend(loc=2) # upper left corner\nax.set_xlabel(r'$\\alpha$', fontsize=18)\nax.set_ylabel(r'$y$', fontsize=18)\nax.set_title('title');", "_____no_output_____" ], [ "# line customization\nfig, ax = plt.subplots(figsize=(12,6))\n\nax.plot(x, x+1, color=\"blue\", linewidth=0.25)\nax.plot(x, x+2, color=\"blue\", linewidth=0.50)\nax.plot(x, x+3, color=\"blue\", linewidth=1.00)\nax.plot(x, x+4, color=\"blue\", linewidth=2.00)\n\n# possible linestype options ‘-‘, ‘–’, ‘-.’, ‘:’, ‘steps’\nax.plot(x, x+5, color=\"red\", lw=2, linestyle='-')\nax.plot(x, x+6, color=\"red\", lw=2, ls='-.')\nax.plot(x, x+7, color=\"red\", lw=2, ls=':')\n\n# custom dash\nline, = ax.plot(x, x+8, color=\"black\", lw=1.50)\nline.set_dashes([5, 10, 15, 10]) # format: line length, space length, ...\n\n# possible marker symbols: marker = '+', 'o', '*', 's', ',', '.', \n# '1', '2', '3', '4', ...\nax.plot(x, x+ 9, color=\"green\", lw=2, ls='*', marker='+')\nax.plot(x, x+10, color=\"green\", lw=2, ls='*', marker='o')\nax.plot(x, x+11, color=\"green\", lw=2, ls='*', marker='s')\nax.plot(x, x+12, color=\"green\", lw=2, ls='*', marker='1')\n\n# marker size and color\nax.plot(x, x+13, color=\"purple\", lw=1, ls='-', marker='o', markersize=2)\nax.plot(x, x+14, color=\"purple\", lw=1, ls='-', marker='o', markersize=4)\nax.plot(x, x+15, color=\"purple\", lw=1, ls='-', marker='o', markersize=8, \n markerfacecolor=\"red\")\nax.plot(x, x+16, color=\"purple\", lw=1, ls='-', marker='s', markersize=8, \n markerfacecolor=\"yellow\", markeredgewidth=2, markeredgecolor=\"blue\");", "_____no_output_____" ], [ "# axis controls\nfig, axes = plt.subplots(1, 3, figsize=(12, 4))\n\naxes[0].plot(x, x**2, x, x**3)\naxes[0].set_title(\"default axes 
ranges\")\n\naxes[1].plot(x, x**2, x, x**3)\naxes[1].axis('tight')\naxes[1].set_title(\"tight axes\")\n\naxes[2].plot(x, x**2, x, x**3)\naxes[2].set_ylim([0, 60])\naxes[2].set_xlim([2, 5])\naxes[2].set_title(\"custom axes range\");", "_____no_output_____" ], [ "# scaling\nfig, axes = plt.subplots(1, 2, figsize=(10,4))\n \naxes[0].plot(x, x**2, x, exp(x))\naxes[0].set_title(\"Normal scale\")\n\naxes[1].plot(x, x**2, x, exp(x))\naxes[1].set_yscale(\"log\")\naxes[1].set_title(\"Logarithmic scale (y)\");", "_____no_output_____" ], [ "# axis grid\nfig, axes = plt.subplots(1, 2, figsize=(10,3))\n\n# default grid appearance\naxes[0].plot(x, x**2, x, x**3, lw=2)\naxes[0].grid(True)\n\n# custom grid appearance\naxes[1].plot(x, x**2, x, x**3, lw=2)\naxes[1].grid(color='b', alpha=0.5, linestyle='dashed', linewidth=0.5)", "_____no_output_____" ], [ "# twin axes example\nfig, ax1 = plt.subplots()\n\nax1.plot(x, x**2, lw=2, color=\"blue\")\nax1.set_ylabel(r\"area $(m^2)$\", fontsize=18, color=\"blue\")\nfor label in ax1.get_yticklabels():\n label.set_color(\"blue\")\n \nax2 = ax1.twinx()\nax2.plot(x, x**3, lw=2, color=\"red\")\nax2.set_ylabel(r\"volume $(m^3)$\", fontsize=18, color=\"red\")\nfor label in ax2.get_yticklabels():\n label.set_color(\"red\")", "_____no_output_____" ], [ "# other plot styles\nxx = np.linspace(-0.75, 1., 100)\nn = array([0,1,2,3,4,5])\n\nfig, axes = plt.subplots(1, 4, figsize=(12,3))\n\naxes[0].scatter(xx, xx + 0.25*randn(len(xx)))\naxes[0].set_title(\"scatter\")\n\naxes[1].step(n, n**2, lw=2)\naxes[1].set_title(\"step\")\n\naxes[2].bar(n, n**2, align=\"center\", width=0.5, alpha=0.5)\naxes[2].set_title(\"bar\")\n\naxes[3].fill_between(x, x**2, x**3, color=\"green\", alpha=0.5);\naxes[3].set_title(\"fill_between\");", "_____no_output_____" ], [ "# histograms\nn = np.random.randn(100000)\nfig, axes = plt.subplots(1, 2, figsize=(12,4))\n\naxes[0].hist(n)\naxes[0].set_title(\"Default histogram\")\naxes[0].set_xlim((min(n), max(n)))\n\naxes[1].hist(n, cumulative=True, bins=50)\naxes[1].set_title(\"Cumulative detailed histogram\")\naxes[1].set_xlim((min(n), max(n)));", "_____no_output_____" ], [ "# annotations\nfig, ax = plt.subplots()\n\nax.plot(xx, xx**2, xx, xx**3)\n\nax.text(0.15, 0.2, r\"$y=x^2$\", fontsize=20, color=\"blue\")\nax.text(0.65, 0.1, r\"$y=x^3$\", fontsize=20, color=\"green\");", "_____no_output_____" ], [ "# color map\nalpha = 0.7\nphi_ext = 2 * pi * 0.5\n\ndef flux_qubit_potential(phi_m, phi_p):\n return ( + alpha - 2 * cos(phi_p)*cos(phi_m) - \n alpha * cos(phi_ext - 2*phi_p))\n\nphi_m = linspace(0, 2*pi, 100)\nphi_p = linspace(0, 2*pi, 100)\nX,Y = meshgrid(phi_p, phi_m)\nZ = flux_qubit_potential(X, Y).T\n\nfig, ax = plt.subplots()\n\np = ax.pcolor(X/(2*pi), Y/(2*pi), Z, \n cmap=cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max())\ncb = fig.colorbar(p, ax=ax)", "_____no_output_____" ], [ "from mpl_toolkits.mplot3d.axes3d import Axes3D", "_____no_output_____" ], [ "# surface plots\nfig = plt.figure(figsize=(14,6))\n\n# `ax` is a 3D-aware axis instance because of the projection='3d' \n# keyword argument to add_subplot\nax = fig.add_subplot(1, 2, 1, projection='3d')\n\np = ax.plot_surface(X, Y, Z, rstride=4, cstride=4, linewidth=0)\n\n# surface_plot with color grading and color bar\nax = fig.add_subplot(1, 2, 2, projection='3d')\np = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, \n cmap=cm.coolwarm, linewidth=0, antialiased=False)\ncb = fig.colorbar(p, shrink=0.5)", "_____no_output_____" ], [ "# wire frame\nfig = plt.figure(figsize=(8,6))\n\nax = fig.add_subplot(1, 1, 
1, projection='3d')\n\np = ax.plot_wireframe(X, Y, Z, rstride=4, cstride=4)", "_____no_output_____" ], [ "# contour plot with projections\nfig = plt.figure(figsize=(8,6))\n\nax = fig.add_subplot(1,1,1, projection='3d')\n\nax.plot_surface(X, Y, Z, rstride=4, cstride=4, alpha=0.25)\ncset = ax.contour(X, Y, Z, zdir='z', offset=-pi, cmap=cm.coolwarm)\ncset = ax.contour(X, Y, Z, zdir='x', offset=-pi, cmap=cm.coolwarm)\ncset = ax.contour(X, Y, Z, zdir='y', offset=3*pi, cmap=cm.coolwarm)\n\nax.set_xlim3d(-pi, 2*pi);\nax.set_ylim3d(0, 3*pi);\nax.set_zlim3d(-pi, 2*pi);", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d0215c4f6f9c756d38a28bb76af90e07f9e34c90
27,870
ipynb
Jupyter Notebook
site/en-snapshot/agents/tutorials/2_environments_tutorial.ipynb
secsilm/docs-l10n
2acda8cb1671a826f44115e2fa6dd593756ba969
[ "Apache-2.0" ]
null
null
null
site/en-snapshot/agents/tutorials/2_environments_tutorial.ipynb
secsilm/docs-l10n
2acda8cb1671a826f44115e2fa6dd593756ba969
[ "Apache-2.0" ]
null
null
null
site/en-snapshot/agents/tutorials/2_environments_tutorial.ipynb
secsilm/docs-l10n
2acda8cb1671a826f44115e2fa6dd593756ba969
[ "Apache-2.0" ]
null
null
null
38.601108
422
0.564227
[ [ [ "##### Copyright 2018 The TF-Agents Authors.", "_____no_output_____" ] ], [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# Environments\n\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/agents/tutorials/2_environments_tutorial\">\n <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />\n View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/2_environments_tutorial.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/agents/blob/master/docs/tutorials/2_environments_tutorial.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/2_environments_tutorial.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>", "_____no_output_____" ], [ "## Introduction", "_____no_output_____" ], [ "The goal of Reinforcement Learning (RL) is to design agents that learn by interacting with an environment. In the standard RL setting, the agent receives an observation at every time step and chooses an action. The action is applied to the environment and the environment returns a reward and a new observation. The agent trains a policy to choose actions to maximize the sum of rewards, also known as return.\n\nIn TF-Agents, environments can be implemented either in Python or TensorFlow. Python environments are usually easier to implement, understand, and debug, but TensorFlow environments are more efficient and allow natural parallelization. The most common workflow is to implement an environment in Python and use one of our wrappers to automatically convert it into TensorFlow.\n\nLet us look at Python environments first. 
TensorFlow environments follow a very similar API.", "_____no_output_____" ], [ "## Setup\n", "_____no_output_____" ], [ "If you haven't installed tf-agents or gym yet, run:", "_____no_output_____" ] ], [ [ "!pip install tf-agents\n!pip install 'gym==0.10.11'", "_____no_output_____" ], [ "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport abc\nimport tensorflow as tf\nimport numpy as np\n\nfrom tf_agents.environments import py_environment\nfrom tf_agents.environments import tf_environment\nfrom tf_agents.environments import tf_py_environment\nfrom tf_agents.environments import utils\nfrom tf_agents.specs import array_spec\nfrom tf_agents.environments import wrappers\nfrom tf_agents.environments import suite_gym\nfrom tf_agents.trajectories import time_step as ts\n\ntf.compat.v1.enable_v2_behavior()", "_____no_output_____" ] ], [ [ "## Python Environments", "_____no_output_____" ], [ "Python environments have a `step(action) -> next_time_step` method that applies an action to the environment, and returns the following information about the next step:\n1. `observation`: This is the part of the environment state that the agent can observe to choose its actions at the next step.\n2. `reward`: The agent is learning to maximize the sum of these rewards across multiple steps.\n3. `step_type`: Interactions with the environment are usually part of a sequence/episode. e.g. multiple moves in a game of chess. step_type can be either `FIRST`, `MID` or `LAST` to indicate whether this time step is the first, intermediate or last step in a sequence.\n4. `discount`: This is a float representing how much to weight the reward at the next time step relative to the reward at the current time step.\n\nThese are grouped into a named tuple `TimeStep(step_type, reward, discount, observation)`.\n\nThe interface that all python environments must implement is in `environments/py_environment.PyEnvironment`. The main methods are:", "_____no_output_____" ] ], [ [ "class PyEnvironment(object):\n\n def reset(self):\n \"\"\"Return initial_time_step.\"\"\"\n self._current_time_step = self._reset()\n return self._current_time_step\n\n def step(self, action):\n \"\"\"Apply action and return new time_step.\"\"\"\n if self._current_time_step is None:\n return self.reset()\n self._current_time_step = self._step(action)\n return self._current_time_step\n\n def current_time_step(self):\n return self._current_time_step\n\n def time_step_spec(self):\n \"\"\"Return time_step_spec.\"\"\"\n\n @abc.abstractmethod\n def observation_spec(self):\n \"\"\"Return observation_spec.\"\"\"\n\n @abc.abstractmethod\n def action_spec(self):\n \"\"\"Return action_spec.\"\"\"\n\n @abc.abstractmethod\n def _reset(self):\n \"\"\"Return initial_time_step.\"\"\"\n\n @abc.abstractmethod\n def _step(self, action):\n \"\"\"Apply action and return new time_step.\"\"\"\n self._current_time_step = self._step(action)\n return self._current_time_step", "_____no_output_____" ] ], [ [ "In addition to the `step()` method, environments also provide a `reset()` method that starts a new sequence and provides an initial `TimeStep`. It is not necessary to call the `reset` method explicitly. We assume that environments reset automatically, either when they get to the end of an episode or when step() is called the first time.\n\nNote that subclasses do not implement `step()` or `reset()` directly. They instead override the `_step()` and `_reset()` methods. 
The time steps returned from these methods will be cached and exposed through `current_time_step()`.\n\nThe `observation_spec` and the `action_spec` methods return a nest of `(Bounded)ArraySpecs` that describe the name, shape, datatype and ranges of the observations and actions respectively.\n\nIn TF-Agents we repeatedly refer to nests which are defined as any tree like structure composed of lists, tuples, named-tuples, or dictionaries. These can be arbitrarily composed to maintain structure of observations and actions. We have found this to be very useful for more complex environments where you have many observations and actions.", "_____no_output_____" ], [ "### Using Standard Environments\n\nTF Agents has built-in wrappers for many standard environments like the OpenAI Gym, DeepMind-control and Atari, so that they follow our `py_environment.PyEnvironment` interface. These wrapped evironments can be easily loaded using our environment suites. Let's load the CartPole environment from the OpenAI gym and look at the action and time_step_spec.", "_____no_output_____" ] ], [ [ "environment = suite_gym.load('CartPole-v0')\nprint('action_spec:', environment.action_spec())\nprint('time_step_spec.observation:', environment.time_step_spec().observation)\nprint('time_step_spec.step_type:', environment.time_step_spec().step_type)\nprint('time_step_spec.discount:', environment.time_step_spec().discount)\nprint('time_step_spec.reward:', environment.time_step_spec().reward)\n", "_____no_output_____" ] ], [ [ "So we see that the environment expects actions of type `int64` in [0, 1] and returns `TimeSteps` where the observations are a `float32` vector of length 4 and discount factor is a `float32` in [0.0, 1.0]. Now, let's try to take a fixed action `(1,)` for a whole episode.", "_____no_output_____" ] ], [ [ "action = np.array(1, dtype=np.int32)\ntime_step = environment.reset()\nprint(time_step)\nwhile not time_step.is_last():\n time_step = environment.step(action)\n print(time_step)", "_____no_output_____" ] ], [ [ "### Creating your own Python Environment\n\nFor many clients, a common use case is to apply one of the standard agents (see agents/) in TF-Agents to their problem. To do this, they have to frame their problem as an environment. So let us look at how to implement an environment in Python.\n\nLet's say we want to train an agent to play the following (Black Jack inspired) card game:\n\n1. The game is played using an infinite deck of cards numbered 1...10.\n2. At every turn the agent can do 2 things: get a new random card, or stop the current round.\n3. The goal is to get the sum of your cards as close to 21 as possible at the end of the round, without going over.\n\nAn environment that represents the game could look like this:\n\n1. Actions: We have 2 actions. Action 0: get a new card, and Action 1: terminate the current round.\n2. Observations: Sum of the cards in the current round.\n3. 
Reward: The objective is to get as close to 21 as possible without going over, so we can achieve this using the following reward at the end of the round:\n sum_of_cards - 21 if sum_of_cards <= 21, else -21\n", "_____no_output_____" ] ], [ [ "class CardGameEnv(py_environment.PyEnvironment):\n\n def __init__(self):\n self._action_spec = array_spec.BoundedArraySpec(\n shape=(), dtype=np.int32, minimum=0, maximum=1, name='action')\n self._observation_spec = array_spec.BoundedArraySpec(\n shape=(1,), dtype=np.int32, minimum=0, name='observation')\n self._state = 0\n self._episode_ended = False\n\n def action_spec(self):\n return self._action_spec\n\n def observation_spec(self):\n return self._observation_spec\n\n def _reset(self):\n self._state = 0\n self._episode_ended = False\n return ts.restart(np.array([self._state], dtype=np.int32))\n\n def _step(self, action):\n\n if self._episode_ended:\n # The last action ended the episode. Ignore the current action and start\n # a new episode.\n return self.reset()\n\n # Make sure episodes don't go on forever.\n if action == 1:\n self._episode_ended = True\n elif action == 0:\n new_card = np.random.randint(1, 11)\n self._state += new_card\n else:\n raise ValueError('`action` should be 0 or 1.')\n\n if self._episode_ended or self._state >= 21:\n reward = self._state - 21 if self._state <= 21 else -21\n return ts.termination(np.array([self._state], dtype=np.int32), reward)\n else:\n return ts.transition(\n np.array([self._state], dtype=np.int32), reward=0.0, discount=1.0)", "_____no_output_____" ] ], [ [ "Let's make sure we did everything correctly defining the above environment. When creating your own environment you must make sure the observations and time_steps generated follow the correct shapes and types as defined in your specs. These are used to generate the TensorFlow graph and as such can create hard to debug problems if we get them wrong.\n\nTo validate our environment we will use a random policy to generate actions and we will iterate over 5 episodes to make sure things are working as intended. An error is raised if we receive a time_step that does not follow the environment specs.", "_____no_output_____" ] ], [ [ "environment = CardGameEnv()\nutils.validate_py_environment(environment, episodes=5)", "_____no_output_____" ] ], [ [ "Now that we know the environment is working as intended, let's run this environment using a fixed policy: ask for 3 cards and then end the round.", "_____no_output_____" ] ], [ [ "get_new_card_action = np.array(0, dtype=np.int32)\nend_round_action = np.array(1, dtype=np.int32)\n\nenvironment = CardGameEnv()\ntime_step = environment.reset()\nprint(time_step)\ncumulative_reward = time_step.reward\n\nfor _ in range(3):\n time_step = environment.step(get_new_card_action)\n print(time_step)\n cumulative_reward += time_step.reward\n\ntime_step = environment.step(end_round_action)\nprint(time_step)\ncumulative_reward += time_step.reward\nprint('Final Reward = ', cumulative_reward)", "_____no_output_____" ] ], [ [ "### Environment Wrappers\n\nAn environment wrapper takes a python environment and returns a modified version of the environment. Both the original environment and the modified environment are instances of `py_environment.PyEnvironment`, and multiple wrappers can be chained together.\n\nSome common wrappers can be found in `environments/wrappers.py`. For example:\n\n1. `ActionDiscretizeWrapper`: Converts a continuous action space to a discrete action space.\n2. 
`RunStats`: Captures run statistics of the environment such as number of steps taken, number of episodes completed etc.\n3. `TimeLimit`: Terminates the episode after a fixed number of steps.\n", "_____no_output_____" ], [ "#### Example 1: Action Discretize Wrapper", "_____no_output_____" ], [ "InvertedPendulum is a PyBullet environment that accepts continuous actions in the range `[-2, 2]`. If we want to train a discrete action agent such as DQN on this environment, we have to discretize (quantize) the action space. This is exactly what the `ActionDiscretizeWrapper` does. Compare the `action_spec` before and after wrapping:", "_____no_output_____" ] ], [ [ "env = suite_gym.load('Pendulum-v0')\nprint('Action Spec:', env.action_spec())\n\ndiscrete_action_env = wrappers.ActionDiscretizeWrapper(env, num_actions=5)\nprint('Discretized Action Spec:', discrete_action_env.action_spec())", "_____no_output_____" ] ], [ [ "The wrapped `discrete_action_env` is an instance of `py_environment.PyEnvironment` and can be treated like a regular python environment.\n", "_____no_output_____" ], [ "## TensorFlow Environments", "_____no_output_____" ], [ "The interface for TF environments is defined in `environments/tf_environment.TFEnvironment` and looks very similar to the Python environments. TF Environments differ from python envs in a couple of ways:\n\n* They generate tensor objects instead of arrays\n* TF environments add a batch dimension to the tensors generated when compared to the specs. \n\nConverting the python environments into TFEnvs allows tensorflow to parallelize operations. For example, one could define a `collect_experience_op` that collects data from the environment and adds to a `replay_buffer`, and a `train_op` that reads from the `replay_buffer` and trains the agent, and run them in parallel naturally in TensorFlow.", "_____no_output_____" ] ], [ [ "class TFEnvironment(object):\n\n def time_step_spec(self):\n \"\"\"Describes the `TimeStep` tensors returned by `step()`.\"\"\"\n\n def observation_spec(self):\n \"\"\"Defines the `TensorSpec` of observations provided by the environment.\"\"\"\n\n def action_spec(self):\n \"\"\"Describes the TensorSpecs of the action expected by `step(action)`.\"\"\"\n\n def reset(self):\n \"\"\"Returns the current `TimeStep` after resetting the Environment.\"\"\"\n return self._reset()\n\n def current_time_step(self):\n \"\"\"Returns the current `TimeStep`.\"\"\"\n return self._current_time_step()\n\n def step(self, action):\n \"\"\"Applies the action and returns the new `TimeStep`.\"\"\"\n return self._step(action)\n\n @abc.abstractmethod\n def _reset(self):\n \"\"\"Returns the current `TimeStep` after resetting the Environment.\"\"\"\n\n @abc.abstractmethod\n def _current_time_step(self):\n \"\"\"Returns the current `TimeStep`.\"\"\"\n\n @abc.abstractmethod\n def _step(self, action):\n \"\"\"Applies the action and returns the new `TimeStep`.\"\"\"", "_____no_output_____" ] ], [ [ "The `current_time_step()` method returns the current time_step and initializes the environment if needed.\n\nThe `reset()` method forces a reset in the environment and returns the current_step.\n\nIf the `action` doesn't depend on the previous `time_step` a `tf.control_dependency` is needed in `Graph` mode.\n\nFor now, let us look at how `TFEnvironments` are created.", "_____no_output_____" ], [ "### Creating your own TensorFlow Environment\n\nThis is more complicated than creating environments in Python, so we will not cover it in this colab. 
An example is available [here](https://github.com/tensorflow/agents/blob/master/tf_agents/environments/tf_environment_test.py). The more common use case is to implement your environment in Python and wrap it in TensorFlow using our `TFPyEnvironment` wrapper (see below).", "_____no_output_____" ], [ "### Wrapping a Python Environment in TensorFlow", "_____no_output_____" ], [ "We can easily wrap any Python environment into a TensorFlow environment using the `TFPyEnvironment` wrapper.", "_____no_output_____" ] ], [ [ "env = suite_gym.load('CartPole-v0')\ntf_env = tf_py_environment.TFPyEnvironment(env)\n\nprint(isinstance(tf_env, tf_environment.TFEnvironment))\nprint(\"TimeStep Specs:\", tf_env.time_step_spec())\nprint(\"Action Specs:\", tf_env.action_spec())", "_____no_output_____" ] ], [ [ "Note the specs are now of type: `(Bounded)TensorSpec`.", "_____no_output_____" ], [ "### Usage Examples", "_____no_output_____" ], [ "#### Simple Example", "_____no_output_____" ] ], [ [ "env = suite_gym.load('CartPole-v0')\n\ntf_env = tf_py_environment.TFPyEnvironment(env)\n# reset() creates the initial time_step after resetting the environment.\ntime_step = tf_env.reset()\nnum_steps = 3\ntransitions = []\nreward = 0\nfor i in range(num_steps):\n action = tf.constant([i % 2])\n # applies the action and returns the new TimeStep.\n next_time_step = tf_env.step(action)\n transitions.append([time_step, action, next_time_step])\n reward += next_time_step.reward\n time_step = next_time_step\n\nnp_transitions = tf.nest.map_structure(lambda x: x.numpy(), transitions)\nprint('\\n'.join(map(str, np_transitions)))\nprint('Total reward:', reward.numpy())", "_____no_output_____" ] ], [ [ "#### Whole Episodes", "_____no_output_____" ] ], [ [ "env = suite_gym.load('CartPole-v0')\ntf_env = tf_py_environment.TFPyEnvironment(env)\n\ntime_step = tf_env.reset()\nrewards = []\nsteps = []\nnum_episodes = 5\n\nfor _ in range(num_episodes):\n episode_reward = 0\n episode_steps = 0\n while not time_step.is_last():\n action = tf.random.uniform([1], 0, 2, dtype=tf.int32)\n time_step = tf_env.step(action)\n episode_steps += 1\n episode_reward += time_step.reward.numpy()\n rewards.append(episode_reward)\n steps.append(episode_steps)\n time_step = tf_env.reset()\n\nnum_steps = np.sum(steps)\navg_length = np.mean(steps)\navg_reward = np.mean(rewards)\n\nprint('num_episodes:', num_episodes, 'num_steps:', num_steps)\nprint('avg_length', avg_length, 'avg_reward:', avg_reward)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d0216fd701d7db487674b85843bac74588acde0e
33,197
ipynb
Jupyter Notebook
lectures/02-functions.ipynb
sir-rois/mipt-python
da5f1861e17a31da4a2930a423f4dc0ce434bef0
[ "MIT" ]
40
2020-02-27T10:18:37.000Z
2022-03-02T21:16:27.000Z
lectures/02-functions.ipynb
sir-rois/mipt-python
da5f1861e17a31da4a2930a423f4dc0ce434bef0
[ "MIT" ]
null
null
null
lectures/02-functions.ipynb
sir-rois/mipt-python
da5f1861e17a31da4a2930a423f4dc0ce434bef0
[ "MIT" ]
37
2020-03-05T09:24:28.000Z
2022-03-03T13:10:48.000Z
19.701484
852
0.460313
[ [ [ "# `Практикум по программированию на языке Python`\n<br>\n\n## `Занятие 2: Пользовательские и встроенные функции, итераторы и генераторы`\n<br><br>\n\n### `Мурат Апишев (mel-lain@yandex.ru)`\n\n#### `Москва, 2021`", "_____no_output_____" ], [ "### `Функции range и enumerate`", "_____no_output_____" ] ], [ [ "r = range(2, 10, 3)\nprint(type(r))\n\nfor e in r:\n print(e, end=' ')", "<class 'range'>\n2 5 8 " ], [ "for index, element in enumerate(list('abcdef')):\n print(index, element, end=' ')", "0 a 1 b 2 c 3 d 4 e 5 f " ] ], [ [ "### `Функция zip`", "_____no_output_____" ] ], [ [ "z = zip([1, 2, 3], 'abc')\nprint(type(z))\n\nfor a, b in z:\n print(a, b, end=' ')", "<class 'zip'>\n1 a 2 b 3 c " ], [ "for e in zip('abcdef', 'abc'):\n print(e)", "('a', 'a')\n('b', 'b')\n('c', 'c')\n" ], [ "for a, b, c, d in zip('abc', [1,2,3], [True, False, None], 'xyz'):\n print(a, b, c, d)", "a 1 True x\nb 2 False y\nc 3 None z\n" ] ], [ [ "### `Определение собственных функций`", "_____no_output_____" ] ], [ [ "def function(arg_1, arg_2=None):\n print(arg_1, arg_2)\n\nfunction(10)\nfunction(10, 20)", "10 None\n10 20\n" ] ], [ [ "Функция - это тоже объект, её имя - просто символическая ссылка:", "_____no_output_____" ] ], [ [ "f = function\nf(10)\n\nprint(function is f)", "10 None\nTrue\n" ] ], [ [ "### `Определение собственных функций`", "_____no_output_____" ] ], [ [ "retval = f(10)\nprint(retval)", "10 None\nNone\n" ], [ "def factorial(n):\n return n * factorial(n - 1) if n > 1 else 1 # recursion\n\nprint(factorial(1))\nprint(factorial(2))\nprint(factorial(4))", "1\n2\n24\n" ] ], [ [ "### `Передача аргументов в функцию`\n\nПараметры в Python всегда передаются по ссылке", "_____no_output_____" ] ], [ [ "def function(scalar, lst):\n scalar += 10\n print(f'Scalar in function: {scalar}')\n\n lst.append(None)\n print(f'Scalar in function: {lst}')", "_____no_output_____" ], [ "s, l = 5, []\nfunction(s, l)\n\nprint(s, l)", "Scalar in function: 15\nScalar in function: [None]\n5 [None]\n" ] ], [ [ "### `Передача аргументов в функцию`", "_____no_output_____" ] ], [ [ "def f(a, *args):\n print(type(args))\n print([v for v in [a] + list(args)])\n \nf(10, 2, 6, 8)", "<class 'tuple'>\n[10, 2, 6, 8]\n" ], [ "def f(*args, a):\n print([v for v in [a] + list(args)])\n print()\n\nf(2, 6, 8, a=10)", "[10, 2, 6, 8]\n\n" ], [ "def f(a, *args, **kw):\n print(type(kw))\n print([v for v in [a] + list(args) + [(k, v) for k, v in kw.items()]])\n\nf(2, *(6, 8), **{'arg1': 1, 'arg2': 2})", "<class 'dict'>\n[2, 6, 8, ('arg1', 1), ('arg2', 2)]\n" ] ], [ [ "### `Области видимости переменных`\n\nВ Python есть 4 основных уровня видимости:\n\n- Встроенная (buildins) - на этом уровне находятся все встроенные объекты (функции, классы исключений и т.п.)<br><br>\n- Глобальная в рамках модуля (global) - всё, что определяется в коде модуля на верхнем уровне<br><br>\n- Объемлюшей функции (enclosed) - всё, что определено в функции верхнего уровня<br><br>\n- Локальной функции (local) - всё, что определено в функции нижнего уровня\n\n<br><br>\nЕсть ещё области видимости переменных циклов, списковых включений и т.п.", "_____no_output_____" ], [ "### `Правило разрешения области видимости LEGB при чтении`", "_____no_output_____" ] ], [ [ "def outer_func(x):\n def inner_func(x):\n return len(x)\n return inner_func(x)", "_____no_output_____" ], [ "print(outer_func([1, 2]))", "2\n" ] ], [ [ "Кто определил имя `len`?\n\n- на уровне вложенной функции такого имени нет, смотрим выше\n- на уровне объемлющей функции такого имени нет, смотрим выше\n- на 
уровне модуля такого имени нет, смотрим выше\n- на уровне builtins такое имя есть, используем его", "_____no_output_____" ], [ "### `На builtins можно посмотреть`", "_____no_output_____" ] ], [ [ "import builtins\n\ncounter = 0\nlst = []\nfor name in dir(builtins):\n if name[0].islower():\n lst.append(name)\n counter += 1\n \n if counter == 5:\n break\n\nlst", "_____no_output_____" ] ], [ [ "Кстати, то же самое можно сделать более pythonic кодом:", "_____no_output_____" ] ], [ [ "list(filter(lambda x: x[0].islower(), dir(builtins)))[: 5]", "_____no_output_____" ] ], [ [ "### `Локальные и глобальные переменные`", "_____no_output_____" ] ], [ [ "x = 2\ndef func():\n print('Inside: ', x) # read\n \nfunc()\nprint('Outside: ', x)", "Inside: 2\nOutside: 2\n" ], [ "x = 2\ndef func():\n x += 1 # write\n print('Inside: ', x)\n \nfunc() # UnboundLocalError: local variable 'x' referenced before assignment\nprint('Outside: ', x)", "_____no_output_____" ], [ "x = 2\ndef func():\n x = 3\n x += 1\n print('Inside: ', x)\n \nfunc()\nprint('Outside: ', x)", "Inside: 4\nOutside: 2\n" ] ], [ [ "### `Ключевое слово global`", "_____no_output_____" ] ], [ [ "x = 2\ndef func():\n global x\n x += 1 # write\n print('Inside: ', x)\n \nfunc()\nprint('Outside: ', x)", "Inside: 3\nOutside: 3\n" ], [ "x = 2\ndef func(x):\n x += 1\n print('Inside: ', x)\n return x\n \nx = func(x)\nprint('Outside: ', x)", "Inside: 3\nOutside: 3\n" ] ], [ [ "### `Ключевое слово nonlocal`", "_____no_output_____" ] ], [ [ "a = 0\ndef out_func():\n b = 10\n def mid_func():\n c = 20\n def in_func():\n global a\n a += 100\n \n nonlocal c\n c += 100\n \n nonlocal b\n b += 100\n\n print(a, b, c)\n \n in_func()\n mid_func()\n\nout_func()", "100 110 120\n" ] ], [ [ "__Главный вывод:__ не надо злоупотреблять побочными эффектами при работе с переменными верхних уровней", "_____no_output_____" ], [ "### `Пример вложенных функций: замыкания`\n\n- В большинстве случаев вложенные функции не нужны, плоская иерархия будет и проще, и понятнее\n- Одно из исключений - фабричные функции (замыкания)", "_____no_output_____" ] ], [ [ "def function_creator(n):\n def function(x):\n return x ** n\n\n return function\n\nf = function_creator(5)\nf(2)", "_____no_output_____" ] ], [ [ "Объект-функция, на который ссылается `f`, хранит в себе значение `n`", "_____no_output_____" ], [ "### `Анонимные функции`\n\n- `def` - не единственный способ объявления функции\n- `lambda` создаёт анонимную (lambda) функцию\n\n\nТакие функции часто используются там, где синтаксически нельзя записать определение через `def`", "_____no_output_____" ] ], [ [ "def func(x): return x ** 2\nfunc(6)", "_____no_output_____" ], [ "lambda_func = lambda x: x ** 2 # should be an expression\nlambda_func(6)", "_____no_output_____" ], [ "def func(x): print(x)\nfunc(6)", "6\n" ], [ "lambda_func = lambda x: print(x ** 2) # as print is function in Python 3.*\nlambda_func(6)", "36\n" ] ], [ [ "### `Встроенная функция sorted`", "_____no_output_____" ] ], [ [ "lst = [5, 2, 7, -9, -1]", "_____no_output_____" ], [ "def abs_comparator(x):\n return abs(x)\n\nprint(sorted(lst, key=abs_comparator))", "[-1, 2, 5, 7, -9]\n" ], [ "sorted(lst, key=lambda x: abs(x))", "_____no_output_____" ], [ "sorted(lst, key=lambda x: abs(x), reverse=True)", "_____no_output_____" ] ], [ [ "\n### `Встроенная функция filter`", "_____no_output_____" ] ], [ [ "lst = [5, 2, 7, -9, -1]", "_____no_output_____" ], [ "f = filter(lambda x: x < 0, lst) # True condition\ntype(f) # iterator", "_____no_output_____" ], [ "list(f)", 
"_____no_output_____" ] ], [ [ "### `Встроенная функция map`", "_____no_output_____" ] ], [ [ "lst = [5, 2, 7, -9, -1]", "_____no_output_____" ], [ "m = map(lambda x: abs(x), lst)\ntype(m) # iterator", "_____no_output_____" ], [ "list(m)", "_____no_output_____" ] ], [ [ "### `Ещё раз сравним два подхода`\n\nНапишем функцию скалярного произведения в императивном и функциональном стилях:", "_____no_output_____" ] ], [ [ "def dot_product_imp(v, w):\n result = 0\n for i in range(len(v)):\n result += v[i] * w[i]\n return result", "_____no_output_____" ], [ "dot_product_func = lambda v, w: sum(map(lambda x: x[0] * x[1], zip(v, w)))", "_____no_output_____" ], [ "print(dot_product_imp([1, 2, 3], [4, 5, 6]))\nprint(dot_product_func([1, 2, 3], [4, 5, 6]))", "32\n32\n" ] ], [ [ "### `Функция reduce`\n\n`functools` - стандартный модуль с другими функциями высшего порядка.\n\nРассмотрим пока только функцию `reduce`:", "_____no_output_____" ] ], [ [ "from functools import reduce\n\nlst = list(range(1, 10))\n\nreduce(lambda x, y: x * y, lst)", "_____no_output_____" ] ], [ [ "### `Итерирование, функции iter и next`", "_____no_output_____" ] ], [ [ "r = range(3)\n\nfor e in r:\n print(e)", "0\n1\n2\n" ], [ "it = iter(r) # r.__iter__() - gives us an iterator\n\nprint(next(it))\nprint(it.__next__())\nprint(next(it))\nprint(next(it))", "0\n1\n2\n" ] ], [ [ "### `Итераторы часто используются неявно`\n\nКак выглядит для нас цикл `for`:", "_____no_output_____" ] ], [ [ "for i in 'seq':\n print(i)", "s\ne\nq\n" ] ], [ [ "Как он работает на самом деле:", "_____no_output_____" ] ], [ [ "iterator = iter('seq')\nwhile True:\n try:\n i = next(iterator)\n print(i)\n except StopIteration:\n break", "s\ne\nq\n" ] ], [ [ "### `Генераторы`\n\n- Генераторы, как и итераторы, предназначены для итерирования по коллекции, но устроены несколько иначе\n- Они определяются с помощью функций с оператором `yield` или генераторов списков, а не вызовов `iter()` и `next()`\n- В генераторе есть внутреннее изменяемое состояние в виде локальных переменных, которое он хранит автоматически\n- Генератор - более простой способ создания собственного итератора, чем его прямое определение\n- Все генераторы являются итераторами, но не наоборот<br><br>", "_____no_output_____" ], [ "\n- Примеры функций-генераторов:\n - `zip`\n - `enumerate`\n - `reversed`\n - `map`\n - `filter`", "_____no_output_____" ], [ "### `Ключевое слово yield`\n\n- `yield` - это слово, по смыслу похожее на `return`<br><br>\n- Но используется в функциях, возвращающих генераторы<br><br>\n- При вызове такой функции тело не выполняется, функция только возвращает генератор<br><br>\n- В первых запуск функция будет выполняться от начала и до `yield`<br><br>\n- После выхода состояние функции сохраняется<br><br>\n- На следующий вызов будет проводиться итерация цикла и возвращаться следующее значение<br><br>\n- И так далее, пока не кончится цикл каждого `yield` в теле функции<br><br>\n- После этого генератор станет пустым", "_____no_output_____" ], [ "### `Пример генератора`", "_____no_output_____" ] ], [ [ "def my_range(n):\n yield 'You really want to run this generator?'\n\n i = -1\n while i < n:\n i += 1\n yield i", "_____no_output_____" ], [ "gen = my_range(3)\nwhile True:\n try:\n print(next(gen), end=' ')\n except StopIteration: # we want to catch this type of exceptions\n break", "You really want to run this generator? 0 1 2 3 " ], [ "for e in my_range(3):\n print(e, end=' ')", "You really want to run this generator? 
0 1 2 3 " ] ], [ [ "### `Особенность range`\n\n`range` не является генератором, хотя и похож, поскольку не хранит всю последовательность", "_____no_output_____" ] ], [ [ "print('__next__' in dir(zip([], [])))\nprint('__next__' in dir(range(3)))", "True\nFalse\n" ] ], [ [ "Полезные особенности:\n- объекты `range` неизменяемые (могут быть ключами словаря)\n- имеют полезные атрибуты (`len`, `index`, `__getitem__`)\n- по ним можно итерироваться многократно", "_____no_output_____" ], [ "### `Модуль itetools`\n\n- Модуль представляет собой набор инструментов для работы с итераторами и последовательностями<br><br>\n- Содержит три основных типа итераторов:<br><br>\n - бесконечные итераторы\n - конечные итераторы\n - комбинаторные итераторы<br><br>\n\n- Позволяет эффективно решать небольшие задачи вида:<br><br>\n - итерирование по бесконечному потоку\n - слияние в один список вложенных списков\n - генерация комбинаторного перебора сочетаний элементов последовательности\n - аккумуляция и агрегация данных внутри последовательности", "_____no_output_____" ], [ "### `Модуль itetools: примеры`", "_____no_output_____" ] ], [ [ "from itertools import count\n\nfor i in count(start=0):\n print(i, end=' ')\n if i == 5:\n break", "0 1 2 3 4 5 " ], [ "from itertools import cycle\n \ncount = 0\nfor item in cycle('XYZ'):\n if count > 4:\n break\n print(item, end=' ')\n count += 1", "X Y Z X Y " ] ], [ [ "### `Модуль itetools: примеры`", "_____no_output_____" ] ], [ [ "from itertools import accumulate\n\nfor i in accumulate(range(1, 5), lambda x, y: x * y):\n print(i)", "1\n2\n6\n24\n" ], [ "from itertools import chain\n\nfor i in chain([1, 2], [3], [4]):\n print(i)", "1\n2\n3\n4\n" ] ], [ [ "### `Модуль itetools: примеры`", "_____no_output_____" ] ], [ [ "from itertools import groupby\n \nvehicles = [('Ford', 'Taurus'), ('Dodge', 'Durango'),\n ('Chevrolet', 'Cobalt'), ('Ford', 'F150'),\n ('Dodge', 'Charger'), ('Ford', 'GT')]\n \nsorted_vehicles = sorted(vehicles)\n \nfor key, group in groupby(sorted_vehicles, lambda x: x[0]):\n for maker, model in group:\n print('{model} is made by {maker}'.format(model=model, maker=maker))\n \n print (\"**** END OF THE GROUP ***\\n\")", "Cobalt is made by Chevrolet\n**** END OF THE GROUP ***\n\nCharger is made by Dodge\nDurango is made by Dodge\n**** END OF THE GROUP ***\n\nF150 is made by Ford\nGT is made by Ford\nTaurus is made by Ford\n**** END OF THE GROUP ***\n\n" ] ], [ [ "## `Спасибо за внимание!`", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d0217dde9a568b00e7a75b98db70934e40da47e5
1,634
ipynb
Jupyter Notebook
notebooks/bestoffer_travelrepublic.ipynb
lordoftheflies/gargantula-scrapersite
0abcb82bf30540ac5cd57d5ec9178e692a1a2ca6
[ "Apache-2.0" ]
null
null
null
notebooks/bestoffer_travelrepublic.ipynb
lordoftheflies/gargantula-scrapersite
0abcb82bf30540ac5cd57d5ec9178e692a1a2ca6
[ "Apache-2.0" ]
null
null
null
notebooks/bestoffer_travelrepublic.ipynb
lordoftheflies/gargantula-scrapersite
0abcb82bf30540ac5cd57d5ec9178e692a1a2ca6
[ "Apache-2.0" ]
null
null
null
22.383562
97
0.511628
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
d02182b56fc7b86aa031f44046e3fb6e7ae4aeaa
137,027
ipynb
Jupyter Notebook
jupyter/Chapter05/coherent_detector.ipynb
miltondsantos/software
d27375257b0260cad901837612fbca0174134229
[ "Apache-2.0" ]
1
2020-12-18T04:07:54.000Z
2020-12-18T04:07:54.000Z
jupyter/Chapter05/coherent_detector.ipynb
miltondsantos/software
d27375257b0260cad901837612fbca0174134229
[ "Apache-2.0" ]
null
null
null
jupyter/Chapter05/coherent_detector.ipynb
miltondsantos/software
d27375257b0260cad901837612fbca0174134229
[ "Apache-2.0" ]
null
null
null
539.476378
131,640
0.950499
[ [ [ "# ***Introduction to Radar Using Python and MATLAB***\n## Andy Harrison - Copyright (C) 2019 Artech House\n<br/>\n\n# Coherent Detector\n***", "_____no_output_____" ], [ "The in-phase and quadrature signal components from a coherent detector may be written as (Equation 5.13)\n\n$$\n x(t) = a(t) \\cos(2\\pi f_0 t) \\cos(\\phi(t)) - a(t) \\sin(2 \\pi f_0 t) \\sin(\\phi(t))\n = X_I(t) \\cos(2 \\pi f_0 t) - X_Q \\sin(2 \\pi f_0 t)\n$$\n***", "_____no_output_____" ], [ "Begin by setting the library path", "_____no_output_____" ] ], [ [ "import lib_path", "_____no_output_____" ] ], [ [ "Set the sampling frequency (Hz), the start frequency (Hz), the end frequency (Hz), the amplitude modulation frequency (Hz) and amplitude (relative) for the sample signal", "_____no_output_____" ] ], [ [ "sampling_frequency = 100\n\nstart_frequency = 4\n\nend_frequency = 25\n\nam_amplitude = 0.1\n\nam_frequency = 9", "_____no_output_____" ] ], [ [ "Calculate the bandwidth (Hz) and center frequency (Hz)", "_____no_output_____" ] ], [ [ "bandwidth = end_frequency - start_frequency\n\ncenter_frequency = 0.5 * bandwidth + start_frequency", "_____no_output_____" ] ], [ [ "Set up the waveform", "_____no_output_____" ] ], [ [ "from numpy import arange, sin\n\nfrom scipy.constants import pi\n\nfrom scipy.signal import chirp\n\ntime = arange(sampling_frequency) / sampling_frequency\n\nif_signal = chirp(time, start_frequency, time[-1], end_frequency)\n\nif_signal *= (1.0 + am_amplitude * sin(2.0 * pi * am_frequency * time))", "_____no_output_____" ] ], [ [ "Set up the keyword args", "_____no_output_____" ] ], [ [ "kwargs = {'if_signal': if_signal,\n \n 'center_frequency': center_frequency,\n\n 'bandwidth': bandwidth,\n\n 'sample_frequency': sampling_frequency,\n\n 'time': time}", "_____no_output_____" ] ], [ [ "Calculate the baseband in-phase and quadrature signals", "_____no_output_____" ] ], [ [ "from Libs.receivers import coherent_detector\n\ni_signal, q_signal = coherent_detector.iq(**kwargs)", "_____no_output_____" ] ], [ [ "Use the `matplotlib` routines to display the results", "_____no_output_____" ] ], [ [ "from matplotlib import pyplot as plt\n\nfrom numpy import real, imag\n\n\n# Set the figure size\n\nplt.rcParams[\"figure.figsize\"] = (15, 10)\n\n\n# Display the results\n\nplt.plot(time, real(i_signal), '', label='In Phase')\n\nplt.plot(time, real(q_signal), '-.', label='Quadrature')\n\n\n# Set the plot title and labels\n\nplt.title('Coherent Detector', size=14)\n\nplt.xlabel('Time (s)', size=12)\n\nplt.ylabel('Amplitude (V)', size=12)\n\n\n# Set the tick label size\n\nplt.tick_params(labelsize=12)\n\n\n# Turn on the grid\n\nplt.grid(linestyle=':', linewidth=0.5)\n\n\n# Show the legend\n\nplt.legend(loc='upper right', prop={'size': 10})", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d0218fda8119db12c97f9d6ca4517cc899013c78
24,082
ipynb
Jupyter Notebook
.ipynb_checkpoints/Visualizing and Analyzing Jigsaw-checkpoint.ipynb
dudaspm/LDA_Bias_Data
ffbabb5765a878bf49bac68baaa083342243a616
[ "BSD-3-Clause" ]
null
null
null
.ipynb_checkpoints/Visualizing and Analyzing Jigsaw-checkpoint.ipynb
dudaspm/LDA_Bias_Data
ffbabb5765a878bf49bac68baaa083342243a616
[ "BSD-3-Clause" ]
null
null
null
.ipynb_checkpoints/Visualizing and Analyzing Jigsaw-checkpoint.ipynb
dudaspm/LDA_Bias_Data
ffbabb5765a878bf49bac68baaa083342243a616
[ "BSD-3-Clause" ]
null
null
null
41.592401
843
0.541193
[ [ [ "# Visualizing and Analyzing Jigsaw", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport re\nimport numpy as np", "_____no_output_____" ] ], [ [ "In the previous section, we explored how to generate topics from a textual dataset using LDA. But how can this be used as an application? \n\nTherefore, in this section, we will look into the possible ways to read the topics as well as understand how it can be used.", "_____no_output_____" ], [ "We will now import the preloaded data of the LDA result that was achieved in the previous section. ", "_____no_output_____" ] ], [ [ "df = pd.read_csv(\"https://raw.githubusercontent.com/dudaspm/LDA_Bias_Data/main/topics.csv\")", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "We will visualize these results to understand what major themes are present in them. ", "_____no_output_____" ] ], [ [ "%%html\n\n<iframe src='https://flo.uri.sh/story/941631/embed' title='Interactive or visual content' class='flourish-embed-iframe' frameborder='0' scrolling='no' style='width:100%;height:600px;' sandbox='allow-same-origin allow-forms allow-scripts allow-downloads allow-popups allow-popups-to-escape-sandbox allow-top-navigation-by-user-activation'></iframe><div style='width:100%!;margin-top:4px!important;text-align:right!important;'><a class='flourish-credit' href='https://public.flourish.studio/story/941631/?utm_source=embed&utm_campaign=story/941631' target='_top' style='text-decoration:none!important'><img alt='Made with Flourish' src='https://public.flourish.studio/resources/made_with_flourish.svg' style='width:105px!important;height:16px!important;border:none!important;margin:0!important;'> </a></div>", "_____no_output_____" ] ], [ [ "### An Overview of the analysis \nFrom the above visualization, an anomaly that we come across is that the dataset we are examining is supposed to be related to people with physical, mental and learning disability. But unfortunately based on the topics that were extracted, we notice just a small subset of words that are related to this topic. \nTopic 2 have words that addresses themes related to what we were expecting the dataset to have. But the major theme that was noticed in the Top 5 topics are mainly terms that are political. \n(The Top 10 topics show themes related to Religion as well, which is quite interesting.)\nLDA hence helped in understanding what the conversations the dataset consisted. ", "_____no_output_____" ], [ "From the word collection, we also notice that there were certain words such as \\'kill' that can be categorized as \\'Toxic'\\. To analyse this more, we can classify each word based on the fact that it can be categorized wi by an NLP classifier. 
", "_____no_output_____" ], [ "To demonstrate an example of a toxic analysis framework, the below code shows the working of the Unitary library in python.{cite}`Detoxify`\n\nThis library provides a toxicity score (from a scale of 0 to 1) for the sentece that is passed through it.", "_____no_output_____" ] ], [ [ "headers = {\"Authorization\": f\"Bearer api_ZtUEFtMRVhSLdyTNrRAmpxXgMAxZJpKLQb\"}", "_____no_output_____" ] ], [ [ "To get access to this software, you will need to get an API KEY at https://huggingface.co/unitary/toxic-bert\nHere is an example of what this would look like.\n```python\nheaders = {\"Authorization\": f\"Bearer api_XXXXXXXXXXXXXXXXXXXXXXXXXXX\"}\n```", "_____no_output_____" ] ], [ [ "import requests\n\nAPI_URL = \"https://api-inference.huggingface.co/models/unitary/toxic-bert\"\n\ndef query(payload):\n response = requests.post(API_URL, headers=headers, json=payload)\n return response.json()", "_____no_output_____" ], [ "query({\"inputs\": \"addict\"})", "_____no_output_____" ] ], [ [ "You can input words or sentences in \\<insert word here>, in the code, to look at the results that are generated through this.\n\nThis example can provide an idea as to how ML can be used for toxicity analysis.", "_____no_output_____" ] ], [ [ "query({\"inputs\": \"<insert word here>\"})", "_____no_output_____" ], [ "%%html\n\n<iframe src='https://flo.uri.sh/story/941681/embed' title='Interactive or visual content' class='flourish-embed-iframe' frameborder='0' scrolling='no' style='width:100%;height:600px;' sandbox='allow-same-origin allow-forms allow-scripts allow-downloads allow-popups allow-popups-to-escape-sandbox allow-top-navigation-by-user-activation'></iframe><div style='width:100%!;margin-top:4px!important;text-align:right!important;'><a class='flourish-credit' href='https://public.flourish.studio/story/941681/?utm_source=embed&utm_campaign=story/941681' target='_top' style='text-decoration:none!important'><img alt='Made with Flourish' src='https://public.flourish.studio/resources/made_with_flourish.svg' style='width:105px!important;height:16px!important;border:none!important;margin:0!important;'> </a></div>", "_____no_output_____" ] ], [ [ "#### The Bias\nThe visualization shows how contextually toxic words are derived as important words within various topics related to this dataset. This can lead to any Natural Language Processing kernel learning this dataset to provide skewed analysis for the population in consideration, i.e. people with mental, physical and learning disability. This can lead to very discriminatory classifications. ", "_____no_output_____" ], [ "##### An Example\nTo illustrate the impact better, we will be taking the most associated words to the word 'mental' from the results. Below is a network graph that shows the commonly associated words. It is seen that words such as 'Kill' and 'Gun' appear with the closest association. This can lead to the machine contextualizing the word 'mental' to be associated with such words. 
", "_____no_output_____" ] ], [ [ "%%html\n<iframe src='https://flo.uri.sh/visualisation/6867000/embed' title='Interactive or visual content' class='flourish-embed-iframe' frameborder='0' scrolling='no' style='width:100%;height:600px;' sandbox='allow-same-origin allow-forms allow-scripts allow-downloads allow-popups allow-popups-to-escape-sandbox allow-top-navigation-by-user-activation'></iframe><div style='width:100%!;margin-top:4px!important;text-align:right!important;'><a class='flourish-credit' href='https://public.flourish.studio/visualisation/6867000/?utm_source=embed&utm_campaign=visualisation/6867000' target='_top' style='text-decoration:none!important'><img alt='Made with Flourish' src='https://public.flourish.studio/resources/made_with_flourish.svg' style='width:105px!important;height:16px!important;border:none!important;margin:0!important;'> </a></div>", "_____no_output_____" ] ], [ [ "It is hence important to be aware of the dataset that is being used to analyse a specific population. With LDA, we were able to understand that this dataset cannot be used as a good representation of the disabled community. To bring about a movement of unbiased AI, we need to perform such preliminary analysis and more, to not cause unintended descrimination. ", "_____no_output_____" ], [ "## The Dashboard\n\nBelow is the complete data visaulization dashboard of the topic analysis. Feel feel to experiment and compare various labels to your liking. ", "_____no_output_____" ] ], [ [ "%%html\n\n<iframe src='https://flo.uri.sh/visualisation/6856937/embed' title='Interactive or visual content' class='flourish-embed-iframe' frameborder='0' scrolling='no' style='width:100%;height:600px;' sandbox='allow-same-origin allow-forms allow-scripts allow-downloads allow-popups allow-popups-to-escape-sandbox allow-top-navigation-by-user-activation'></iframe><div style='width:100%!;margin-top:4px!important;text-align:right!important;'><a class='flourish-credit' href='https://public.flourish.studio/visualisation/6856937/?utm_source=embed&utm_campaign=visualisation/6856937' target='_top' style='text-decoration:none!important'><img alt='Made with Flourish' src='https://public.flourish.studio/resources/made_with_flourish.svg' style='width:105px!important;height:16px!important;border:none!important;margin:0!important;'> </a></div>", "_____no_output_____" ] ], [ [ "## Thank you!\n\nWe thank you for your time! ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
d021952bba7c48e92d0cd692e169507f0fe57ce6
855,102
ipynb
Jupyter Notebook
figure_data/Make Plots.ipynb
willwx/XDream
ee7022a35e94f00d08fdb1e49ca784fc497740c0
[ "MIT" ]
38
2019-04-19T16:37:37.000Z
2022-02-15T21:42:24.000Z
figure_data/Make Plots.ipynb
willwx/XDream
ee7022a35e94f00d08fdb1e49ca784fc497740c0
[ "MIT" ]
null
null
null
figure_data/Make Plots.ipynb
willwx/XDream
ee7022a35e94f00d08fdb1e49ca784fc497740c0
[ "MIT" ]
12
2019-05-01T20:29:26.000Z
2021-04-30T07:49:25.000Z
412.893288
121,312
0.922683
[ [ [ "%pylab inline\nimport re\nfrom pathlib import Path\nimport pandas as pd\nimport seaborn as sns", "Populating the interactive namespace from numpy and matplotlib\n" ], [ "datdir = Path('data')\nfigdir = Path('plots')\nfigdir.mkdir(exist_ok=True)", "_____no_output_____" ], [ "mpl.rcParams.update({'figure.figsize': (2.5,1.75), 'figure.dpi': 300,\n 'axes.spines.right': False, 'axes.spines.top': False,\n 'axes.titlesize': 10, 'axes.labelsize': 10,\n 'legend.fontsize': 10, 'legend.title_fontsize': 10,\n 'xtick.labelsize': 8, 'ytick.labelsize': 8,\n 'font.family': 'sans-serif', 'font.sans-serif': ['Arial'],\n 'svg.fonttype': 'none', 'lines.solid_capstyle': 'round'})", "_____no_output_____" ] ], [ [ "# Figure 1 - Overview", "_____no_output_____" ] ], [ [ "df = pd.read_csv(datdir / 'fig_1.csv')\nscores = df[list(map(str, range(20)))].values\nselected = ~np.isnan(df['Selected'].values)", "_____no_output_____" ], [ "gens_sel = np.nonzero(selected)[0]\nscores_sel = np.array([np.max(scores[g]) for g in gens_sel])\nims_sel = [plt.imread(str(datdir / 'images' / 'overview' / f'gen{gen:03d}.png'))\n for gen in gens_sel]\nims_sel = np.array(ims_sel)\nprint('gens to visualize:', gens_sel)\nwith np.printoptions(precision=2, suppress=True):\n print('corresponding scores:', scores_sel)\nprint('ims_sel shape:', ims_sel.shape)", "gens to visualize: [ 0 5 10 17 26 41 73 112 201 281]\ncorresponding scores: [ 6.41 13.27 19.27 26.69 32.4 40.35 45.97 54.07 61.09 68.52]\nims_sel shape: (10, 256, 256, 4)\n" ], [ "c0 = array((255,92,0)) / 255 # highlight color\nfigure(figsize=(2.5, 0.8), dpi=150)\nplot(scores.mean(1))\n\nxlim(0, 500)\nylim(bottom=0)\nxticks((250,500))\nyticks((0,50))\ngca().set_xticks(np.nonzero(selected)[0], minor=True)\ngca().tick_params(axis='x', which='minor', colors=c0, width=1)\ntitle('CaffeNet layer fc8, unit 1')\nxlabel('Generation')\nylabel('Activation')\n\nsavefig(figdir / f'overview-evo_scores.png', dpi=300, bbox_inches='tight')\nsavefig(figdir / f'overview-evo_scores.svg', dpi=300, bbox_inches='tight')", "_____no_output_____" ], [ "def make_canvas(ims, nrows=None, ncols=None, margin=15, margin_colors=None):\n if margin_colors is not None:\n assert len(ims) == len(margin_colors)\n if ncols is None:\n assert nrows is not None\n ncols = int(np.ceil(len(ims) / nrows))\n else:\n nrows = int(np.ceil(len(ims) / ncols))\n im0 = ims.__iter__().__next__()\n imsize = im0.shape[0]\n size = imsize + margin\n w = margin + size * ncols\n h = margin + size * nrows\n canvas = np.ones((h, w, 3), dtype=im0.dtype)\n for i, im in enumerate(ims):\n ih = i // ncols\n iw = i % ncols\n if len(im.shape) > 2 and im.shape[-1] == 4:\n im = im[..., :3]\n if margin_colors is not None:\n canvas[size * ih:size * (ih + 1) + margin, size * iw:size * (iw + 1) + margin] = margin_colors[i]\n canvas[margin + size * ih:margin + size * ih + imsize, margin + size * iw:margin + size * iw + imsize] = im\n return canvas", "_____no_output_____" ], [ "scores_sel_max = scores_sel.max()\nmargin_colors = np.array([(s / scores_sel_max * c0) for s in scores_sel])\n\nfor i, im_idc in enumerate((slice(0,5), slice(5,None))):\n canvas = make_canvas(ims_sel[im_idc], nrows=1,\n margin_colors=margin_colors[im_idc])\n figure(dpi=150)\n imshow(canvas)\n \n # turn off axis decorators to make tight plot\n ax = gca()\n ax.tick_params(labelcolor='none', bottom=False, left=False, right=False)\n ax.set_frame_on(False)\n for sp in ax.spines.values():\n sp.set_visible(False)\n ax.xaxis.set_ticks([])\n ax.yaxis.set_ticks([])\n \n plt.imsave(figdir / 
f'overview-evo_ims_{i}.png', canvas)", "_____no_output_____" ] ], [ [ "# Define Custom Violinplot", "_____no_output_____" ] ], [ [ "def violinplot2(data=None, x=None, y=None, hue=None,\n palette=None, linewidth=1, orient=None,\n order=None, hue_order=None, x_disp=None,\n palette_per_violin=None, hline_at_1=True,\n legend_palette=None, legend_kwargs=None,\n width=0.7, control_width=0.8, control_y=None,\n hues_share_control=False,\n ax=None, **kwargs):\n \"\"\"\n width: width of a group of violins (\"hues\") as fraction of between-group distance\n contorl_width: width of a group of bars (control) as fraction of hue width\n \"\"\"\n if order is None:\n n_groups = len(set(data[x])) if orient != 'h' else len(set(data[y]))\n else:\n n_groups = len(order)\n extra_plot_handles = []\n if ax is None:\n ax = plt.gca()\n if orient == 'h':\n fill_between = ax.fill_betweenx\n plot = ax.vlines\n else:\n fill_between = ax.fill_between\n plot = ax.hlines\n\n ############ drawing ############\n if not isinstance(y, str) and hasattr(y, '__iter__'):\n ys = y\n else:\n ys = (y,)\n for y in ys:\n ax = sns.violinplot(data=data, x=x, y=y, hue=hue, ax=ax, \n palette=palette, linewidth=linewidth, orient=orient,\n width=width, order=order, hue_order=hue_order, **kwargs)\n if legend_kwargs is not None:\n lgnd = plt.legend(**legend_kwargs)\n else:\n lgnd = None\n \n if hline_at_1:\n hdl = plot(1, -0.45, n_groups-0.55, linestyle='--', linewidth=.75, zorder=-3)\n extra_plot_handles.append(hdl)\n ############ drawing ############\n \n ############ styling ############\n if orient != 'h':\n ax.xaxis.set_ticks_position('none')\n if x_disp is not None:\n ax.set_xticklabels(x_disp)\n \n # enlarge the circle for median\n median_marks = [o for o in ax.get_children() if isinstance(o, matplotlib.collections.PathCollection)]\n for o in median_marks:\n o.set_sizes([10,])\n\n # recolor the violins\n violins = np.array([o for o in ax.get_children() if isinstance(o, matplotlib.collections.PolyCollection)])\n violins = violins[np.argsort([int(v.get_label().replace('_collection','')) for v in violins])]\n for i, o in enumerate(violins):\n if palette_per_violin is not None:\n i %= len(palette_per_violin)\n c = palette_per_violin[i]\n if len(c) == 2:\n o.set_facecolor(c[0])\n o.set_edgecolor(c[1])\n else:\n o.set_facecolor(c)\n o.set_edgecolor('none')\n else:\n o.set_edgecolor('none')\n \n # recolor the legend patches\n if lgnd is not None:\n for v in (legend_palette, palette_per_violin, palette):\n if v is not None:\n legend_palette = v\n break\n if legend_palette is not None:\n for o, c in zip(lgnd.get_patches(), legend_palette):\n o.set_facecolor(c)\n o.set_edgecolor('none')\n ############ styling ############\n\n ############ control ############\n # done last to not interfere with coloring violins \n if control_y is not None:\n assert control_y in df.columns\n assert hue is not None and order is not None and hue_order is not None\n nhues = len(hue_order)\n vw = width # width per control (long)\n if not hues_share_control:\n vw /= nhues\n cw = vw * control_width # width per control (short)\n ctl_hdl = None\n for i, xval in enumerate(order):\n if not hues_share_control:\n for j, hval in enumerate(hue_order):\n df_ = df[(df[x] == xval) & (df[hue] == hval)]\n if not len(df_):\n continue\n lq, mq, uq = np.nanpercentile(df_[control_y].values, (25, 50, 75))\n xs_qtl = i + vw * (-nhues/2 + 1/2 + j) + cw/2 * np.array((-1,1))\n xs_med = i + vw * (-nhues/2 + j) + vw * np.array((0,1))\n ctl_hdl = fill_between(xs_qtl, lq, uq, 
color=(0.9,0.9,0.9), zorder=-2) # upper & lower quartiles\n plot(mq, *xs_med, color=(0.5,0.5,0.5), linewidth=1, zorder=-1) # median\n else:\n df_ = df[(df[x] == xval)]\n if not len(df_):\n continue\n lq, mq, uq = np.nanpercentile(df_[control_y].values, (25, 50, 75))\n xs_qtl = i + cw/2 * np.array((-1,1))\n xs_med = i + vw/2 * np.array((-1,1))\n ctl_hdl = fill_between(xs_qtl, lq, uq, color=(0.9,0.9,0.9), zorder=-2)\n plot(mq, *xs_med, color=(0.5,0.5,0.5), linewidth=1, zorder=-1)\n extra_plot_handles.append(ctl_hdl)\n ############ control ############\n \n return n_groups, ax, lgnd, extra_plot_handles\n\n\ndef default_ax_lims(ax, n_groups=None, orient=None):\n if orient == 'h':\n ax.set_xticks((0,1,2,3))\n ax.set_xlim(-0.25, 3.5)\n else:\n if n_groups is not None:\n ax.set_xlim(-0.65, n_groups-0.35)\n ax.set_yticks((0,1,2,3))\n ax.set_ylim(-0.25, 3.5)\n\n\ndef rotate_xticklabels(ax, rotation=10, pad=5):\n for i, tick in enumerate(ax.xaxis.get_major_ticks()):\n if tick.label.get_text() == 'none':\n tick.set_visible(False)\n tick.label.set(va='top', ha='center', rotation=rotation, rotation_mode='anchor')\n tick.set_pad(pad)", "_____no_output_____" ] ], [ [ "# Figure 3 - Compare Target Nets, Layers", "_____no_output_____" ] ], [ [ "df = pd.read_csv(datdir/'fig_2.csv')\ndf = df[~np.isnan(df['Rel_act'])] # remove invalid data\ndf.head()", "_____no_output_____" ], [ "nets = ('caffenet', 'resnet-152-v2', 'resnet-269-v2', 'inception-v3', 'inception-v4', 'inception-resnet-v2', 'placesCNN')\nlayers = {'caffenet': ('conv2', 'conv4', 'fc6', 'fc8'),\n 'resnet-152-v2': ('res15_eletwise', 'res25_eletwise', 'res35_eletwise', 'classifier'),\n 'resnet-269-v2': ('res25_eletwise', 'res45_eletwise', 'res60_eletwise', 'classifier'),\n 'inception-v3': ('pool2_3x3_s2', 'reduction_a_concat', 'reduction_b_concat', 'classifier'),\n 'inception-v4': ('inception_stem3', 'reduction_a_concat', 'reduction_b_concat', 'classifier'),\n 'inception-resnet-v2': ('stem_concat', 'reduction_a_concat', 'reduction_b_concat', 'classifier'),\n 'placesCNN': ('conv2', 'conv4', 'fc6', 'fc8')}\nget_layer_level = lambda r: ('Early', 'Middle', 'Late', 'Output')[layers[r[1]['Classifier']].index(r[1]['Layer'])]\ndf['Layer_level'] = list(map(get_layer_level, df.iterrows()))\n\nx_disp = ('CaffeNet', 'ResNet-152-v2', 'ResNet-269-v2', 'Inception-v3', 'Inception-v4', 'Inception-ResNet-v2', 'PlacesCNN')\npalette = get_cmap('Blues')(np.linspace(0.3,0.8,4))", "_____no_output_____" ], [ "fig = figure(figsize=(6.3,2.5), dpi=150)\nn_groups, ax, lgnd, hdls = violinplot2(\n data=df, x='Classifier', y='Rel_act', hue='Layer_level', cut=0,\n order=nets, hue_order=('Early', 'Middle', 'Late', 'Output'), x_disp=x_disp,\n legend_kwargs=dict(title='Evolved,\\ntarget layer', loc='upper left', bbox_to_anchor=(1,1.05)),\n palette_per_violin=palette, control_y='Rel_exp_max')\n\ndefault_ax_lims(ax, n_groups)\nrotate_xticklabels(ax)\nylabel('Relative activation')\nxlabel('Target architecture')\n\n# another legend\nlegend(handles=hdls, labels=['Overall', 'In 10k'], title='ImageNet max',\n loc='upper left', bbox_to_anchor=(1,0.4))\nax.add_artist(lgnd)\n\nsavefig(figdir / f'nets.png', dpi=300, bbox_inches='tight')\nsavefig(figdir / f'nets.svg', dpi=300, bbox_inches='tight')", "_____no_output_____" ] ], [ [ "# Figure 5 - Compare Generators", "_____no_output_____" ], [ "## Compare representation \"depth\"", "_____no_output_____" ] ], [ [ "df = pd.read_csv(datdir / 'fig_5-repr_depth.csv')\ndf = df[~np.isnan(df['Rel_act'])]\ndf['Classifier, layer'] = [', '.join(tuple(a)) for a 
in df[['Classifier', 'Layer']].values]\ndf.head()", "_____no_output_____" ], [ "nets = ('caffenet', 'inception-resnet-v2')\nlayers = {'caffenet': ('conv2', 'fc6', 'fc8'), \n 'inception-resnet-v2': ('classifier',)}\ngenerators = ('raw_pixel', 'deepsim-norm1', 'deepsim-norm2', 'deepsim-conv3',\n 'deepsim-conv4', 'deepsim-pool5', 'deepsim-fc6', 'deepsim-fc7', 'deepsim-fc8')\n\nxorder = ('caffenet, conv2', 'caffenet, fc6', 'caffenet, fc8', 'inception-resnet-v2, classifier')\nx_disp = ('CaffeNet, conv2', 'CaffeNet, fc6', 'CaffeNet, fc8', 'Inception-ResNet-v2,\\nclassifier')\nlbl_disp = ('Raw pixel',) + tuple(v.replace('deepsim', 'DeePSiM') for v in generators[1:])\npalette = ([[0.75, 0.75, 0.75]] + # raw pixel\n sns.husl_palette(len(generators)-1, h=0.05, l=0.65)) # deepsim 1--8", "_____no_output_____" ], [ "fig = figure(figsize=(5.6,2.4), dpi=150)\nn_groups, ax, lgnd, hdls = violinplot2(\n data=df, x='Classifier, layer', y='Rel_act', hue='Generator',\n cut=0, linewidth=.75, width=0.9, control_width=0.9,\n order=xorder, hue_order=generators, x_disp=x_disp, \n legend_kwargs=dict(title='Generator', loc='upper left', bbox_to_anchor=(1,1.05)),\n palette=palette, control_y='Rel_exp_max', hues_share_control=True)\n\ndefault_ax_lims(ax, n_groups)\nylabel('Relative activation')\nxlabel('Target layer')\n\n# change legend label text\nfor txt, lbl in zip(lgnd.get_texts(), lbl_disp):\n txt.set_text(lbl)\n\nsavefig(figdir / f'generators.png', dpi=300, bbox_inches='tight')\nsavefig(figdir / f'generators.svg', dpi=300, bbox_inches='tight')", "_____no_output_____" ] ], [ [ "## Compare training dataset", "_____no_output_____" ] ], [ [ "df = pd.read_csv(datdir / 'fig_5-training_set.csv')\ndf = df[~np.isnan(df['Rel_act'])]\ndf['Classifier, layer'] = [', '.join(tuple(a)) for a in df[['Classifier', 'Layer']].values]\ndf.head()", "_____no_output_____" ], [ "nets = ('caffenet', 'inception-resnet-v2')\ncs = ('caffenet', 'placesCNN', 'inception-resnet-v2')\nlayers = {c: ('conv2', 'conv4', 'fc6', 'fc8') for c in cs}\nlayers['inception-resnet-v2'] = ('classifier',)\ngs = ('deepsim-fc6', 'deepsim-fc6-places365')\ncls = ('caffenet, conv2', 'caffenet, conv4', 'caffenet, fc6', 'caffenet, fc8', 'inception-resnet-v2, classifier',\n 'placesCNN, conv2', 'placesCNN, conv4', 'placesCNN, fc6', 'placesCNN, fc8')\ncls_spaced = cls[:5] + ('none',) + cls[5:]\n\nx_disp = tuple(f'CaffeNet, {v}' for v in ('conv2', 'conv4', 'fc6', 'fc8')) + \\\n ('Inception-ResNet-v2,\\nclassifier', 'none') + \\\n tuple(f'PlacesCNN, {v}' for v in ('conv2', 'conv4', 'fc6', 'fc8'))\nlbl_disp = ('DeePSiM-fc6', 'DeePSiM-fc6-Places365')\npalette = [get_cmap(main_c)(np.linspace(0.3,0.8,4))\n for main_c in ('Blues', 'Oranges')]\npalette = list(np.array(palette).transpose(1,0,2).reshape(-1, 4))\npalette = palette + palette[-2:] + palette", "_____no_output_____" ], [ "fig = figure(figsize=(5.15,1.8), dpi=150)\nn_groups, ax, lgnd, hdls = violinplot2(\n data=df, x='Classifier, layer', y='Rel_act', hue='Generator',\n cut=0, split=True, inner='quartile',\n order=cls_spaced, hue_order=gs, x_disp=x_disp,\n legend_kwargs=dict(title='Generator', loc='upper left', bbox_to_anchor=(.97,1.05)),\n palette_per_violin=palette, legend_palette=palette[4:],\n control_y='Rel_exp_max', hues_share_control=True)\n\nrotate_xticklabels(ax, rotation=15, pad=10)\nylabel('Relative activation')\nxlabel('Target layer')\n\n# change legend label text\nfor txt, lbl in zip(lgnd.get_texts(), lbl_disp):\n txt.set_text(lbl)\n\nsavefig(figdir / f'generators2.png', dpi=300, 
bbox_inches='tight')\nsavefig(figdir / f'generators2.svg', dpi=300, bbox_inches='tight')", "_____no_output_____" ] ], [ [ "# Figure 4 - Compare Inits", "_____no_output_____" ] ], [ [ "layers = ('conv2', 'conv4', 'fc6', 'fc8')\nlayers_disp = tuple(v.capitalize() for v in layers)", "_____no_output_____" ] ], [ [ "## Rand inits, fraction change", "_____no_output_____" ] ], [ [ "df = pd.read_csv(datdir/'fig_4-rand_init.csv').set_index(['Layer', 'Unit', 'Init_seed'])\ndf = (df.drop(0, level='Init_seed') - df.xs(0, level='Init_seed')).mean(axis=0,level=('Layer','Unit'))\ndf = df.rename({'Rel_act': 'Fraction change'}, axis=1)\ndf = df.reset_index()\ndf.head()", "_____no_output_____" ], [ "palette = get_cmap('Blues')(np.linspace(0.2,0.9,6)[1:-1])", "_____no_output_____" ], [ "fig = figure(figsize=(1.75,1.5), dpi=150)\nn_groups, ax, lgnd, hdls = violinplot2(\n data=df, x='Layer', y='Fraction change',\n cut=0, width=0.9, palette=palette,\n order=layers, x_disp=layers_disp, hline_at_1=False)\n\nxlabel('Target CaffeNet layer')\nylim(-0.35, 0.35)\nyticks((-0.25,0,0.25))\nax.set_yticklabels([f'{t:.2f}' for t in (-0.25,0,0.25)])\nax.set_yticks(np.arange(-0.3,0.30,0.05), minor=True)\n\nsavefig(figdir / f'inits-change.png', dpi=300, bbox_inches='tight')\nsavefig(figdir / f'inits-change.svg', dpi=300, bbox_inches='tight')", "_____no_output_____" ] ], [ [ "## Rand inits, interpolation", "_____no_output_____" ] ], [ [ "df = pd.read_csv(datdir/'fig_4-rand_init_interp.csv').set_index(['Layer', 'Unit', 'Seed_i0', 'Seed_i1'])\ndf = df.mean(axis=0,level=('Layer','Unit'))\ndf2 = pd.read_csv(datdir/'fig_4-rand_init_interp-2.csv').set_index(['Layer', 'Unit']) # control conditions\ndf2_normed = df2.divide(df[['Rel_act_loc_0.0','Rel_act_loc_1.0']].mean(axis=1),axis=0)\ndf_normed = df.divide(df[['Rel_act_loc_0.0','Rel_act_loc_1.0']].mean(axis=1),axis=0)\ndf_normed.head()", "_____no_output_____" ], [ "fig, axs = subplots(1, 2, figsize=(3.5,1.5), dpi=150)\nsubplots_adjust(wspace=0.5)\n\ninterp_xs = np.array([float(i[i.rfind('_')+1:]) for i in df.columns])\nfor ax, df_ in zip(axs, (df, df_normed)):\n df_mean = df_.mean(axis=0, level='Layer')\n df_std = df_.std(axis=0, level='Layer')\n for l, ld, c in zip(layers, layers_disp, palette):\n m = df_mean.loc[l].values\n s = df_std.loc[l].values\n ax.plot(interp_xs, m, c=c, label=ld)\n ax.fill_between(interp_xs, m-s, m+s, fc=c, ec='none', alpha=0.1)\n\n# plot control\nxs2 = (interp_xs.min(), interp_xs.max())\naxs[0].hlines(1, *xs2, linestyle='--', linewidth=1)\nfor l, c in zip(layers, palette):\n # left subplot: relative activation\n df_ = df2.loc[l]\n mq = np.nanmedian(df_['Rel_ImNet_median_act'].values)\n axs[0].plot(xs2, (mq, mq), color=c, linewidth=1.15, zorder=-2)\n # right subplot: normalized to endpoints\n df_ = df2_normed.loc[l]\n for k, ls, lw in zip(('Rel_exp_max', 'Rel_ImNet_median_act'), ('--','-'), (1, 1.15)):\n mq = np.nanmedian(df_[k].values)\n axs[1].plot(xs2, (mq, mq), color=c, ls=ls, linewidth=lw, zorder=-2)\n \naxs[0].set_yticks((0, 1, 2))\naxs[1].set_yticks((0, 0.5, 1))\naxs[0].set_ylabel('Relative activation')\naxs[1].set_ylabel('Normalized activation')\nfor ax in axs:\n ax.set_xlabel('Interpolation location')\nlgnd = axs[-1].legend(loc='upper left', bbox_to_anchor=(1.05, 1.05))\nlegend(handles=[Line2D([0], [0], color='k', lw=1, ls='--', label='Max'),\n Line2D([0], [0], color='k', lw=1.15, label='Median')],\n title='ImageNet ref.',\n loc='upper left', bbox_to_anchor=(1.05,0.3))\nax.add_artist(lgnd)\n\nsavefig(figdir / f'inits-interp.png', dpi=300, 
bbox_inches='tight')\nsavefig(figdir / f'inits-interp.svg', dpi=300, bbox_inches='tight')", "_____no_output_____" ] ], [ [ "## Per-neuron inits", "_____no_output_____" ] ], [ [ "df = pd.read_csv(datdir/'fig_4-per_neuron_init.csv')\ndf.head()", "_____no_output_____" ], [ "hue_order = ('rand', 'none', 'worst_opt', 'mid_opt', 'best_opt',\n 'worst_ivt', 'mid_ivt', 'best_ivt')\npalette = [get_cmap(main_c)(np.linspace(0.3,0.8,4))\n for main_c in ('Blues', 'Greens', 'Purples')]\npalette = np.concatenate([[\n palette[0][i]] * 1 + [palette[1][i]] * 3 + [palette[2][i]] * 3\n for i in range(4)])\npalette = tuple(palette) + tuple(('none', c) for c in palette)", "_____no_output_____" ], [ "fig = figure(figsize=(6.3,2), dpi=150)\n\nn_groups, ax, lgnd, hdls = violinplot2(\n data=df, x='Layer', y=('Rel_act', 'Rel_act_init'), hue='Init_name', cut=0,\n order=layers, hue_order=hue_order, x_disp=x_disp,\n palette_per_violin=palette)\n\nylabel('Relative activation')\nylabel('Target CaffeNet layer')\n\n# create custom legends\n# for init methods\nlegend_elements = [\n matplotlib.patches.Patch(facecolor=palette[14+3*i], edgecolor='none', label=l)\n for i, l in enumerate(('Random', 'Opt', 'Ivt'))]\nlgnd1 = legend(handles=legend_elements, title='Init. method',\n loc='upper left', bbox_to_anchor=(1,1.05))\n# for generation condition\nlegend_elements = [\n matplotlib.patches.Patch(facecolor='gray', edgecolor='none', label='Final'),\n matplotlib.patches.Patch(facecolor='none', edgecolor='gray', label='Initial')]\nax.legend(handles=legend_elements, title='Generation',\n loc='upper left', bbox_to_anchor=(1,.45))\nax.add_artist(lgnd1)\n\nsavefig(figdir / f'inits-per_neuron.png', dpi=300, bbox_inches='tight')\nsavefig(figdir / f'inits-per_neuron.svg', dpi=300, bbox_inches='tight')", "_____no_output_____" ] ], [ [ "# Figure 6 - Compare Optimizers & Stoch Scales", "_____no_output_____" ], [ "## Compare optimizers", "_____no_output_____" ] ], [ [ "df = pd.read_csv(datdir/'fig_6-optimizers.csv')\ndf['OCL'] = ['_'.join(v) for v in df[['Optimizer','Classifier','Layer']].values]\ndf.head()", "_____no_output_____" ], [ "opts = ('genetic', 'FDGD', 'NES')\nlayers = {'caffenet': ('conv2', 'conv4', 'fc6', 'fc8'),\n 'inception-resnet-v2': ('classifier',)}\ncls = [(c, l) for c in layers for l in layers[c]]\n\nxorder = tuple(f'{opt}_{c}_{l}' for c in layers for l in layers[c]\n for opt in (opts + ('none',)))[:-1]\nx_disp = ('CaffeNet, conv2', 'CaffeNet, conv4', 'CaffeNet, fc6', 'CaffeNet, fc8',\n 'Inception-ResNet-v2,\\nclassifier')\nopts_disp = ('Genetic', 'FDGD', 'NES')\npalette = [get_cmap(main_c)(np.linspace(0.3,0.8,4))\n for main_c in ('Blues', 'Oranges', 'Greens')]\npalette = np.concatenate([\n np.concatenate([[palette[j][i], palette[j][i]/2+0.5] for j in range(3)])\n for i in (0,1,2,3,3)])", "_____no_output_____" ], [ "fig = figure(figsize=(6.75,2.75), dpi=150)\nn_groups, ax, lgnd, hdls = violinplot2(\n data=df, x='OCL', y='Rel_act', hue='Noisy',\n cut=0, inner='quartiles', split=True, width=1,\n order=xorder, palette_per_violin=palette)\n\ndefault_ax_lims(ax, n_groups)\nxticks(np.arange(1,20,4), labels=x_disp)\nxlabel('Target layer', labelpad=0)\nylabel('Relative activation')\n\n# create custom legends\n# for optimizers\nlegend_patches = [matplotlib.patches.Patch(facecolor=palette[i], edgecolor='none', label=opt)\n for i, opt in zip(range(12,18,2), opts_disp)]\nlgnd1 = legend(handles=legend_patches, title='Optimization alg.',\n loc='upper left', bbox_to_anchor=(0,1))\n# for noise condition\nlegend_patches = 
[matplotlib.patches.Patch(facecolor=(0.5,0.5,0.5), edgecolor='none', label='Noiseless'),\n matplotlib.patches.Patch(facecolor=(0.8,0.8,0.8), edgecolor='none', label='Noisy')]\nlegend(handles=legend_patches, loc='upper right', bbox_to_anchor=(1,1))\nax.add_artist(lgnd1)\n\n# plot control\ngroup_width_ = 4\nfor i, cl in enumerate(cls):\n i = i * group_width_ + 1\n df_ = df[(df['Classifier'] == cl[0]) & (df['Layer'] == cl[1])]\n lq, mq, uq = np.nanpercentile(df_['Rel_exp_max'].values, (25, 50, 75))\n xs_qtl = i+np.array((-1,1))*group_width_*0.7/2\n xs_med = i+np.array((-1,1))*group_width_*0.75/2\n fill_between(xs_qtl, lq, uq, color=(0.9,0.9,0.9), zorder=-2)\n plot(xs_med, (mq, mq), color=(0.5,0.5,0.5), linewidth=1.15, zorder=-1)\n\nsavefig(figdir / f'optimizers.png', dpi=300, bbox_inches='tight')\nsavefig(figdir / f'optimizers.svg', dpi=300, bbox_inches='tight')", "_____no_output_____" ] ], [ [ "## Compare varying amounts of noise", "_____no_output_____" ] ], [ [ "df = pd.read_csv(datdir/'fig_6-stoch_scales.csv')\ndf = df[~np.isnan(df['Rel_noise'])]\ndf['Stoch_scale_plot'] = [str(int(v)) if ~np.isnan(v) else 'None' for v in df['Stoch_scale']]\ndf.head()", "_____no_output_____" ], [ "layers = ('conv2', 'conv4', 'fc6', 'fc8')\nstoch_scales = list(map(str, (5, 10, 20, 50, 75, 100, 250))) + ['None']\nstoch_scales_disp = stoch_scales[:-1] + ['No\\nnoise']\n\nstat_keys = ('Self_correlation', 'Rel_noise', 'SNR')\nstat_keys_disp = ('Self correlation', 'Stdev. : mean ratio', 'Signal-to-noise ratio')\npalette = [get_cmap('Blues')(np.linspace(0.3,0.8,4))[2]] # to match previous color", "_____no_output_____" ], [ "# calculate noise statstics and define their formatting\nformat_frac = lambda v: ('%.2f' % v)[1:] if (0 < v < 1) else '0' if v == 0 else str(v)\n\ndef format_sci(v):\n v = '%.0e' % v\n if v == 'inf':\n return v\n m, s = v.split('e')\n s = int(s)\n if s:\n if False: #s > 1:\n m = re.split('0+$', m)[0]\n m += 'e%d' % s\n else:\n m = str(int((float(m) * np.power(10, s))))\n return m\n\nfmts = (format_frac, format_frac, format_sci)\n\nbyl_byss_stats = {k: {} for k in stat_keys}\nfor l in layers:\n df_ = df[df['Layer'] == l]\n stats = {k: [] for k in stat_keys}\n for ss in stoch_scales:\n df__ = df_[df_['Stoch_scale_plot'] == ss]\n for k in stat_keys:\n stats[k].append(np.median(df__[k]))\n for k in stats.keys():\n byl_byss_stats[k][l] = stats[k]", "_____no_output_____" ], [ "fig, axs = subplots(1, 4, figsize=(5.25, 2), dpi=150, sharex=True, sharey=True, squeeze=False)\naxs = axs.flatten()\nsubplots_adjust(wspace=0.05)\n\nfor l, ax in zip(layers, axs):\n df_ = df[df['Layer'] == l]\n n_groups, ax, lgnd, hdls = violinplot2(\n data=df_, x='Rel_act', y='Stoch_scale_plot', orient='h',\n cut=0, width=.85, scale='width',\n palette=palette, ax=ax)\n ax.set_title(f'CaffeNet, {l}', fontsize=8)\n default_ax_lims(ax, n_groups, orient='h')\n ax.set_xlabel(None)\n\n# append more y-axes to last axis\npars = [twinx(ax) for _ in range(len(stat_keys))]\nylim_ = ax.get_ylim()\nfor i, (par, k, fmt, k_disp) in enumerate(zip(pars, stat_keys, fmts, stat_keys_disp)):\n par.set_frame_on(True)\n par.patch.set_visible(False)\n par.spines['right'].set_visible(True)\n par.yaxis.set_ticks_position('right')\n par.yaxis.set_label_position('right')\n par.yaxis.labelpad = 2\n par.spines['right'].set_position(('axes', 1+.6*i))\n par.set_ylabel(k_disp)\n par.set_yticks(range(len(stoch_scales)))\n par.set_yticklabels(map(fmt, byl_byss_stats[k][l]))\n par.set_ylim(ylim_)\n \naxs[0].set_ylabel('Expected max firing rate, 
spks')\naxs[0].set_yticklabels(stoch_scales_disp)\nfor ax in axs[1:]:\n ax.set_ylabel(None)\n ax.yaxis.set_tick_params(left=False)\n# joint \nax = fig.add_subplot(111, frameon=False)\nax.tick_params(labelcolor='none', bottom=False, left=False, right=False)\nax.set_frame_on(False)\nax.set_xlabel('Relative activation')\n\nsavefig(figdir / 'stoch_scales.png', dpi=300, bbox_inches='tight')\nsavefig(figdir / 'stoch_scales.svg', dpi=300, bbox_inches='tight')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
d0219a2b3767e15b15af171547c2a5daf04208cb
2,514
ipynb
Jupyter Notebook
JupyterNotebooks/Labs/Lab 2.ipynb
WolfyVST/CMPT-220L-203-22S
200cc519c0d177fc71d6c945328e35f6ce907c47
[ "MIT" ]
null
null
null
JupyterNotebooks/Labs/Lab 2.ipynb
WolfyVST/CMPT-220L-203-22S
200cc519c0d177fc71d6c945328e35f6ce907c47
[ "MIT" ]
null
null
null
JupyterNotebooks/Labs/Lab 2.ipynb
WolfyVST/CMPT-220L-203-22S
200cc519c0d177fc71d6c945328e35f6ce907c47
[ "MIT" ]
30
2022-01-21T00:05:12.000Z
2022-02-24T19:41:48.000Z
59.857143
804
0.700875
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
d021b4846093e2685a6b202bb31a23b6f87f8a65
36,793
ipynb
Jupyter Notebook
Assignment_TaskC_Streaming_Application.ipynb
tonbao30/Parallel-dataprocessing-simulation
2674ad83009be73af719e0a837970e45857b7517
[ "MIT" ]
null
null
null
Assignment_TaskC_Streaming_Application.ipynb
tonbao30/Parallel-dataprocessing-simulation
2674ad83009be73af719e0a837970e45857b7517
[ "MIT" ]
null
null
null
Assignment_TaskC_Streaming_Application.ipynb
tonbao30/Parallel-dataprocessing-simulation
2674ad83009be73af719e0a837970e45857b7517
[ "MIT" ]
null
null
null
45.423457
1,061
0.427581
[ [ [ "#uncomment this to install the library\n# !pip3 install pygeohash", "_____no_output_____" ] ], [ [ "## Libraries and auxiliary functions", "_____no_output_____" ] ], [ [ "#load the libraries\nfrom time import sleep\nfrom kafka import KafkaConsumer\nimport datetime as dt\nimport pygeohash as pgh", "_____no_output_____" ], [ "#fuctions to check the location based on the geo hash (precision =5)\n#function to check location between 2 data\ndef close_location (data1,data2):\n print(\"checking location...of sender\",data1.get(\"id\"),\" and sender\" , data2.get(\"id\"))\n\n #with the precision =5 , we find the location that close together with the radius around 2.4km\n if data1.get(\"geohash\")== data2.get(\"geohash\"): \n print(\"=>>>>>sender\",str(data1.get(\"id\")),\"location near \", \"sender\",str(data2.get(\"id\")),\"location\")\n else:\n print('>>>not close together<<<')\n \n#function to check location between the joined data and another data (e.g hotspot data)\ndef close_location_2 (data1,data2): \n print(\"checking location...of joined data id:\",data1.get(\"id\"),\" and sender\" , data2.get(\"id\"))\n \n #with the precision =5 , we find the location that close together with the radius 2.4km\n if data1.get(\"geohash\")== data2.get(\"geohash\"): \n print(\"=>>>> location\",str(data1.get(\"geohash\")),\"location near \", str(data2.get(\"geohash\")),\"location\")\n else:\n print('>>>not close together<<<')\n\n\n# check location of 2 climate data stored in the list\ndef close_location_in_list(a_list):\n print('check 2 climate location data')\n data_1 = a_list[0]\n data_2 = a_list[1]\n close_location (data_1,data_2)", "_____no_output_____" ], [ "#auxilary function to handle the average and join of the json file\n#function to merge satellite data\ndef merge_sat(data1,data2):\n result ={}\n \n result[\"_id\"] = data1.get(\"_id\") # take satellite _id ,we will store this joined data to the hotspot collection\n result[\"created_time\"] = data1.get(\"created_time\")\n \n \n #average the result of the location\n result['surface_temperature_celsius'] = (float(data1.get(\"surface_temperature_celsius\"))+float(data2.get(\"surface_temperature_celsius\")))/2 \n result[\"confidence\"] = (float(data1.get(\"confidence\"))+float(data2.get(\"confidence\")))/2\n \n #reassign the location like the initial data structure\n result['geohash'] = data2.get('geohash')\n result[\"location\"] = data1.get(\"location\")\n\n \n return result\n\n# function to join climate data and satellite data\ndef join_data_cli_sat(climData,satData):\n result={}\n\n #get location and id of the join data\n result[\"_id\"] = climData.get(\"_id\") # take climate _id ,we will store this joined data to the climate collection\n result['geohash'] = climData.get('geohash')\n result[\"location\"] = climData.get(\"location\")\n result[\"created_time\"] = climData.get(\"created_time\")\n \n\n #get climate data\n result[\"air_temperature_celsius\"] = climData.get(\"air_temperature_celsius\")\n result[\"relative_humidity\"] = climData.get(\"relative_humidity\")\n result[\"max_wind_speed\"] = climData.get(\"max_wind_speed\")\n result[\"windspeed_knots\"] = climData.get(\"windspeed_knots\")\n result[\"precipitation\"] = climData.get(\"precipitation\")\n \n #get satellite data\n result[\"surface_temperature_celsius\"] = satData.get(\"surface_temperature_celsius\")\n result[\"confidence\"] = satData.get(\"confidence\")\n result[\"hotspots\"] = satData.get(\"_id\") #reference to the hotspot data like in the task A_B\n \n return result\n \n", 
"_____no_output_____" ] ], [ [ "## Streaming Application", "_____no_output_____" ] ], [ [ "import os\nos.environ['PYSPARK_SUBMIT_ARGS'] = '--packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.3.0 pyspark-shell'\n\n\nimport sys\nimport time\nimport json\nfrom pymongo import MongoClient\nfrom pyspark import SparkContext, SparkConf\nfrom pyspark.streaming import StreamingContext\nfrom pyspark.streaming.kafka import KafkaUtils\n\n\n\ndef sendDataToDB(iter):\n client = MongoClient()\n db = client.fit5148_assignment_db\n \n # MongoDB design\n sat_col = db.hotspot #to store satellite data and joined satellite data \n \n # to store the join between climate and satellite\n clim_col = db.climate #to store the climate data\n \n #list of senders per iter\n sender = []\n \n #variable to store the data from 3 unique senders per iter\n climList = []\n satData_2 = {}\n satData_3 = {}\n##################################### PARSING THE DATA FROM SENDERS PER ITER###########################################\n for record in iter: \n sender.append(record[0])\n data_id = json.loads(record[1])\n data = data_id.get('data')\n \n\n\n if record[0] == \"sender_2\" : #parse AQUA satelite data\n\n \n #main data\n #add \"AQUA\" string to the \"_id\" to handle the case when 2 satellite data come at the same time\n #to make sure the incomming data from AQUA at a specific time is unique\n satData_2[\"_id\"] = \"AQUA\" +str(dt.datetime.strptime(str(data_id.get(\"created_time\")), \"%Y-%m-%dT%H:%M:%S\"))\n satData_2[\"id\"] = data_id.get(\"sender_id\") #unique sender_id\n \n #use datetime as ISO format for readable in mongoDB\n satData_2[\"created_time\"] = dt.datetime.strptime(str(data_id.get(\"created_time\")), \"%Y-%m-%dT%H:%M:%S\")\n \n # parse other data\n satData_2[\"location\"] = {\"latitude\" : float(data.get(\"lat\")), \"longitude\" : float(data.get(\"lon\"))}\n satData_2[\"surface_temperature_celsius\"] = float(data.get(\"surface_temp\"))\n satData_2[\"confidence\"] = float(data.get(\"confidence\"))\n geohash = pgh.encode(float(data.get(\"lat\")),float(data.get(\"lon\")),precision=5) \n satData_2[\"geohash\"] = geohash #unique_location \n\n\n\n if record[0] == \"sender_3\": #parse TERRA satelite data\n\n #main data\n #add \"TERRA\" string to the \"_id\" to handle the case when 2 satellite data come at the same time\n #to make sure the incomming data for TERRA at a specific time is unique\n satData_3[\"_id\"] = \"TERRA\" +str(dt.datetime.strptime(str(data_id.get(\"created_time\")), \"%Y-%m-%dT%H:%M:%S\"))\n satData_3[\"id\"] = data_id.get(\"sender_id\") #unique sender_id\n \n #use datetime as ISO format for readable in mongoDB\n satData_3[\"created_time\"] = dt.datetime.strptime(str(data_id.get(\"created_time\")), \"%Y-%m-%dT%H:%M:%S\")\n # parse other data\n satData_3[\"location\"] = {\"latitude\" : float(data.get(\"lat\")), \"longitude\" : float(data.get(\"lon\"))}\n satData_3[\"surface_temperature_celsius\"] = float(data.get(\"surface_temp\"))\n satData_3[\"confidence\"] = float(data.get(\"confidence\"))\n geohash = pgh.encode(float(data.get(\"lat\")),float(data.get(\"lon\")),precision=5) \n satData_3[\"geohash\"] = geohash #unique_location \n\n\n\n if record[0] == \"sender_1\": #parse climate data\n climData = {}\n\n \n \n #main data\n #add \"CLIM\" string to the \"_id\" to handle to make sure the incomming data for \n #climate at a specific time is unique\n climData[\"_id\"] = \"CLIM\" + str(dt.datetime.strptime(str(data_id.get(\"created_time\")), \"%Y-%m-%dT%H:%M:%S\"))\n climData[\"id\"] = 
data_id.get(\"sender_id\") #unique sender_id\n \n #use datetime as ISO format for readable in mongoDB\n climData[\"created_time\"] = dt.datetime.strptime(str(data_id.get(\"created_time\")), \"%Y-%m-%dT%H:%M:%S\")\n climData[\"location\"] = {\"latitude\" : float(data.get(\"lat\")), \"longitude\" : float(data.get(\"lon\"))}\n\n \n climData[\"air_temperature_celsius\"] = float(data.get(\"air_temp\"))\n climData[\"relative_humidity\"] = float(data.get(\"relative_humid\"))\n climData[\"max_wind_speed\"] = float(data.get(\"max_wind_speed\"))\n climData[\"windspeed_knots\"] = float(data.get(\"windspeed\"))\n climData[\"precipitation\"] = data.get(\"prep\")\n geohash = pgh.encode(float(data.get(\"lat\")),float(data.get(\"lon\")),precision=5) \n climData[\"geohash\"] = geohash\n climList.append(climData)\n\n uniq_sender_id = set(sender) #check unique sender for each iter\n\n################################ PERFOMING JOIN AND CHECK LOCATION THEN PUSH TO MONGODB ##################################\n\n####################### Received only from unique one sender\n \n #for climate data, there will be the case with on 2 streams of climate data go throught the app\n if len(uniq_sender_id) == 1 and \"sender_1\" in uniq_sender_id:#store to climate data to mongoDB\n print(\"---------------------received CLIMATE data------------------------\")\n try:\n #find close location in climate data and print out\n if len(climList) > 1:\n #check 2 climate location data\n close_location_in_list(climList)\n \n for data in climList:\n clim_col.insert(data)\n \n except Exception as ex:\n print(\"Exception Occured. Message: {0}\".format(str(ex)))\n \n # if there is one satellite data (AQUA), there will be no case with 2 same satelite data\n if len(uniq_sender_id) == 1 and \"sender_2\" in uniq_sender_id:#store to climate data to mongoDB\n print(\"---------------------received AQUA data------------------------\")\n try:\n\n sat_col.insert(satData_2)\n \n except Exception as ex:\n print(\"Exception Occured. Message: {0}\".format(str(ex)))\n \n # if there is one satellite data (TERRA) , there will be no case with 2 same satelite data\n if len(uniq_sender_id) == 1 and \"sender_3\" in uniq_sender_id:#store to climate data to mongoDB\n print(\"---------------------received TERRA data------------------------\")\n try:\n\n sat_col.insert(satData_3)\n \n except Exception as ex:\n print(\"Exception Occured. 
Message: {0}\".format(str(ex)))\n \n########################## Received from 2 unique senders\n\n elif len(sender) == 2 and len(uniq_sender_id) == 2:\n print(\"---------------------received 2 streams------------------------\")\n #will have 1 case, because there will be at least 1 climate data \n #if the consummer received 2, that will be the climat data and one sat data\n #or 2 climate data because we assume that there is at least 1 climate data in the stream\n \n \n try:\n\n for climate in climList:\n\n\n if len(satData_3)!=0:\n \n #check location\n close_location(climate,satData_3)\n\n #check lat lon first!!!\n print('---checking TERRA and Climate location---')\n if satData_3[\"location\"] == climate[\"location\"]:\n print('joining....')\n join_cli_sat = join_data_cli_sat(climate,satData_3)\n clim_col.insert(join_cli_sat)\n sat_col.insert(satData_3)\n else:\n print('no join')\n sat_col.insert(satData_3)\n clim_col.insert(climate)\n\n elif len(satData_2)!=0:\n #check close location\n close_location(climate,satData_2)\n\n print('---checking AQUA and Climate location---')\n #check lat lon first!!!\n if satData_2[\"location\"] == climate[\"location\"]:\n print('joining....')\n join_cli_sat = join_data_cli_sat(climate,satData_2)\n clim_col.insert(join_cli_sat)\n sat_col.insert(satData_2)\n else:\n print('no join')\n sat_col.insert(satData_2)\n clim_col.insert(climate)\n else: #received only 2 climate data\n\n print('received 2 climate data')\n clim_col.insert(climate)\n\n # if we received 2 sattelite data only (rare case, we ran out of climate data)\n if len(climList) == 0:\n if len(satData_3)!=0 and len(satData_2)!=0:\n #check location\n close_location(satData_3,satData_2)\n print('---checking AQUA and TERRA location---')\n if satData_2[\"location\"] == satData_3[\"location\"]:\n print('joining....')\n sat_data = merge_sat(satData_2,satData_3)\n\n #insert the data into the mongo with handling the exceptions : duplicate\n sat_col.insert(sat_data)\n \n else:\n sat_col.update(satData_3, satData_3, upsert=True)\n sat_col.update(satData_2, satData_2, upsert=True)\n\n\n\n \n \n except Exception as ex:\n print(\"Exception Occured. 
Message: {0}\".format(str(ex))) #exception will occur with empty satelite data\n\n \n#########################################################Received 3 stream\n########################## Received from 2 unique sender \n\n#we assume that there is at least 1 climate data in the stream , so if we have 3 streams of data\n# there will be 2 climate data and 1 satelite data because the app process 10 secs batch\n# if received 3 streams, there will be 2 climate data and 1 satellite data\n\n if len(sender) == 3: \n print(\"---------------------received 3 streams------------------------\")\n try:\n \n if len(climList) > 1:\n #check 2 climate location data\n close_location_in_list(climList)\n\n for climate2 in climList:\n\n if len(satData_3)!=0:\n \n \n \n #check location\n close_location(climate2,satData_3)\n \n print('---checking TERRA and Climate location---')\n if satData_3[\"location\"] == climate2[\"location\"]:\n print('joining....')\n\n join_data = join_data_cli_sat(climate2,satData_3)\n clim_col.insert(join_data)\n sat_col.update(satData_3, satData_3, upsert=True)\n \n\n else:\n print('no join')\n\n clim_col.insert(climate2)\n \n #insert the data into the mongo with handling the exceptions : duplicate\n sat_col.update(satData_3, satData_3, upsert=True)\n\n\n elif len(satData_2)!=0:\n \n \n #check location\n close_location(climate2,satData_2)\n \n print('---checking AQUA and Climate location---')\n \n if satData_2[\"location\"] == climate2[\"location\"]:\n print('joining....')\n \n join_data = join_data_cli_sat(climate2,satData_2)\n clim_col.insert(join_data)\n sat_col.update(satData_2, satData_2, upsert=True)\n \n else:\n print('no join')\n \n clim_col.insert(climate2)\n #insert the data into the mongo with handling the exceptions : duplicate\n sat_col.update(satData_2, satData_2, upsert=True)\n\n\n\n except Exception as ex:\n print(\"Exception Occured. 
Message: {0}\".format(str(ex)))\n \n ########################################Received 4 streams of data################################# \n# There will be 2 climate data and 2 satellite data from AQUA and TERRA\n\n elif len(sender) ==4 : # 4 will have 2 climate data and 2 sat data\n print(\"---------------------received 4 streams------------------------\")\n try:\n \n if len(climList) > 1:\n #check 2 climate location data\n close_location_in_list(climList)\n\n for climate2 in climList:\n print('---checking AQUA , TERRA and Climate location---')\n #location sat2=sat3=climate\n if (satData_2[\"location\"] == satData_3[\"location\"])\\\n and (satData_2[\"location\"] == climate2[\"location\"]):\n print('joining....')\n \n #join 2 satellite data\n sat_data = merge_sat(satData_2,satData_3)\n sat_col.update(sat_data, sat_data, upsert=True)\n \n #join with the climate file\n final_data = join_data_cli_sat(climate2,sat_data)\n clim_col.insert(final_data)\n \n \n #location sat2=sat3\n elif (satData_2[\"location\"] == satData_3[\"location\"])\\\n and (satData_2[\"location\"] != climate2[\"location\"]):\n print('joining....')\n sat_data = merge_sat(satData_2,satData_3)\n \n #insert the data into the mongo with handling the exceptions : duplicate\n sat_col.update(sat_data, sat_data, upsert=True)\n clim_col.insert(climate2)\n \n #check location\n close_location_2(sat_data,climate2)\n \n #location sat2=climate \n elif (satData_2[\"location\"] != satData_3[\"location\"])\\\n and (satData_2[\"location\"] == climate2[\"location\"]):\n print('joining....')\n \n join_data = join_data_cli_sat(climate2,satData_2)\n clim_col.insert(join_data)\n \n #insert the data into the mongo with handling the exceptions : duplicate\n sat_col.update(satData_3, satData_3, upsert=True)\n sat_col.update(satData_2, satData_2, upsert=True)\n#\n #check location\n close_location_2(join_data,satData_3)\n \n #location sat3 =climate\n elif (satData_2[\"location\"] != satData_3[\"location\"])\\\n and (satData_3[\"location\"] == climate2[\"location\"]):\n print('joining....')\n \n join_data = join_data_cli_sat(climate2,satData_3)\n clim_col.insert(join_data)\n \n #insert the data into the mongo with handling the exceptions : duplicate\n sat_col.update(satData_3, satData_3, upsert=True)\n sat_col.update(satData_2, satData_2, upsert=True)\n# \n #check location\n close_location_2(join_data,satData_2)\n \n #if nothing to merge\n else:\n \n\n print('no join')\n\n #check location\n close_location(climate2,satData_2)\n close_location(climate2,satData_3)\n close_location(satData_2,satData_3)\n clim_col.insert(climate2)\n \n #insert the data into the mongo with handling the exceptions\n sat_col.update(satData_3, satData_3, upsert=True)\n sat_col.update(satData_2, satData_2, upsert=True)\n \n\n\n \n \n except Exception as ex:\n print(\"Exception Occured. 
Message: {0}\".format(str(ex)))\n \n client.close()\n\n\n \n################################################ INITIATE THE STREAM ################################################\nn_secs = 10 # set batch to 10 seconds\ntopic = 'TaskC'\n\nconf = SparkConf().setAppName(\"KafkaStreamProcessor\").setMaster(\"local[2]\") #set 2 processors\nsc = SparkContext.getOrCreate()\nif sc is None:\n sc = SparkContext(conf=conf)\nsc.setLogLevel(\"WARN\")\nssc = StreamingContext(sc, n_secs)\n \nkafkaStream = KafkaUtils.createDirectStream(ssc, [topic], {\n 'bootstrap.servers':'localhost:9092', \n 'group.id':'taskC-group', \n 'fetch.message.max.bytes':'15728640',\n 'auto.offset.reset':'largest'})\n # Group ID is completely arbitrary\n \nlines= kafkaStream.foreachRDD(lambda rdd: rdd.foreachPartition(sendDataToDB))\n\n\n# this line print to check the data IDs has gone through the app for a specific time\na = kafkaStream.map(lambda x:x[0])\na.pprint()\n\n\nssc.start() \n\n# ssc.awaitTermination()\n\n# ssc.start()\ntime.sleep(3000) # Run stream for 20 mins just to get the data for visualisation\n# # ssc.awaitTermination()\nssc.stop(stopSparkContext=True,stopGraceFully=True)", "-------------------------------------------\nTime: 2019-05-24 17:45:20\n-------------------------------------------\nsender_2\nsender_3\nsender_1\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:45:30\n-------------------------------------------\nsender_3\nsender_1\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:45:40\n-------------------------------------------\nsender_2\nsender_1\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:45:50\n-------------------------------------------\nsender_1\nsender_2\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:46:00\n-------------------------------------------\nsender_3\nsender_1\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:46:10\n-------------------------------------------\nsender_1\nsender_2\nsender_3\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:46:20\n-------------------------------------------\nsender_1\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:46:30\n-------------------------------------------\nsender_2\nsender_1\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:46:40\n-------------------------------------------\nsender_1\nsender_3\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:46:50\n-------------------------------------------\nsender_1\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:47:00\n-------------------------------------------\nsender_2\nsender_3\nsender_1\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:47:10\n-------------------------------------------\nsender_2\nsender_1\nsender_3\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:47:20\n-------------------------------------------\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:47:30\n-------------------------------------------\nsender_1\nsender_3\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:47:40\n-------------------------------------------\nsender_1\nsender_2\nsender_1\nsender_3\n\n-------------------------------------------\nTime: 2019-05-24 
17:47:50\n-------------------------------------------\nsender_1\nsender_1\nsender_3\n\n-------------------------------------------\nTime: 2019-05-24 17:48:00\n-------------------------------------------\nsender_1\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:48:10\n-------------------------------------------\nsender_1\nsender_2\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:48:20\n-------------------------------------------\nsender_1\nsender_3\nsender_1\nsender_2\n\n-------------------------------------------\nTime: 2019-05-24 17:48:30\n-------------------------------------------\nsender_1\nsender_1\n\n-------------------------------------------\nTime: 2019-05-24 17:48:40\n-------------------------------------------\nsender_1\nsender_1\nsender_3\n\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
d021bf8d23dd16f44565e18a49eb899ef3998f2e
205,313
ipynb
Jupyter Notebook
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
e8ab5f81ec6d2f57238de3102f28bbe9f68a05be
[ "MIT" ]
null
null
null
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
e8ab5f81ec6d2f57238de3102f28bbe9f68a05be
[ "MIT" ]
null
null
null
text-summarization-attention-mechanism.ipynb
buddhadeb33/Text-Summarization-Attention-Mechanism
e8ab5f81ec6d2f57238de3102f28bbe9f68a05be
[ "MIT" ]
null
null
null
102,656.5
205,312
0.884922
[ [ [ "<a href=\"https://www.kaggle.com/aaroha33/text-summarization-attention-mechanism?scriptVersionId=85928705\" target=\"_blank\"><img align=\"left\" alt=\"Kaggle\" title=\"Open in Kaggle\" src=\"https://kaggle.com/static/images/open-in-kaggle.svg\"></a>", "_____no_output_____" ], [ "<font size=\"+5\" color=Green > <b> <center><u>\n <br>Text Summarization \n <br>Sequenece to Sequence Modelling\n <br>Attention Mechanism </u> </font>", "_____no_output_____" ], [ "# Import Libraries", "_____no_output_____" ] ], [ [ "#import all the required libraries\nimport numpy as np\nimport pandas as pd\nimport pickle\nfrom statistics import mode\nimport nltk\nfrom nltk import word_tokenize\nfrom nltk.stem import LancasterStemmer\nnltk.download('wordnet')\nnltk.download('stopwords')\nnltk.download('punkt')\nfrom nltk.corpus import stopwords\nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras import models\nfrom tensorflow.keras import backend as K\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences\nfrom tensorflow.keras.preprocessing.text import Tokenizer \nfrom tensorflow.keras.utils import plot_model\nfrom tensorflow.keras.layers import Input,LSTM,Embedding,Dense,Concatenate,Attention\nfrom sklearn.model_selection import train_test_split\nfrom bs4 import BeautifulSoup\n\nimport warnings\npd.set_option(\"display.max_colwidth\", 200)\nwarnings.filterwarnings(\"ignore\")\n\nfrom tensorflow.keras.callbacks import EarlyStopping", "[nltk_data] Downloading package wordnet to /usr/share/nltk_data...\n[nltk_data] Package wordnet is already up-to-date!\n[nltk_data] Downloading package stopwords to /usr/share/nltk_data...\n[nltk_data] Package stopwords is already up-to-date!\n[nltk_data] Downloading package punkt to /usr/share/nltk_data...\n[nltk_data] Package punkt is already up-to-date!\n" ] ], [ [ "# Parse the Data", "_____no_output_____" ], [ "We’ll take a sample of 100,000 reviews to reduce the training time of our model.", "_____no_output_____" ] ], [ [ "#read the dataset file for text Summarizer\ndf=pd.read_csv(\"../input/amazon-fine-food-reviews/Reviews.csv\",nrows=10000)\n# df = pd.read_csv(\"../input/amazon-fine-food-reviews/Reviews.csv\")\n#drop the duplicate and na values from the records\ndf.drop_duplicates(subset=['Text'],inplace=True)\ndf.dropna(axis=0,inplace=True) #dropping na\ninput_data = df.loc[:,'Text']\ntarget_data = df.loc[:,'Summary']\ntarget_data.replace('', np.nan, inplace=True)", "_____no_output_____" ], [ "\ndf.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 9513 entries, 0 to 9999\nData columns (total 10 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Id 9513 non-null int64 \n 1 ProductId 9513 non-null object\n 2 UserId 9513 non-null object\n 3 ProfileName 9513 non-null object\n 4 HelpfulnessNumerator 9513 non-null int64 \n 5 HelpfulnessDenominator 9513 non-null int64 \n 6 Score 9513 non-null int64 \n 7 Time 9513 non-null int64 \n 8 Summary 9513 non-null object\n 9 Text 9513 non-null object\ndtypes: int64(5), object(5)\nmemory usage: 817.5+ KB\n" ], [ "df['Summary'][:10]", "_____no_output_____" ], [ "df['Text'][:10]", "_____no_output_____" ] ], [ [ "# Preprocessing", "_____no_output_____" ], [ "Performing basic preprocessing steps is very important before we get to the model building part. Using messy and uncleaned text data is a potentially disastrous move. So in this step, we will drop all the unwanted symbols, characters, etc. 
from the text that do not affect the objective of our problem.\n\nHere is the dictionary that we will use for expanding the contractions:", "_____no_output_____" ] ], [ [ "contraction_mapping = {\"ain't\": \"is not\", \"aren't\": \"are not\",\"can't\": \"cannot\", \"'cause\": \"because\", \"could've\": \"could have\", \"couldn't\": \"could not\",\n \"didn't\": \"did not\", \"doesn't\": \"does not\", \"don't\": \"do not\", \"hadn't\": \"had not\", \"hasn't\": \"has not\", \"haven't\": \"have not\",\n \"he'd\": \"he would\",\"he'll\": \"he will\", \"he's\": \"he is\", \"how'd\": \"how did\", \"how'd'y\": \"how do you\", \"how'll\": \"how will\", \"how's\": \"how is\",\n \"I'd\": \"I would\", \"I'd've\": \"I would have\", \"I'll\": \"I will\", \"I'll've\": \"I will have\",\"I'm\": \"I am\", \"I've\": \"I have\", \"i'd\": \"i would\",\n \"i'd've\": \"i would have\", \"i'll\": \"i will\", \"i'll've\": \"i will have\",\"i'm\": \"i am\", \"i've\": \"i have\", \"isn't\": \"is not\", \"it'd\": \"it would\",\n \"it'd've\": \"it would have\", \"it'll\": \"it will\", \"it'll've\": \"it will have\",\"it's\": \"it is\", \"let's\": \"let us\", \"ma'am\": \"madam\",\n \"mayn't\": \"may not\", \"might've\": \"might have\",\"mightn't\": \"might not\",\"mightn't've\": \"might not have\", \"must've\": \"must have\",\n \"mustn't\": \"must not\", \"mustn't've\": \"must not have\", \"needn't\": \"need not\", \"needn't've\": \"need not have\",\"o'clock\": \"of the clock\",\n \"oughtn't\": \"ought not\", \"oughtn't've\": \"ought not have\", \"shan't\": \"shall not\", \"sha'n't\": \"shall not\", \"shan't've\": \"shall not have\",\n \"she'd\": \"she would\", \"she'd've\": \"she would have\", \"she'll\": \"she will\", \"she'll've\": \"she will have\", \"she's\": \"she is\",\n \"should've\": \"should have\", \"shouldn't\": \"should not\", \"shouldn't've\": \"should not have\", \"so've\": \"so have\",\"so's\": \"so as\",\n \"this's\": \"this is\",\"that'd\": \"that would\", \"that'd've\": \"that would have\", \"that's\": \"that is\", \"there'd\": \"there would\",\n \"there'd've\": \"there would have\", \"there's\": \"there is\", \"here's\": \"here is\",\"they'd\": \"they would\", \"they'd've\": \"they would have\",\n \"they'll\": \"they will\", \"they'll've\": \"they will have\", \"they're\": \"they are\", \"they've\": \"they have\", \"to've\": \"to have\",\n \"wasn't\": \"was not\", \"we'd\": \"we would\", \"we'd've\": \"we would have\", \"we'll\": \"we will\", \"we'll've\": \"we will have\", \"we're\": \"we are\",\n \"we've\": \"we have\", \"weren't\": \"were not\", \"what'll\": \"what will\", \"what'll've\": \"what will have\", \"what're\": \"what are\",\n \"what's\": \"what is\", \"what've\": \"what have\", \"when's\": \"when is\", \"when've\": \"when have\", \"where'd\": \"where did\", \"where's\": \"where is\",\n \"where've\": \"where have\", \"who'll\": \"who will\", \"who'll've\": \"who will have\", \"who's\": \"who is\", \"who've\": \"who have\",\n \"why's\": \"why is\", \"why've\": \"why have\", \"will've\": \"will have\", \"won't\": \"will not\", \"won't've\": \"will not have\",\n \"would've\": \"would have\", \"wouldn't\": \"would not\", \"wouldn't've\": \"would not have\", \"y'all\": \"you all\",\n \"y'all'd\": \"you all would\",\"y'all'd've\": \"you all would have\",\"y'all're\": \"you all are\",\"y'all've\": \"you all have\",\n \"you'd\": \"you would\", \"you'd've\": \"you would have\", \"you'll\": \"you will\", \"you'll've\": \"you will have\",\n \"you're\": \"you are\", \"you've\": \"you have\"}", 
"_____no_output_____" ] ], [ [ "We can use the contraction using two method, one we can use the above dictionary or we can keep the contraction file as a data set and import it. ", "_____no_output_____" ] ], [ [ "input_texts=[] # Text column\ntarget_texts=[] # summary column\ninput_words=[]\ntarget_words=[]\n# contractions=pickle.load(open(\"../input/contraction/contractions.pkl\",\"rb\"))['contractions']\ncontractions = contraction_mapping\n\n#initialize stop words and LancasterStemmer\nstop_words=set(stopwords.words('english'))\nstemm=LancasterStemmer()", "_____no_output_____" ] ], [ [ "# Data Cleaning", "_____no_output_____" ] ], [ [ "def clean(texts,src):\n texts = BeautifulSoup(texts, \"lxml\").text #remove the html tags\n words=word_tokenize(texts.lower()) #tokenize the text into words \n #filter words which contains \\ \n #integers or their length is less than or equal to 3\n words= list(filter(lambda w:(w.isalpha() and len(w)>=3),words))\n #contraction file to expand shortened words\n words= [contractions[w] if w in contractions else w for w in words ]\n\n #stem the words to their root word and filter stop words\n if src==\"inputs\":\n words= [stemm.stem(w) for w in words if w not in stop_words]\n else:\n words= [w for w in words if w not in stop_words]\n return words", "_____no_output_____" ], [ "#pass the input records and target records\nfor in_txt,tr_txt in zip(input_data,target_data):\n in_words= clean(in_txt,\"inputs\")\n input_texts+= [' '.join(in_words)]\n input_words+= in_words\n #add 'sos' at start and 'eos' at end of text\n tr_words= clean(\"sos \"+tr_txt+\" eos\",\"target\")\n target_texts+= [' '.join(tr_words)]\n target_words+= tr_words", "_____no_output_____" ], [ "#store only unique words from input and target list of words\ninput_words = sorted(list(set(input_words)))\ntarget_words = sorted(list(set(target_words)))\nnum_in_words = len(input_words) #total number of input words\nnum_tr_words = len(target_words) #total number of target words\n \n#get the length of the input and target texts which appears most often \nmax_in_len = mode([len(i) for i in input_texts])\nmax_tr_len = mode([len(i) for i in target_texts])\n \nprint(\"number of input words : \",num_in_words)\nprint(\"number of target words : \",num_tr_words)\nprint(\"maximum input length : \",max_in_len)\nprint(\"maximum target length : \",max_tr_len)", "number of input words : 10344\nnumber of target words : 4169\nmaximum input length : 73\nmaximum target length : 17\n" ] ], [ [ "# Split it", "_____no_output_____" ] ], [ [ "#split the input and target text into 80:20 ratio or testing size of 20%.\nx_train,x_test,y_train,y_test=train_test_split(input_texts,target_texts,test_size=0.2,random_state=0) ", "_____no_output_____" ], [ "#train the tokenizer with all the words\nin_tokenizer = Tokenizer()\nin_tokenizer.fit_on_texts(x_train)\ntr_tokenizer = Tokenizer()\ntr_tokenizer.fit_on_texts(y_train)\n \n#convert text into sequence of integers\n#where the integer will be the index of that word\nx_train= in_tokenizer.texts_to_sequences(x_train) \ny_train= tr_tokenizer.texts_to_sequences(y_train)", "_____no_output_____" ], [ "#pad array of 0's if the length is less than the maximum length \nen_in_data= pad_sequences(x_train, maxlen=max_in_len, padding='post') \ndec_data= pad_sequences(y_train, maxlen=max_tr_len, padding='post')\n \n#decoder input data will not include the last word \n#i.e. 
'eos' in decoder input data\ndec_in_data = dec_data[:,:-1]\n#decoder target data will be one time step ahead as it will not include\n# the first word i.e 'sos'\ndec_tr_data = dec_data.reshape(len(dec_data),max_tr_len,1)[:,1:]", "_____no_output_____" ] ], [ [ "# Model Building", "_____no_output_____" ] ], [ [ "K.clear_session() \nlatent_dim = 500\n \n#create input object of total number of encoder words\nen_inputs = Input(shape=(max_in_len,)) \nen_embedding = Embedding(num_in_words+1, latent_dim)(en_inputs) ", "2022-01-23 09:07:29.531408: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2022-01-23 09:07:29.621524: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2022-01-23 09:07:29.622229: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2022-01-23 09:07:29.623321: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n2022-01-23 09:07:29.624279: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2022-01-23 09:07:29.624944: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2022-01-23 09:07:29.625551: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2022-01-23 09:07:31.436718: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2022-01-23 09:07:31.437522: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2022-01-23 09:07:31.438235: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2022-01-23 09:07:31.438820: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 15403 MB memory: -> device: 0, name: Tesla P100-PCIE-16GB, pci bus id: 0000:00:04.0, compute capability: 6.0\n" ], [ "#create 3 stacked LSTM layer with the shape of hidden dimension for text summarizer using deep learning\n#LSTM 1\nen_lstm1= LSTM(latent_dim, return_state=True, return_sequences=True) \nen_outputs1, state_h1, state_c1= en_lstm1(en_embedding) \n \n#LSTM2\nen_lstm2= LSTM(latent_dim, return_state=True, return_sequences=True) \nen_outputs2, state_h2, state_c2= en_lstm2(en_outputs1) \n \n#LSTM3\nen_lstm3= 
LSTM(latent_dim,return_sequences=True,return_state=True)\nen_outputs3 , state_h3 , state_c3= en_lstm3(en_outputs2)\n \n#encoder states\nen_states= [state_h3, state_c3]", "_____no_output_____" ] ], [ [ "# Decoder", "_____no_output_____" ] ], [ [ "# Decoder. \ndec_inputs = Input(shape=(None,)) \ndec_emb_layer = Embedding(num_tr_words+1, latent_dim) \ndec_embedding = dec_emb_layer(dec_inputs) \n \n#initialize decoder's LSTM layer with the output states of encoder\ndec_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)\ndec_outputs, *_ = dec_lstm(dec_embedding,initial_state=en_states) ", "_____no_output_____" ] ], [ [ "# Attention Layer", "_____no_output_____" ] ], [ [ "#Attention layer\nattention =Attention()\nattn_out = attention([dec_outputs,en_outputs3])\n \n#Concatenate the attention output with the decoder outputs\nmerge=Concatenate(axis=-1, name='concat_layer1')([dec_outputs,attn_out])", "_____no_output_____" ], [ "#Dense layer (output layer)\ndec_dense = Dense(num_tr_words+1, activation='softmax') \ndec_outputs = dec_dense(merge)", "_____no_output_____" ] ], [ [ "# Train the Model", "_____no_output_____" ] ], [ [ "#Model class and model summary for text Summarizer\nmodel = Model([en_inputs, dec_inputs], dec_outputs) \nmodel.summary()\nplot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)", "Model: \"model\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) [(None, 73)] 0 \n__________________________________________________________________________________________________\nembedding (Embedding) (None, 73, 500) 5172500 input_1[0][0] \n__________________________________________________________________________________________________\nlstm (LSTM) [(None, 73, 500), (N 2002000 embedding[0][0] \n__________________________________________________________________________________________________\ninput_2 (InputLayer) [(None, None)] 0 \n__________________________________________________________________________________________________\nlstm_1 (LSTM) [(None, 73, 500), (N 2002000 lstm[0][0] \n__________________________________________________________________________________________________\nembedding_1 (Embedding) (None, None, 500) 2085000 input_2[0][0] \n__________________________________________________________________________________________________\nlstm_2 (LSTM) [(None, 73, 500), (N 2002000 lstm_1[0][0] \n__________________________________________________________________________________________________\nlstm_3 (LSTM) [(None, None, 500), 2002000 embedding_1[0][0] \n lstm_2[0][1] \n lstm_2[0][2] \n__________________________________________________________________________________________________\nattention (Attention) (None, None, 500) 0 lstm_3[0][0] \n lstm_2[0][0] \n__________________________________________________________________________________________________\nconcat_layer1 (Concatenate) (None, None, 1000) 0 lstm_3[0][0] \n attention[0][0] \n__________________________________________________________________________________________________\ndense (Dense) (None, None, 4170) 4174170 concat_layer1[0][0] \n==================================================================================================\nTotal params: 19,439,670\nTrainable params: 19,439,670\nNon-trainable params: 
0\n__________________________________________________________________________________________________\n" ], [ "model.compile(optimizer=\"rmsprop\", loss=\"sparse_categorical_crossentropy\", metrics=[\"accuracy\"] ) \nhistory = model.fit( \n [en_in_data, dec_in_data],\n dec_tr_data, \n batch_size=512, \n epochs=10, \n validation_split=0.1,)", "2022-01-23 09:07:34.878431: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)\n" ], [ "# save model\nmodel.save('Text_Summarizer.h5')\nprint('Model Saved!')", "Model Saved!\n" ], [ "from matplotlib import pyplot\npyplot.plot(history.history['loss'], label='train')\npyplot.plot(history.history['val_loss'], label='test')\npyplot.legend()\npyplot.show()", "_____no_output_____" ], [ "max_text_len=30\nmax_summary_len=8", "_____no_output_____" ] ], [ [ "### Next, let’s build the dictionary to convert the index to word for target and source vocabulary:", "_____no_output_____" ], [ "# Inference Model", "_____no_output_____" ], [ "### Encoder Inference:", "_____no_output_____" ] ], [ [ "# encoder inference\nlatent_dim=500\n#/content/gdrive/MyDrive/Text Summarizer/\n#load the model\nmodel = models.load_model(\"Text_Summarizer.h5\")\n \n#construct encoder model from the output of 6 layer i.e.last LSTM layer\nen_outputs,state_h_enc,state_c_enc = model.layers[6].output\nen_states=[state_h_enc,state_c_enc]\n#add input and state from the layer.\nen_model = Model(model.input[0],[en_outputs]+en_states)", "_____no_output_____" ] ], [ [ "### Decoder Inference:", "_____no_output_____" ] ], [ [ "# decoder inference\n#create Input object for hidden and cell state for decoder\n#shape of layer with hidden or latent dimension\ndec_state_input_h = Input(shape=(latent_dim,))\ndec_state_input_c = Input(shape=(latent_dim,))\ndec_hidden_state_input = Input(shape=(max_in_len,latent_dim))\n \n# Get the embeddings and input layer from the model\ndec_inputs = model.input[1]\ndec_emb_layer = model.layers[5]\ndec_lstm = model.layers[7]\ndec_embedding= dec_emb_layer(dec_inputs)\n \n#add input and initialize LSTM layer with encoder LSTM states.\ndec_outputs2, state_h2, state_c2 = dec_lstm(dec_embedding, initial_state=[dec_state_input_h,dec_state_input_c])", "_____no_output_____" ] ], [ [ "### Attention Inference:", "_____no_output_____" ] ], [ [ "#Attention layer\nattention = model.layers[8]\nattn_out2 = attention([dec_outputs2,dec_hidden_state_input])\n \nmerge2 = Concatenate(axis=-1)([dec_outputs2, attn_out2])", "_____no_output_____" ] ], [ [ "### Dense layer", "_____no_output_____" ] ], [ [ "#Dense layer\ndec_dense = model.layers[10]\ndec_outputs2 = dec_dense(merge2)\n \n# Finally define the Model Class\ndec_model = Model(\n[dec_inputs] + [dec_hidden_state_input,dec_state_input_h,dec_state_input_c],\n[dec_outputs2] + [state_h2, state_c2])", "_____no_output_____" ], [ "#create a dictionary with a key as index and value as words.\nreverse_target_word_index = tr_tokenizer.index_word\nreverse_source_word_index = in_tokenizer.index_word\ntarget_word_index = tr_tokenizer.word_index", "_____no_output_____" ], [ "def decode_sequence(input_seq):\n # get the encoder output and states by passing the input sequence\n en_out, en_h, en_c = en_model.predict(input_seq)\n\n # target sequence with inital word as 'sos'\n target_seq = np.zeros((1, 1))\n target_seq[0, 0] = target_word_index['sos']\n\n # if the iteration reaches the end of text than it will be stop the iteration\n stop_condition = False\n # append every predicted 
word in decoded sentence\n decoded_sentence = \"\"\n while not stop_condition:\n # get predicted output, hidden and cell state.\n output_words, dec_h, dec_c = dec_model.predict([target_seq] + [en_out, en_h, en_c])\n\n # get the index and from the dictionary get the word for that index.\n word_index = np.argmax(output_words[0, -1, :])\n text_word = reverse_target_word_index[word_index]\n decoded_sentence += text_word + \" \"\n\n # Exit condition: either hit max length\n # or find a stop word or last word.\n if text_word == \"eos\" or len(decoded_sentence) > max_tr_len:\n stop_condition = True\n\n # update target sequence to the current word index.\n target_seq = np.zeros((1, 1))\n target_seq[0, 0] = word_index\n en_h, en_c = dec_h, dec_c\n\n # return the deocded sentence\n return decoded_sentence", "_____no_output_____" ], [ "# inp_review = input(\"Enter : \")\ninp_review = \"Both the Google platforms provide a great cloud environment for any ML work to be deployed to. The features of them both are equally competent. Notebooks can be downloaded and later uploaded between the two. However, Colab comparatively provides greater flexibility to adjust the batch sizes.Saving or storing of models is easier on Colab since it allows them to be saved and stored to Google Drive. Also if one is using TensorFlow, using TPUs would be preferred on Colab. It is also faster than Kaggle. For a use case demanding more power and longer running processes, Colab is preferred.\"\nprint(\"Review :\", inp_review)\n\ninp_review = clean(inp_review, \"inputs\")\ninp_review = ' '.join(inp_review)\ninp_x = in_tokenizer.texts_to_sequences([inp_review])\ninp_x = pad_sequences(inp_x, maxlen=max_in_len, padding='post')\n\nsummary = decode_sequence(inp_x.reshape(1, max_in_len))\nif 'eos' in summary:\n summary = summary.replace('eos', '')\nprint(\"\\nPredicted summary:\", summary);\nprint(\"\\n\")", "Review : Both the Google platforms provide a great cloud environment for any ML work to be deployed to. The features of them both are equally competent. Notebooks can be downloaded and later uploaded between the two. However, Colab comparatively provides greater flexibility to adjust the batch sizes.Saving or storing of models is easier on Colab since it allows them to be saved and stored to Google Drive. Also if one is using TensorFlow, using TPUs would be preferred on Colab. It is also faster than Kaggle. For a use case demanding more power and longer running processes, Colab is preferred.\n\nPredicted summary: great \n\n\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
d021c58f5bce572203c05790d8e2e616371510a2
7,455
ipynb
Jupyter Notebook
0.15/_downloads/plot_brainstorm_phantom_ctf.ipynb
drammock/mne-tools.github.io
5d3a104d174255644d8d5335f58036e32695e85d
[ "BSD-3-Clause" ]
null
null
null
0.15/_downloads/plot_brainstorm_phantom_ctf.ipynb
drammock/mne-tools.github.io
5d3a104d174255644d8d5335f58036e32695e85d
[ "BSD-3-Clause" ]
null
null
null
0.15/_downloads/plot_brainstorm_phantom_ctf.ipynb
drammock/mne-tools.github.io
5d3a104d174255644d8d5335f58036e32695e85d
[ "BSD-3-Clause" ]
null
null
null
34.513889
531
0.544199
[ [ [ "%matplotlib inline", "_____no_output_____" ] ], [ [ "\n# Brainstorm CTF phantom tutorial dataset\n\n\nHere we compute the evoked from raw for the Brainstorm CTF phantom\ntutorial dataset. For comparison, see [1]_ and:\n\n http://neuroimage.usc.edu/brainstorm/Tutorials/PhantomCtf\n\nReferences\n----------\n.. [1] Tadel F, Baillet S, Mosher JC, Pantazis D, Leahy RM.\n Brainstorm: A User-Friendly Application for MEG/EEG Analysis.\n Computational Intelligence and Neuroscience, vol. 2011, Article ID\n 879716, 13 pages, 2011. doi:10.1155/2011/879716\n\n", "_____no_output_____" ] ], [ [ "# Authors: Eric Larson <larson.eric.d@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport os.path as op\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne import fit_dipole\nfrom mne.datasets.brainstorm import bst_phantom_ctf\nfrom mne.io import read_raw_ctf\n\nprint(__doc__)", "_____no_output_____" ] ], [ [ "The data were collected with a CTF system at 2400 Hz.\n\n", "_____no_output_____" ] ], [ [ "data_path = bst_phantom_ctf.data_path()\n\n# Switch to these to use the higher-SNR data:\n# raw_path = op.join(data_path, 'phantom_200uA_20150709_01.ds')\n# dip_freq = 7.\nraw_path = op.join(data_path, 'phantom_20uA_20150603_03.ds')\ndip_freq = 23.\nerm_path = op.join(data_path, 'emptyroom_20150709_01.ds')\nraw = read_raw_ctf(raw_path, preload=True)", "_____no_output_____" ] ], [ [ "The sinusoidal signal is generated on channel HDAC006, so we can use\nthat to obtain precise timing.\n\n", "_____no_output_____" ] ], [ [ "sinusoid, times = raw[raw.ch_names.index('HDAC006-4408')]\nplt.figure()\nplt.plot(times[times < 1.], sinusoid.T[times < 1.])", "_____no_output_____" ] ], [ [ "Let's create some events using this signal by thresholding the sinusoid.\n\n", "_____no_output_____" ] ], [ [ "events = np.where(np.diff(sinusoid > 0.5) > 0)[1] + raw.first_samp\nevents = np.vstack((events, np.zeros_like(events), np.ones_like(events))).T", "_____no_output_____" ] ], [ [ "The CTF software compensation works reasonably well:\n\n", "_____no_output_____" ] ], [ [ "raw.plot()", "_____no_output_____" ] ], [ [ "But here we can get slightly better noise suppression, lower localization\nbias, and a better dipole goodness of fit with spatio-temporal (tSSS)\nMaxwell filtering:\n\n", "_____no_output_____" ] ], [ [ "raw.apply_gradient_compensation(0) # must un-do software compensation first\nmf_kwargs = dict(origin=(0., 0., 0.), st_duration=10.)\nraw = mne.preprocessing.maxwell_filter(raw, **mf_kwargs)\nraw.plot()", "_____no_output_____" ] ], [ [ "Our choice of tmin and tmax should capture exactly one cycle, so\nwe can make the unusual choice of baselining using the entire epoch\nwhen creating our evoked data. 
We also then crop to a single time point\n(@t=0) because this is a peak in our signal.\n\n", "_____no_output_____" ] ], [ [ "tmin = -0.5 / dip_freq\ntmax = -tmin\nepochs = mne.Epochs(raw, events, event_id=1, tmin=tmin, tmax=tmax,\n baseline=(None, None))\nevoked = epochs.average()\nevoked.plot()\nevoked.crop(0., 0.)", "_____no_output_____" ] ], [ [ "Let's use a sphere head geometry model and let's see the coordinate\nalignement and the sphere location.\n\n", "_____no_output_____" ] ], [ [ "sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None)\n\nmne.viz.plot_alignment(raw.info, subject='sample',\n meg='helmet', bem=sphere, dig=True,\n surfaces=['brain'])\ndel raw, epochs", "_____no_output_____" ] ], [ [ "To do a dipole fit, let's use the covariance provided by the empty room\nrecording.\n\n", "_____no_output_____" ] ], [ [ "raw_erm = read_raw_ctf(erm_path).apply_gradient_compensation(0)\nraw_erm = mne.preprocessing.maxwell_filter(raw_erm, coord_frame='meg',\n **mf_kwargs)\ncov = mne.compute_raw_covariance(raw_erm)\ndel raw_erm\n\ndip, residual = fit_dipole(evoked, cov, sphere)", "_____no_output_____" ] ], [ [ "Compare the actual position with the estimated one.\n\n", "_____no_output_____" ] ], [ [ "expected_pos = np.array([18., 0., 49.])\ndiff = np.sqrt(np.sum((dip.pos[0] * 1000 - expected_pos) ** 2))\nprint('Actual pos: %s mm' % np.array_str(expected_pos, precision=1))\nprint('Estimated pos: %s mm' % np.array_str(dip.pos[0] * 1000, precision=1))\nprint('Difference: %0.1f mm' % diff)\nprint('Amplitude: %0.1f nAm' % (1e9 * dip.amplitude[0]))\nprint('GOF: %0.1f %%' % dip.gof[0])", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d021d0bfb33f8b1bf0764771946ba8d83c9d389d
78,352
ipynb
Jupyter Notebook
Complex_Systems.ipynb
davidgmiguez/julia_notebooks
b395fac8f73bf8d9d366d6354a561c722f37ce66
[ "BSD-3-Clause" ]
null
null
null
Complex_Systems.ipynb
davidgmiguez/julia_notebooks
b395fac8f73bf8d9d366d6354a561c722f37ce66
[ "BSD-3-Clause" ]
null
null
null
Complex_Systems.ipynb
davidgmiguez/julia_notebooks
b395fac8f73bf8d9d366d6354a561c722f37ce66
[ "BSD-3-Clause" ]
null
null
null
154.84585
27,208
0.669262
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
d021d550b0599d907d8fb7798c9a992c49369ca5
174,864
ipynb
Jupyter Notebook
matrix_two/day2_viz.ipynb
mattzajac/dw_matrix
16763c44f6c46fc06d0a4a10b5467cc6f0eeaa92
[ "MIT" ]
null
null
null
matrix_two/day2_viz.ipynb
mattzajac/dw_matrix
16763c44f6c46fc06d0a4a10b5467cc6f0eeaa92
[ "MIT" ]
null
null
null
matrix_two/day2_viz.ipynb
mattzajac/dw_matrix
16763c44f6c46fc06d0a4a10b5467cc6f0eeaa92
[ "MIT" ]
null
null
null
174,864
174,864
0.931838
[ [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns", "_____no_output_____" ], [ "cd '/content/drive/My Drive/Colab Notebooks/dw_matrix'", "/content/drive/My Drive/Colab Notebooks/dw_matrix\n" ], [ "df = pd.read_hdf('data/car.h5')", "_____no_output_____" ], [ "df.columns.values", "_____no_output_____" ], [ "df.price_value.hist(bins=100);", "_____no_output_____" ], [ "df.price_value.describe()", "_____no_output_____" ], [ "df['param_marka-pojazdu'].unique()", "_____no_output_____" ], [ "def plot(groupby_feat, agg_feat='price_value', agg_funcs=[np.mean, np.median, np.size], sort='mean', top=50, subplots=True):\n return (\n df\n .groupby(groupby_feat)[agg_feat]\n .agg(agg_funcs)\n .sort_values(by=sort, ascending=False)\n .head(top)\n \n ).plot(kind='bar', figsize=(15, 5), subplots=subplots);", "_____no_output_____" ], [ "plot('param_marka-pojazdu');", "_____no_output_____" ], [ "plot('param_kraj-pochodzenia', sort='size');", "_____no_output_____" ], [ "plot('param_kolor', sort='mean');", "_____no_output_____" ], [ "plot('param_skrzynia-biegów', sort='mean');", "_____no_output_____" ], [ "plot('param_body-type', sort='mean');", "_____no_output_____" ], [ "!git config --global user.email \"m.zajac1988@gmail.com\"\n!git config --global user.name \"Mateusz\"", "_____no_output_____" ], [ "!git add matrix_two/day2_viz.ipynb", "_____no_output_____" ], [ "!git commit -m 'Correct matplotlib params'", "[master 4fcee53] Correct matplotlib params\n 1 file changed, 1 insertion(+), 1 deletion(-)\n" ], [ "!git push -u origin master", "Counting objects: 8, done.\nDelta compression using up to 2 threads.\nCompressing objects: 12% (1/8) \rCompressing objects: 25% (2/8) \rCompressing objects: 37% (3/8) \rCompressing objects: 50% (4/8) \rCompressing objects: 62% (5/8) \rCompressing objects: 75% (6/8) \rCompressing objects: 87% (7/8) \rCompressing objects: 100% (8/8) \rCompressing objects: 100% (8/8), done.\nWriting objects: 12% (1/8) \rWriting objects: 25% (2/8) \rWriting objects: 37% (3/8) \rWriting objects: 50% (4/8) \rWriting objects: 62% (5/8) \rWriting objects: 75% (6/8) \rWriting objects: 87% (7/8) \rWriting objects: 100% (8/8) \rWriting objects: 100% (8/8), 82.63 KiB | 6.36 MiB/s, done.\nTotal 8 (delta 3), reused 0 (delta 0)\nremote: Resolving deltas: 100% (3/3), completed with 1 local object.\u001b[K\nTo https://github.com/mattzajac/dw_matrix.git\n accd44f..4fcee53 master -> master\nBranch 'master' set up to track remote branch 'master' from 'origin'.\n" ], [ "", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d021d6389574c61647d2b662f7f9d280195d89a7
5,490
ipynb
Jupyter Notebook
codility-lessons/7 Stacks and Queues.ipynb
stanislawbartkowski/learnml
1b87c3d433b38a86f85d6e9588cc5de54375bbba
[ "Apache-2.0" ]
null
null
null
codility-lessons/7 Stacks and Queues.ipynb
stanislawbartkowski/learnml
1b87c3d433b38a86f85d6e9588cc5de54375bbba
[ "Apache-2.0" ]
null
null
null
codility-lessons/7 Stacks and Queues.ipynb
stanislawbartkowski/learnml
1b87c3d433b38a86f85d6e9588cc5de54375bbba
[ "Apache-2.0" ]
null
null
null
25.416667
82
0.462842
[ [ [ "# Brackets\nhttps://app.codility.com/programmers/lessons/7-stacks_and_queues/brackets/", "_____no_output_____" ] ], [ [ "from typing import List\n\ndef solution(S) :\n stack : List[str] = []\n\n for ch in S :\n if ch == '{' or ch == '(' or ch == '[' : stack.append(ch)\n else:\n if len(stack) == 0 : return 0\n lastch = stack.pop()\n if ch == '}' and lastch != '{' : return 0\n if ch == ')' and lastch != '(' : return 0\n if ch == ']' and lastch != '[' : return 0\n \n return 0 if (len(stack) > 0) else 1", "_____no_output_____" ], [ "assert(solution(\"{[()()]}\") == 1)\nassert(solution(\"([)()]\") == 0)\nassert(solution(\"\") == 1)\nassert(solution(\"[]{}\") == 1)\nassert(solution(\"[]{\") == 0)", "_____no_output_____" ] ], [ [ "# Fish\n\nhttps://app.codility.com/programmers/lessons/7-stacks_and_queues/fish/", "_____no_output_____" ] ], [ [ "from typing import List\n\ndef solution(A : List[int], B : List[int]) -> int :\n assert(len(A) == len(B))\n stackup : List[int] = []\n eatenfish : int = 0\n\n for i in range(len(B)) :\n if B[i] == 1 : stackup.append(A[i])\n else :\n while len(stackup) > 0 :\n eatenfish += 1\n currup : int = stackup.pop()\n if currup > A[i] : \n stackup.append(currup)\n break\n\n return len(A) - eatenfish", "_____no_output_____" ], [ "assert(solution([4,3,2,1,5],[0,1,0,0,0]) == 2)\nassert(solution([4],[1])==1)", "_____no_output_____" ] ], [ [ "# Nesting\nhttps://app.codility.com/programmers/lessons/7-stacks_and_queues/nesting/", "_____no_output_____" ] ], [ [ "def solution(S : str) -> int :\n numof : int = 0\n \n for c in S :\n if c == \"(\" :\n numof += 1\n else: \n if numof == 0 : return 0\n numof -= 1\n \n return 1 if numof == 0 else 0", "_____no_output_____" ], [ "assert (solution(\"(()(())())\") == 1)\nassert (solution(\"())\") == 0)\nassert (solution(\"\") == 1)", "_____no_output_____" ] ], [ [ "# StoneWall\nhttps://app.codility.com/programmers/lessons/7-stacks_and_queues/stone_wall/", "_____no_output_____" ] ], [ [ "from typing import List\n\ndef solution(H : List[int]) -> int :\n assert(len(H) > 0)\n stack : List[int] = []\n no : int = 0\n for i in range(len(H)) :\n while (len(stack) > 0) and H[i] < stack[len(stack) -1] :\n no += 1\n stack.pop()\n if len(stack) == 0 or H[i] > stack[len(stack) -1] :\n stack.append(H[i])\n \n return no + len(stack)", "_____no_output_____" ], [ "assert(solution([8,8,5,7,9,8,7,4,8]) ==7)\n \nassert(solution([8,8,5,7,9,8,7,8,4]) == 7)\n\nassert(solution([8,8]) == 1)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
d021f38cc7bbc3fa3fa32b416732f96d8d886c63
33,649
ipynb
Jupyter Notebook
credit_risk_ensemble.ipynb
THaoV1001/Classification-Homework
ce3d0800104504911eeb5a56639c14fac20e637e
[ "ADSL" ]
null
null
null
credit_risk_ensemble.ipynb
THaoV1001/Classification-Homework
ce3d0800104504911eeb5a56639c14fac20e637e
[ "ADSL" ]
null
null
null
credit_risk_ensemble.ipynb
THaoV1001/Classification-Homework
ce3d0800104504911eeb5a56639c14fac20e637e
[ "ADSL" ]
null
null
null
30.984346
275
0.4097
[ [ [ "# Ensemble Learning\n\n## Initial Imports", "_____no_output_____" ] ], [ [ "import warnings\nwarnings.filterwarnings('ignore')", "_____no_output_____" ], [ "import numpy as np\nimport pandas as pd\nfrom pathlib import Path\nfrom collections import Counter", "_____no_output_____" ], [ "from sklearn.metrics import balanced_accuracy_score\nfrom sklearn.metrics import confusion_matrix\nfrom imblearn.metrics import classification_report_imbalanced", "_____no_output_____" ] ], [ [ "## Read the CSV and Perform Basic Data Cleaning", "_____no_output_____" ] ], [ [ "# Load the data\nfile_path = Path('lending_data.csv')\ndf = pd.read_csv(file_path)\n\n# Preview the data\ndf.head()", "_____no_output_____" ], [ "# homeowner column is categorical, change to numerical so it can be scaled later on\nfrom sklearn.preprocessing import LabelEncoder\nlabel_encoder = LabelEncoder()\nlabel_encoder.fit(df[\"homeowner\"])\n\ndf[\"homeowner\"] = label_encoder.transform(df[\"homeowner\"])\ndf.head()", "_____no_output_____" ] ], [ [ "## Split the Data into Training and Testing", "_____no_output_____" ] ], [ [ "# Create our features\nX = df.drop(columns=\"loan_status\")\n\n# Create our target\ny = df[\"loan_status\"].to_frame()", "_____no_output_____" ], [ "X.describe()", "_____no_output_____" ], [ "# Check the balance of our target values\ny['loan_status'].value_counts()", "_____no_output_____" ], [ "# Split the X and y into X_train, X_test, y_train, y_test\n# Create X_train, X_test, y_train, y_test\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1, stratify=y)\nX_train", "_____no_output_____" ] ], [ [ "## Data Pre-Processing\n\nScale the training and testing data using the `StandardScaler` from `sklearn`. Remember that when scaling the data, you only scale the features data (`X_train` and `X_testing`).", "_____no_output_____" ] ], [ [ "# Create the StandardScaler instance\nfrom sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()", "_____no_output_____" ], [ "# Fit the Standard Scaler with the training data\n# When fitting scaling functions, only train on the training dataset\nX_scaler = scaler.fit(X_train)", "_____no_output_____" ], [ "# Scale the training and testing data\nX_train_scaled = X_scaler.transform(X_train)\nX_test_scaled = X_scaler.transform(X_test)", "_____no_output_____" ] ], [ [ "## Ensemble Learners\n\nIn this section, you will compare two ensemble algorithms to determine which algorithm results in the best performance. You will train a Balanced Random Forest Classifier and an Easy Ensemble classifier . For each algorithm, be sure to complete the folliowing steps:\n\n1. Train the model using the training data. \n2. Calculate the balanced accuracy score from sklearn.metrics.\n3. Display the confusion matrix from sklearn.metrics.\n4. Generate a classication report using the `imbalanced_classification_report` from imbalanced-learn.\n5. 
For the Balanced Random Forest Classifier only, print the feature importance sorted in descending order (most important feature to least important) along with the feature score\n\nNote: Use a random state of 1 for each algorithm to ensure consistency between tests", "_____no_output_____" ], [ "### Balanced Random Forest Classifier", "_____no_output_____" ] ], [ [ "# Resample the training data with the BalancedRandomForestClassifier\nfrom imblearn.ensemble import BalancedRandomForestClassifier\nbrf = BalancedRandomForestClassifier(n_estimators=100, random_state=1) #100 trees\n\n# random forest use 50/50 probability decision, so I think scaled data is not required\nbrf.fit(X_train, y_train)", "C:\\Users\\61421\\anaconda3\\envs\\pyvizenv\\lib\\site-packages\\ipykernel_launcher.py:6: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().\n \n" ], [ "# Calculated the balanced accuracy score\nfrom sklearn.metrics import balanced_accuracy_score\ny_pred = brf.predict(X_test)\nbalanced_accuracy_score(y_test, y_pred)", "_____no_output_____" ], [ "# Display the confusion matrix\nfrom sklearn.metrics import confusion_matrix\nconfusion_matrix(y_test, y_pred)", "_____no_output_____" ], [ "# Print the imbalanced classification report\nfrom imblearn.metrics import classification_report_imbalanced\nprint(classification_report_imbalanced(y_test, y_pred))", " pre rec spe f1 geo iba sup\n\n high_risk 0.81 1.00 0.99 0.89 0.99 0.99 625\n low_risk 1.00 0.99 1.00 1.00 0.99 0.99 18759\n\navg / total 0.99 0.99 1.00 0.99 0.99 0.99 19384\n\n" ], [ "# List the features sorted in descending order by feature importance\nimportances = brf.feature_importances_\nsorted(zip(brf.feature_importances_, X.columns), reverse=True)", "_____no_output_____" ] ], [ [ "### Easy Ensemble Classifier", "_____no_output_____" ] ], [ [ "# Train the Classifier\nfrom imblearn.ensemble import EasyEnsembleClassifier\neec = EasyEnsembleClassifier(n_estimators=100, random_state=1)\n\neec.fit(X_train, y_train)", "C:\\Users\\61421\\anaconda3\\envs\\pyvizenv\\lib\\site-packages\\sklearn\\utils\\validation.py:63: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n return f(*args, **kwargs)\n" ], [ "# Calculated the balanced accuracy score\ny_pred = eec.predict(X_test)\nbalanced_accuracy_score(y_test, y_pred)", "_____no_output_____" ], [ "# Display the confusion matrix\nconfusion_matrix(y_test, y_pred)", "_____no_output_____" ], [ "# Print the imbalanced classification report\nprint(classification_report_imbalanced(y_test, y_pred))", " pre rec spe f1 geo iba sup\n\n high_risk 0.84 1.00 0.99 0.91 0.99 0.99 625\n low_risk 1.00 0.99 1.00 1.00 0.99 0.99 18759\n\navg / total 0.99 0.99 1.00 0.99 0.99 0.99 19384\n\n" ] ], [ [ "### Final Questions\n\n1. Which model had the best balanced accuracy score?\n\n EEC has slightly better score, but the different is insignificant. \n\n2. Which model had the best recall score?\n\n Both models have the same recall score. \n\n3. Which model had the best geometric mean score?\n\n Both models have the same geometric mean score.\n\n4. What are the top three features?\n\n From Feature Importance, top 3 features are \"Debt to Income\", \"Interest Rate\" & \"Borrower Income\" ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ] ]
d022065303ea0097200b2cf3ef960cf4c0691919
53,192
ipynb
Jupyter Notebook
Xanadu3.ipynb
olgOk/XanaduTraining
1e4af1091117b219d7a504226a45a1065e010b26
[ "MIT" ]
null
null
null
Xanadu3.ipynb
olgOk/XanaduTraining
1e4af1091117b219d7a504226a45a1065e010b26
[ "MIT" ]
null
null
null
Xanadu3.ipynb
olgOk/XanaduTraining
1e4af1091117b219d7a504226a45a1065e010b26
[ "MIT" ]
null
null
null
101.125475
22,646
0.754456
[ [ [ "<a href=\"https://colab.research.google.com/github/olgOk/XanaduTraining/blob/master/Xanadu3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "pip install pennylane", "Requirement already satisfied: pennylane in /usr/local/lib/python3.6/dist-packages (0.10.0)\nRequirement already satisfied: semantic-version==2.6 in /usr/local/lib/python3.6/dist-packages (from pennylane) (2.6.0)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from pennylane) (1.18.5)\nRequirement already satisfied: toml in /usr/local/lib/python3.6/dist-packages (from pennylane) (0.10.1)\nRequirement already satisfied: networkx in /usr/local/lib/python3.6/dist-packages (from pennylane) (2.4)\nRequirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from pennylane) (1.4.1)\nRequirement already satisfied: appdirs in /usr/local/lib/python3.6/dist-packages (from pennylane) (1.4.4)\nRequirement already satisfied: autograd in /usr/local/lib/python3.6/dist-packages (from pennylane) (1.3)\nRequirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx->pennylane) (4.4.2)\nRequirement already satisfied: future>=0.15.2 in /usr/local/lib/python3.6/dist-packages (from autograd->pennylane) (0.16.0)\n" ], [ "pip install torch", "Requirement already satisfied: torch in /usr/local/lib/python3.6/dist-packages (1.5.1+cu101)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch) (1.18.5)\nRequirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch) (0.16.0)\n" ], [ "pip install tensorflow", "Requirement already satisfied: tensorflow in /usr/local/lib/python3.6/dist-packages (2.2.0)\nRequirement already satisfied: numpy<2.0,>=1.16.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.18.5)\nRequirement already satisfied: h5py<2.11.0,>=2.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (2.10.0)\nRequirement already satisfied: tensorboard<2.3.0,>=2.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (2.2.2)\nRequirement already satisfied: tensorflow-estimator<2.3.0,>=2.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (2.2.0)\nRequirement already satisfied: keras-preprocessing>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.1.2)\nRequirement already satisfied: gast==0.3.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (0.3.3)\nRequirement already satisfied: protobuf>=3.8.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (3.10.0)\nRequirement already satisfied: scipy==1.4.1; python_version >= \"3\" in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.4.1)\nRequirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.1.0)\nRequirement already satisfied: google-pasta>=0.1.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (0.2.0)\nRequirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (0.9.0)\nRequirement already satisfied: six>=1.12.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.12.0)\nRequirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (3.2.1)\nRequirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.6.3)\nRequirement 
already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.30.0)\nRequirement already satisfied: wheel>=0.26; python_version >= \"3\" in /usr/local/lib/python3.6/dist-packages (from tensorflow) (0.34.2)\nRequirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.12.1)\nRequirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow) (3.2.2)\nRequirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow) (0.4.1)\nRequirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow) (1.0.1)\nRequirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow) (1.17.2)\nRequirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow) (49.1.0)\nRequirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow) (1.7.0)\nRequirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow) (2.23.0)\nRequirement already satisfied: importlib-metadata; python_version < \"3.8\" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tensorboard<2.3.0,>=2.2.0->tensorflow) (1.7.0)\nRequirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow) (1.3.0)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow) (0.2.8)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow) (4.1.1)\nRequirement already satisfied: rsa<5,>=3.1.4; python_version >= \"3\" in /usr/local/lib/python3.6/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow) (4.6)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow) (2020.6.20)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow) (1.24.3)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow) (2.10)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow) (3.0.4)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < \"3.8\"->markdown>=2.6.8->tensorboard<2.3.0,>=2.2.0->tensorflow) (3.1.0)\nRequirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow) (3.1.0)\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.6/dist-packages (from 
pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow) (0.4.8)\n" ], [ "pip install sklearn", "Requirement already satisfied: sklearn in /usr/local/lib/python3.6/dist-packages (0.0)\nRequirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from sklearn) (0.22.2.post1)\nRequirement already satisfied: numpy>=1.11.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->sklearn) (1.18.5)\nRequirement already satisfied: scipy>=0.17.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->sklearn) (1.4.1)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->sklearn) (0.16.0)\n" ], [ "pip install pennylane-qiskit", "Requirement already satisfied: pennylane-qiskit in /usr/local/lib/python3.6/dist-packages (0.9.0)\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from pennylane-qiskit) (1.18.5)\nRequirement already satisfied: qiskit>=0.19.1 in /usr/local/lib/python3.6/dist-packages (from pennylane-qiskit) (0.19.6)\nRequirement already satisfied: pennylane>=0.9.0 in /usr/local/lib/python3.6/dist-packages (from pennylane-qiskit) (0.10.0)\nRequirement already satisfied: networkx>=2.2; python_version > \"3.5\" in /usr/local/lib/python3.6/dist-packages (from pennylane-qiskit) (2.4)\nRequirement already satisfied: qiskit-aqua==0.7.3 in /usr/local/lib/python3.6/dist-packages (from qiskit>=0.19.1->pennylane-qiskit) (0.7.3)\nRequirement already satisfied: qiskit-terra==0.14.2 in /usr/local/lib/python3.6/dist-packages (from qiskit>=0.19.1->pennylane-qiskit) (0.14.2)\nRequirement already satisfied: qiskit-aer==0.5.2 in /usr/local/lib/python3.6/dist-packages (from qiskit>=0.19.1->pennylane-qiskit) (0.5.2)\nRequirement already satisfied: qiskit-ibmq-provider==0.7.2 in /usr/local/lib/python3.6/dist-packages (from qiskit>=0.19.1->pennylane-qiskit) (0.7.2)\nRequirement already satisfied: qiskit-ignis==0.3.3 in /usr/local/lib/python3.6/dist-packages (from qiskit>=0.19.1->pennylane-qiskit) (0.3.3)\nRequirement already satisfied: appdirs in /usr/local/lib/python3.6/dist-packages (from pennylane>=0.9.0->pennylane-qiskit) (1.4.4)\nRequirement already satisfied: toml in /usr/local/lib/python3.6/dist-packages (from pennylane>=0.9.0->pennylane-qiskit) (0.10.1)\nRequirement already satisfied: semantic-version==2.6 in /usr/local/lib/python3.6/dist-packages (from pennylane>=0.9.0->pennylane-qiskit) (2.6.0)\nRequirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from pennylane>=0.9.0->pennylane-qiskit) (1.4.1)\nRequirement already satisfied: autograd in /usr/local/lib/python3.6/dist-packages (from pennylane>=0.9.0->pennylane-qiskit) (1.3)\nRequirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx>=2.2; python_version > \"3.5\"->pennylane-qiskit) (4.4.2)\nRequirement already satisfied: pyscf; sys_platform != \"win32\" in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) (1.7.3)\nRequirement already satisfied: scikit-learn>=0.20.0 in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) (0.22.2.post1)\nRequirement already satisfied: psutil>=5 in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) (5.4.8)\nRequirement already satisfied: sympy>=1.3 in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) 
(1.6.1)\nRequirement already satisfied: setuptools>=40.1.0 in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) (49.1.0)\nRequirement already satisfied: dlx in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) (1.0.4)\nRequirement already satisfied: docplex in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) (2.15.194)\nRequirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) (2.10.0)\nRequirement already satisfied: fastdtw in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) (0.3.4)\nRequirement already satisfied: quandl in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) (3.5.1)\nRequirement already satisfied: marshmallow<4,>=3 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit>=0.19.1->pennylane-qiskit) (3.7.0)\nRequirement already satisfied: python-constraint>=1.4 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit>=0.19.1->pennylane-qiskit) (1.4.0)\nRequirement already satisfied: fastjsonschema>=2.10 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit>=0.19.1->pennylane-qiskit) (2.14.4)\nRequirement already satisfied: dill>=0.3 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit>=0.19.1->pennylane-qiskit) (0.3.2)\nRequirement already satisfied: ply>=3.10 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit>=0.19.1->pennylane-qiskit) (3.11)\nRequirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit>=0.19.1->pennylane-qiskit) (2.6.0)\nRequirement already satisfied: python-dateutil>=2.8.0 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit>=0.19.1->pennylane-qiskit) (2.8.1)\nRequirement already satisfied: retworkx>=0.3.2 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit>=0.19.1->pennylane-qiskit) (0.3.4)\nRequirement already satisfied: marshmallow-polyfield<6,>=5.7 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit>=0.19.1->pennylane-qiskit) (5.9)\nRequirement already satisfied: cython>=0.27.1 in /usr/local/lib/python3.6/dist-packages (from qiskit-aer==0.5.2->qiskit>=0.19.1->pennylane-qiskit) (0.29.21)\nRequirement already satisfied: pybind11>=2.4 in /usr/local/lib/python3.6/dist-packages (from qiskit-aer==0.5.2->qiskit>=0.19.1->pennylane-qiskit) (2.5.0)\nRequirement already satisfied: nest-asyncio!=1.1.0,>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from qiskit-ibmq-provider==0.7.2->qiskit>=0.19.1->pennylane-qiskit) (1.3.3)\nRequirement already satisfied: requests>=2.19 in /usr/local/lib/python3.6/dist-packages (from qiskit-ibmq-provider==0.7.2->qiskit>=0.19.1->pennylane-qiskit) (2.23.0)\nRequirement already satisfied: websockets<8,>=7 in /usr/local/lib/python3.6/dist-packages (from qiskit-ibmq-provider==0.7.2->qiskit>=0.19.1->pennylane-qiskit) (7.0)\nRequirement already satisfied: requests-ntlm>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from qiskit-ibmq-provider==0.7.2->qiskit>=0.19.1->pennylane-qiskit) (1.1.0)\nRequirement already satisfied: urllib3>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from qiskit-ibmq-provider==0.7.2->qiskit>=0.19.1->pennylane-qiskit) (1.24.3)\nRequirement 
already satisfied: future>=0.15.2 in /usr/local/lib/python3.6/dist-packages (from autograd->pennylane>=0.9.0->pennylane-qiskit) (0.16.0)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.20.0->qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) (0.16.0)\nRequirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.6/dist-packages (from sympy>=1.3->qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) (1.1.0)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from docplex->qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) (1.12.0)\nRequirement already satisfied: more-itertools in /usr/local/lib/python3.6/dist-packages (from quandl->qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) (8.4.0)\nRequirement already satisfied: pandas>=0.14 in /usr/local/lib/python3.6/dist-packages (from quandl->qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) (1.0.5)\nRequirement already satisfied: inflection>=0.3.1 in /usr/local/lib/python3.6/dist-packages (from quandl->qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) (0.5.0)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19->qiskit-ibmq-provider==0.7.2->qiskit>=0.19.1->pennylane-qiskit) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19->qiskit-ibmq-provider==0.7.2->qiskit>=0.19.1->pennylane-qiskit) (2020.6.20)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19->qiskit-ibmq-provider==0.7.2->qiskit>=0.19.1->pennylane-qiskit) (2.10)\nRequirement already satisfied: cryptography>=1.3 in /usr/local/lib/python3.6/dist-packages (from requests-ntlm>=1.1.0->qiskit-ibmq-provider==0.7.2->qiskit>=0.19.1->pennylane-qiskit) (2.9.2)\nRequirement already satisfied: ntlm-auth>=1.0.2 in /usr/local/lib/python3.6/dist-packages (from requests-ntlm>=1.1.0->qiskit-ibmq-provider==0.7.2->qiskit>=0.19.1->pennylane-qiskit) (1.5.0)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.14->quandl->qiskit-aqua==0.7.3->qiskit>=0.19.1->pennylane-qiskit) (2018.9)\nRequirement already satisfied: cffi!=1.11.3,>=1.8 in /usr/local/lib/python3.6/dist-packages (from cryptography>=1.3->requests-ntlm>=1.1.0->qiskit-ibmq-provider==0.7.2->qiskit>=0.19.1->pennylane-qiskit) (1.14.0)\nRequirement already satisfied: pycparser in /usr/local/lib/python3.6/dist-packages (from cffi!=1.11.3,>=1.8->cryptography>=1.3->requests-ntlm>=1.1.0->qiskit-ibmq-provider==0.7.2->qiskit>=0.19.1->pennylane-qiskit) (2.20)\n" ], [ "import pennylane as qml\nfrom pennylane import numpy as np", "_____no_output_____" ], [ "dev = qml.device(\"default.qubit\", wires=2)\n@qml.qnode(device=dev)\ndef cos_func(x, w):\n qml.RX(x, wires=0)\n qml.templates.BasicEntanglerLayers(w, wires=range(2))\n return qml.expval(qml.PauliZ(0))\n\nlayer = 4\nweights = qml.init.basic_entangler_layers_uniform(layer, 2)\n\nxs = np.linspace(-np.pi, 4*np.pi, requires_grad=False)\nys = np.cos(xs)", "_____no_output_____" ], [ "opt = qml.AdamOptimizer()\nepochs = 10\n\nfor epoch in range(epochs):\n for x, y in zip(xs, ys):\n cost = lambda weights:(cos_func(x, weights) - y) ** 2\n weights = opt.step(cost, weights)\n\nys_trained = [cos_func(x, weights) for x in xs]\n", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n\nplt.figure()\nplt.plot(xs, ys_trained, marker=\"o\", 
label=\"Cos(x\")\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "", "_____no_output_____" ] ], [ [ "## Preparing GHZ state\n\nUsing the Autograd interface, train a circuit to prepare the 3-qubit W state:\n\n$|W> = {1/sqrt(3)}(001|> + |010> + |100>)", "_____no_output_____" ] ], [ [ "qubits = 3\n\nw = np.array([0, 1, 1, 0, 1, 0, 0, 0]) / np.sqrt(3)\nw_projector = w[:, np.newaxis] * w\nw_decomp = qml.utils.decompose_hamiltonian(w_projector)\nH = qml.Hamiltonian(*w_decomp)\n\ndef prepare_w(weights, wires):\n qml.templates.StronglyEntanglingLayers(weights, wires=wires)\n\ndev = qml.device(\"default.qubit\", wires=qubits)\nqnodes = qml.map(prepare_w, H.ops, dev)\nw_overlap = qml.dot(H.coeffs, qnodes)\n\nlayers = 4\nweights = qml.init.strong_ent_layers_uniform(layers, qubits)\n\nopt = qml.RMSPropOptimizer()\n\nepochs = 50\n\nfor i in range(epochs):\n weights = opt.step(lambda weights: -w_overlap(weights), weights)\n if i % 5 == 0:\n print(i, w_overlap(weights))\n\noutput_overlap = w_overlap(weights)\noutput_state = np.round(dev.state, 3)\n", "_____no_output_____" ] ], [ [ "##Quantum-based Optimization", "_____no_output_____" ] ], [ [ "dev = qml.device('default.qubit', wires=1)\n\n@qml.qnode(dev)\ndef rotation(thetas):\n qml.RX(1, wires=0)\n qml.RZ(1, wires=0)\n \n qml.RX(thetas[0], wires=0)\n qml.RY(thetas[1], wires=0)\n\n return qml.expval(qml.PauliZ(0))\n", "_____no_output_____" ], [ "opt = qml.RotoselectOptimizer()\n\n", "_____no_output_____" ], [ "import sklearn.datasets\n\ndata = sklearn.datasets.load_iris()\nx = data[\"data\"]\ny = data[\"target\"]\n\nnp.random.seed(1967)\nx, y = zip(*np.random.permutation(list(zip(x, y))))\n\nsplit = 125\n\nx_train = x[:split]\nx_test = x[split:]\ny_train = y[:split]\ny_test = y[split:]\n", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
d02209f9fd745eb214f001cd1a9b2ba0015ea9b5
305,154
ipynb
Jupyter Notebook
parte1.ipynb
tiagodalloca/mc920-trabalho1
644e1a7e383a6fd934fcaec15e5de2d5d52c3a4d
[ "MIT" ]
null
null
null
parte1.ipynb
tiagodalloca/mc920-trabalho1
644e1a7e383a6fd934fcaec15e5de2d5d52c3a4d
[ "MIT" ]
null
null
null
parte1.ipynb
tiagodalloca/mc920-trabalho1
644e1a7e383a6fd934fcaec15e5de2d5d52c3a4d
[ "MIT" ]
null
null
null
1,276.794979
150,960
0.961393
[ [ [ "# Parte 1 - Imagens coloridas\n\n**TIAGO PEREIRA DALL'OCA - 206341**", "_____no_output_____" ] ], [ [ "from scipy import misc\nfrom scipy import ndimage\nimport cv2\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "img = cv2.imread('imagens/baboon.png')", "_____no_output_____" ], [ "img.shape", "_____no_output_____" ] ], [ [ "## a)\n\nAqui é bem autoexplicativo. É criado uma matriz que irá multiplicar os vetores que representam os três canais de cores de cada pixel.", "_____no_output_____" ] ], [ [ "matriz_a = np.array([[0.393, 0.769, 0.189],\n [0.394, 0.686, 0.168],\n [0.272, 0.534, 0.131]])", "_____no_output_____" ], [ "img_a = np.dot(img, matriz_a)/255\nimg_a = img_a.clip(max=[1,1,1])\nimg_a.shape", "_____no_output_____" ], [ "plt.imshow(img_a)", "_____no_output_____" ] ], [ [ "## b)\n\nSemelhante ao item \"a\", porém agora faremos uma multiplicação vetorial que nos resultará em uma imagem de canal único (imagem cinza).", "_____no_output_____" ] ], [ [ "vetor_b = np.array([0.2989, 0.5870, 0.1140])", "_____no_output_____" ], [ "img_b = np.tensordot(img, vetor_b, axes=([2], [0]))/255\nimg_b = img_b.clip(max=[1]).reshape(img.shape[0:2])\nimg_b.shape", "_____no_output_____" ], [ "plt.imshow(img_b)", "_____no_output_____" ] ], [ [ "## O Programa\n\n**Executar**: \n```\npython3 python3 parte1_imagens_coloridas.py imagens_coloridas/baboon.png [...]\n```\n\nAs imagens resultado da aplicação dos filtros vão estar na pasta `imagens_mascaradas_parte1/` local.\n\nAs imagens estão prefixadas com \"a\" e \"b\" indicando a qual item pertencem.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
d0221d3e5b90db8e540f6b05e8a0532131e2d35d
46,176
ipynb
Jupyter Notebook
notebooks/2017-05-27-data-science-of-data-science.ipynb
daniel-acuna/daniel-acuna.github.io
f3dec9f84b594a8d1afdac89b7553b8269e0e230
[ "MIT", "BSD-3-Clause" ]
2
2019-02-03T17:09:28.000Z
2019-06-10T07:05:13.000Z
notebooks/2017-05-27-data-science-of-data-science.ipynb
daniel-acuna/daniel-acuna.github.io
f3dec9f84b594a8d1afdac89b7553b8269e0e230
[ "MIT", "BSD-3-Clause" ]
null
null
null
notebooks/2017-05-27-data-science-of-data-science.ipynb
daniel-acuna/daniel-acuna.github.io
f3dec9f84b594a8d1afdac89b7553b8269e0e230
[ "MIT", "BSD-3-Clause" ]
1
2020-06-23T22:16:44.000Z
2020-06-23T22:16:44.000Z
401.530435
43,092
0.918269
[ [ [ "---\nlayout: single\ntitle: Including a Jupyter notebook in Github Jekyll\nexcerpt: Some examples of the capabilitites of Jekyll and Notebooks\n\n---", "_____no_output_____" ] ], [ [ "The goal of this post is to share how to make Jupyter notebook generate Markdown for Github. This accelerates the publication of code and ideas. Jupyter notebook is a feature-rich environment where you can type equations, put code, and figures.\n\nThe basic idea is to run a basic script in the `notebooks` folder, named `run_jupyter.sh`, which runs a custom configuration for Jupyter notebook. This configuration adds a hook to the saving cycle of the notebook. This hook transforms the Jupyter notebook (`.ipynb`) file into a Markdown file (`.md`) and generates the figures embeeded in it. Then, it moves the files into the `_posts` folder. Then, you only need to run Jekyll to see how the post look like.", "_____no_output_____" ], [ "First, I will load some Python packages:", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns", "_____no_output_____" ] ], [ [ "I will run plot this simple example:", "_____no_output_____" ] ], [ [ "import numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nsns.set(style=\"dark\")\nrs = np.random.RandomState(50)\n\nf, axes = plt.subplots(3, 3, figsize=(9, 9), sharex=True, sharey=True)\n\nfor ax, s in zip(axes.flat, np.linspace(0, 3, 10)):\n\n # Create a cubehelix colormap to use with kdeplot\n cmap = sns.cubehelix_palette(start=s, light=1, as_cmap=True)\n\n # Generate and plot a random bivariate dataset\n x, y = rs.randn(2, 50)\n sns.kdeplot(x, y, cmap=cmap, shade=True, cut=5, ax=ax)\n ax.set(xlim=(-3, 3), ylim=(-3, 3))\n\nf.tight_layout()", "_____no_output_____" ] ] ]
[ "raw", "markdown", "code", "markdown", "code" ]
[ [ "raw" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d02222a7741bbcbc70431c5d8c4e65ba6687c1b1
20,062
ipynb
Jupyter Notebook
Python-Programming/Python-3-Bootcamp/13-Advanced Python Modules/.ipynb_checkpoints/05-Regular Expressions - re-checkpoint.ipynb
vivekparasharr/Learn-Programming
1ae07ef5143bff3c504978e1d375698820f59af0
[ "MIT" ]
null
null
null
Python-Programming/Python-3-Bootcamp/13-Advanced Python Modules/.ipynb_checkpoints/05-Regular Expressions - re-checkpoint.ipynb
vivekparasharr/Learn-Programming
1ae07ef5143bff3c504978e1d375698820f59af0
[ "MIT" ]
null
null
null
Python-Programming/Python-3-Bootcamp/13-Advanced Python Modules/.ipynb_checkpoints/05-Regular Expressions - re-checkpoint.ipynb
vivekparasharr/Learn-Programming
1ae07ef5143bff3c504978e1d375698820f59af0
[ "MIT" ]
null
null
null
33.548495
444
0.517645
[ [ [ "# Regular Expressions\n\nRegular expressions are text-matching patterns described with a formal syntax. You'll often hear regular expressions referred to as 'regex' or 'regexp' in conversation. Regular expressions can include a variety of rules, from finding repetition, to text-matching, and much more. As you advance in Python you'll see that a lot of your parsing problems can be solved with regular expressions (they're also a common interview question!).\n\n\nIf you're familiar with Perl, you'll notice that the syntax for regular expressions are very similar in Python. We will be using the <code>re</code> module with Python for this lecture.\n\n\nLet's get started!", "_____no_output_____" ], [ "## Searching for Patterns in Text\n\nOne of the most common uses for the re module is for finding patterns in text. Let's do a quick example of using the search method in the re module to find some text:", "_____no_output_____" ] ], [ [ "import re\n\n# List of patterns to search for\npatterns = ['term1', 'term2']\n\n# Text to parse\ntext = 'This is a string with term1, but it does not have the other term.'\n\nfor pattern in patterns:\n print('Searching for \"%s\" in:\\n \"%s\"\\n' %(pattern,text))\n \n #Check for match\n if re.search(pattern,text):\n print('Match was found. \\n')\n else:\n print('No Match was found.\\n')", "Searching for \"term1\" in:\n \"This is a string with term1, but it does not have the other term.\"\n\nMatch was found. \n\nSearching for \"term2\" in:\n \"This is a string with term1, but it does not have the other term.\"\n\nNo Match was found.\n\n" ] ], [ [ "Now we've seen that <code>re.search()</code> will take the pattern, scan the text, and then return a **Match** object. If no pattern is found, **None** is returned. To give a clearer picture of this match object, check out the cell below:", "_____no_output_____" ] ], [ [ "# List of patterns to search for\npattern = 'term1'\n\n# Text to parse\ntext = 'This is a string with term1, but it does not have the other term.'\n\nmatch = re.search(pattern,text)\n\ntype(match)", "_____no_output_____" ] ], [ [ "This **Match** object returned by the search() method is more than just a Boolean or None, it contains information about the match, including the original input string, the regular expression that was used, and the location of the match. Let's see the methods we can use on the match object:", "_____no_output_____" ] ], [ [ "# Show start of match\nmatch.start()", "_____no_output_____" ], [ "# Show end\nmatch.end()", "_____no_output_____" ] ], [ [ "## Split with regular expressions\n\nLet's see how we can split with the re syntax. This should look similar to how you used the split() method with strings.", "_____no_output_____" ] ], [ [ "# Term to split on\nsplit_term = '@'\n\nphrase = 'What is the domain name of someone with the email: hello@gmail.com'\n\n# Split the phrase\nre.split(split_term,phrase)", "_____no_output_____" ] ], [ [ "Note how <code>re.split()</code> returns a list with the term to split on removed and the terms in the list are a split up version of the string. Create a couple of more examples for yourself to make sure you understand!\n\n## Finding all instances of a pattern\n\nYou can use <code>re.findall()</code> to find all the instances of a pattern in a string. 
For example:", "_____no_output_____" ] ], [ [ "# Returns a list of all matches\nre.findall('match','test phrase match is in middle')", "_____no_output_____" ] ], [ [ "## re Pattern Syntax\n\nThis will be the bulk of this lecture on using re with Python. Regular expressions support a huge variety of patterns beyond just simply finding where a single string occurred. \n\nWe can use *metacharacters* along with re to find specific types of patterns. \n\nSince we will be testing multiple re syntax forms, let's create a function that will print out results given a list of various regular expressions and a phrase to parse:", "_____no_output_____" ] ], [ [ "def multi_re_find(patterns,phrase):\n '''\n Takes in a list of regex patterns\n Prints a list of all matches\n '''\n for pattern in patterns:\n print('Searching the phrase using the re check: %r' %(pattern))\n print(re.findall(pattern,phrase))\n print('\\n')", "_____no_output_____" ] ], [ [ "### Repetition Syntax\n\nThere are five ways to express repetition in a pattern:\n\n 1. A pattern followed by the meta-character <code>*</code> is repeated zero or more times. \n 2. Replace the <code>*</code> with <code>+</code> and the pattern must appear at least once. \n 3. Using <code>?</code> means the pattern appears zero or one time. \n 4. For a specific number of occurrences, use <code>{m}</code> after the pattern, where **m** is replaced with the number of times the pattern should repeat. \n 5. Use <code>{m,n}</code> where **m** is the minimum number of repetitions and **n** is the maximum. Leaving out **n** <code>{m,}</code> means the value appears at least **m** times, with no maximum.\n \nNow we will see an example of each of these using our multi_re_find function:", "_____no_output_____" ] ], [ [ "test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'\n\ntest_patterns = [ 'sd*', # s followed by zero or more d's\n 'sd+', # s followed by one or more d's\n 'sd?', # s followed by zero or one d's\n 'sd{3}', # s followed by three d's\n 'sd{2,3}', # s followed by two to three d's\n ]\n\nmulti_re_find(test_patterns,test_phrase)", "Searching the phrase using the re check: 'sd*'\n['sd', 'sd', 's', 's', 'sddd', 'sddd', 'sddd', 'sd', 's', 's', 's', 's', 's', 's', 'sdddd']\n\n\nSearching the phrase using the re check: 'sd+'\n['sd', 'sd', 'sddd', 'sddd', 'sddd', 'sd', 'sdddd']\n\n\nSearching the phrase using the re check: 'sd?'\n['sd', 'sd', 's', 's', 'sd', 'sd', 'sd', 'sd', 's', 's', 's', 's', 's', 's', 'sd']\n\n\nSearching the phrase using the re check: 'sd{3}'\n['sddd', 'sddd', 'sddd', 'sddd']\n\n\nSearching the phrase using the re check: 'sd{2,3}'\n['sddd', 'sddd', 'sddd', 'sddd']\n\n\n" ] ], [ [ "## Character Sets\n\nCharacter sets are used when you wish to match any one of a group of characters at a point in the input. Brackets are used to construct character set inputs. 
For example: the input <code>[ab]</code> searches for occurrences of either **a** or **b**.\nLet's see some examples:", "_____no_output_____" ] ], [ [ "test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'\n\ntest_patterns = ['[sd]', # either s or d\n 's[sd]+'] # s followed by one or more s or d\n\nmulti_re_find(test_patterns,test_phrase)", "Searching the phrase using the re check: '[sd]'\n['s', 'd', 's', 'd', 's', 's', 's', 'd', 'd', 'd', 's', 'd', 'd', 'd', 's', 'd', 'd', 'd', 'd', 's', 'd', 's', 'd', 's', 's', 's', 's', 's', 's', 'd', 'd', 'd', 'd']\n\n\nSearching the phrase using the re check: 's[sd]+'\n['sdsd', 'sssddd', 'sdddsddd', 'sds', 'sssss', 'sdddd']\n\n\n" ] ], [ [ "It makes sense that the first input <code>[sd]</code> returns every instance of s or d. Also, the second input <code>s[sd]+</code> returns any full strings that begin with an s and continue with s or d characters until another character is reached.", "_____no_output_____" ], [ "## Exclusion\n\nWe can use <code>^</code> to exclude terms by incorporating it into the bracket syntax notation. For example: <code>[^...]</code> will match any single character not in the brackets. Let's see some examples:", "_____no_output_____" ] ], [ [ "test_phrase = 'This is a string! But it has punctuation. How can we remove it?'", "_____no_output_____" ] ], [ [ "Use <code>[^!.? ]</code> to check for matches that are not a !,.,?, or space. Add a <code>+</code> to check that the match appears at least once. This basically translates into finding the words.", "_____no_output_____" ] ], [ [ "re.findall('[^!.? ]+',test_phrase)", "_____no_output_____" ] ], [ [ "## Character Ranges\n\nAs character sets grow larger, typing every character that should (or should not) match could become very tedious. A more compact format using character ranges lets you define a character set to include all of the contiguous characters between a start and stop point. The format used is <code>[start-end]</code>.\n\nCommon use cases are to search for a specific range of letters in the alphabet. For instance, <code>[a-f]</code> would return matches with any occurrence of letters between a and f. \n\nLet's walk through some examples:", "_____no_output_____" ] ], [ [ "\ntest_phrase = 'This is an example sentence. Lets see if we can find some letters.'\n\ntest_patterns=['[a-z]+', # sequences of lower case letters\n '[A-Z]+', # sequences of upper case letters\n '[a-zA-Z]+', # sequences of lower or upper case letters\n '[A-Z][a-z]+'] # one upper case letter followed by lower case letters\n \nmulti_re_find(test_patterns,test_phrase)", "Searching the phrase using the re check: '[a-z]+'\n['his', 'is', 'an', 'example', 'sentence', 'ets', 'see', 'if', 'we', 'can', 'find', 'some', 'letters']\n\n\nSearching the phrase using the re check: '[A-Z]+'\n['T', 'L']\n\n\nSearching the phrase using the re check: '[a-zA-Z]+'\n['This', 'is', 'an', 'example', 'sentence', 'Lets', 'see', 'if', 'we', 'can', 'find', 'some', 'letters']\n\n\nSearching the phrase using the re check: '[A-Z][a-z]+'\n['This', 'Lets']\n\n\n" ] ], [ [ "## Escape Codes\n\nYou can use special escape codes to find specific types of patterns in your data, such as digits, non-digits, whitespace, and more. 
For example:\n\n<table border=\"1\" class=\"docutils\">\n<colgroup>\n<col width=\"14%\" />\n<col width=\"86%\" />\n</colgroup>\n<thead valign=\"bottom\">\n<tr class=\"row-odd\"><th class=\"head\">Code</th>\n<th class=\"head\">Meaning</th>\n</tr>\n</thead>\n<tbody valign=\"top\">\n<tr class=\"row-even\"><td><tt class=\"docutils literal\"><span class=\"pre\">\\d</span></tt></td>\n<td>a digit</td>\n</tr>\n<tr class=\"row-odd\"><td><tt class=\"docutils literal\"><span class=\"pre\">\\D</span></tt></td>\n<td>a non-digit</td>\n</tr>\n<tr class=\"row-even\"><td><tt class=\"docutils literal\"><span class=\"pre\">\\s</span></tt></td>\n<td>whitespace (tab, space, newline, etc.)</td>\n</tr>\n<tr class=\"row-odd\"><td><tt class=\"docutils literal\"><span class=\"pre\">\\S</span></tt></td>\n<td>non-whitespace</td>\n</tr>\n<tr class=\"row-even\"><td><tt class=\"docutils literal\"><span class=\"pre\">\\w</span></tt></td>\n<td>alphanumeric</td>\n</tr>\n<tr class=\"row-odd\"><td><tt class=\"docutils literal\"><span class=\"pre\">\\W</span></tt></td>\n<td>non-alphanumeric</td>\n</tr>\n</tbody>\n</table>\n\nEscapes are indicated by prefixing the character with a backslash <code>\\</code>. Unfortunately, a backslash must itself be escaped in normal Python strings, and that results in expressions that are difficult to read. Using raw strings, created by prefixing the literal value with <code>r</code>, eliminates this problem and maintains readability.\n\nPersonally, I think this use of <code>r</code> to escape a backslash is probably one of the things that block someone who is not familiar with regex in Python from being able to read regex code at first. Hopefully after seeing these examples this syntax will become clear.", "_____no_output_____" ] ], [ [ "test_phrase = 'This is a string with some numbers 1233 and a symbol #hashtag'\n\ntest_patterns=[ r'\\d+', # sequence of digits\n r'\\D+', # sequence of non-digits\n r'\\s+', # sequence of whitespace\n r'\\S+', # sequence of non-whitespace\n r'\\w+', # alphanumeric characters\n r'\\W+', # non-alphanumeric\n ]\n\nmulti_re_find(test_patterns,test_phrase)", "Searching the phrase using the re check: '\\\\d+'\n['1233']\n\n\nSearching the phrase using the re check: '\\\\D+'\n['This is a string with some numbers ', ' and a symbol #hashtag']\n\n\nSearching the phrase using the re check: '\\\\s+'\n[' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ']\n\n\nSearching the phrase using the re check: '\\\\S+'\n['This', 'is', 'a', 'string', 'with', 'some', 'numbers', '1233', 'and', 'a', 'symbol', '#hashtag']\n\n\nSearching the phrase using the re check: '\\\\w+'\n['This', 'is', 'a', 'string', 'with', 'some', 'numbers', '1233', 'and', 'a', 'symbol', 'hashtag']\n\n\nSearching the phrase using the re check: '\\\\W+'\n[' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' #']\n\n\n" ] ], [ [ "## Conclusion\n\nYou should now have a solid understanding of how to use the regular expression module in Python. There are a ton of more special character instances, but it would be unreasonable to go through every single use case. Instead take a look at the full [documentation](https://docs.python.org/3/library/re.html#regular-expression-syntax) if you ever need to look up a particular pattern.\n\nYou can also check out the nice summary tables at this [source](http://www.tutorialspoint.com/python/python_reg_expressions.htm).\n\nGood job!\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
d0223de02f6fdaa6529a4970d2ba0ecc8d5df39d
237,044
ipynb
Jupyter Notebook
experiments/tuned_1v2/oracle.run2/trials/4/trial.ipynb
stevester94/csc500-notebooks
4c1b04c537fe233a75bed82913d9d84985a89177
[ "MIT" ]
null
null
null
experiments/tuned_1v2/oracle.run2/trials/4/trial.ipynb
stevester94/csc500-notebooks
4c1b04c537fe233a75bed82913d9d84985a89177
[ "MIT" ]
null
null
null
experiments/tuned_1v2/oracle.run2/trials/4/trial.ipynb
stevester94/csc500-notebooks
4c1b04c537fe233a75bed82913d9d84985a89177
[ "MIT" ]
null
null
null
101.735622
74,808
0.79624
[ [ [ "# PTN Template\nThis notebook serves as a template for single dataset PTN experiments \nIt can be run on its own by setting STANDALONE to True (do a find for \"STANDALONE\" to see where) \nBut it is intended to be executed as part of a *papermill.py script. See any of the \nexperimentes with a papermill script to get started with that workflow. ", "_____no_output_____" ] ], [ [ "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n\n \nimport os, json, sys, time, random\nimport numpy as np\nimport torch\nfrom torch.optim import Adam\nfrom easydict import EasyDict\nimport matplotlib.pyplot as plt\n\nfrom steves_models.steves_ptn import Steves_Prototypical_Network\n\nfrom steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper\nfrom steves_utils.iterable_aggregator import Iterable_Aggregator\nfrom steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig\nfrom steves_utils.torch_sequential_builder import build_sequential\nfrom steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader\nfrom steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)\nfrom steves_utils.PTN.utils import independent_accuracy_assesment\n\nfrom steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory\n\nfrom steves_utils.ptn_do_report import (\n get_loss_curve,\n get_results_table,\n get_parameters_table,\n get_domain_accuracies,\n)\n\nfrom steves_utils.transforms import get_chained_transform", "_____no_output_____" ] ], [ [ "# Required Parameters\nThese are allowed parameters, not defaults\nEach of these values need to be present in the injected parameters (the notebook will raise an exception if they are not present)\n\nPapermill uses the cell tag \"parameters\" to inject the real parameters below this cell.\nEnable tags to see what I mean", "_____no_output_____" ] ], [ [ "required_parameters = {\n \"experiment_name\",\n \"lr\",\n \"device\",\n \"seed\",\n \"dataset_seed\",\n \"labels_source\",\n \"labels_target\",\n \"domains_source\",\n \"domains_target\",\n \"num_examples_per_domain_per_label_source\",\n \"num_examples_per_domain_per_label_target\",\n \"n_shot\",\n \"n_way\",\n \"n_query\",\n \"train_k_factor\",\n \"val_k_factor\",\n \"test_k_factor\",\n \"n_epoch\",\n \"patience\",\n \"criteria_for_best\",\n \"x_transforms_source\",\n \"x_transforms_target\",\n \"episode_transforms_source\",\n \"episode_transforms_target\",\n \"pickle_name\",\n \"x_net\",\n \"NUM_LOGS_PER_EPOCH\",\n \"BEST_MODEL_PATH\",\n \"torch_default_dtype\"\n}", "_____no_output_____" ], [ "\n\nstandalone_parameters = {}\nstandalone_parameters[\"experiment_name\"] = \"STANDALONE PTN\"\nstandalone_parameters[\"lr\"] = 0.0001\nstandalone_parameters[\"device\"] = \"cuda\"\n\nstandalone_parameters[\"seed\"] = 1337\nstandalone_parameters[\"dataset_seed\"] = 1337\n\n\nstandalone_parameters[\"num_examples_per_domain_per_label_source\"]=100\nstandalone_parameters[\"num_examples_per_domain_per_label_target\"]=100\n\nstandalone_parameters[\"n_shot\"] = 3\nstandalone_parameters[\"n_query\"] = 2\nstandalone_parameters[\"train_k_factor\"] = 1\nstandalone_parameters[\"val_k_factor\"] = 2\nstandalone_parameters[\"test_k_factor\"] = 2\n\n\nstandalone_parameters[\"n_epoch\"] = 100\n\nstandalone_parameters[\"patience\"] = 10\nstandalone_parameters[\"criteria_for_best\"] = \"target_accuracy\"\n\nstandalone_parameters[\"x_transforms_source\"] = [\"unit_power\"]\nstandalone_parameters[\"x_transforms_target\"] = 
[\"unit_power\"]\nstandalone_parameters[\"episode_transforms_source\"] = []\nstandalone_parameters[\"episode_transforms_target\"] = []\n\nstandalone_parameters[\"torch_default_dtype\"] = \"torch.float32\" \n\n\n\nstandalone_parameters[\"x_net\"] = [\n {\"class\": \"nnReshape\", \"kargs\": {\"shape\":[-1, 1, 2, 256]}},\n {\"class\": \"Conv2d\", \"kargs\": { \"in_channels\":1, \"out_channels\":256, \"kernel_size\":(1,7), \"bias\":False, \"padding\":(0,3), },},\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\":256}},\n\n {\"class\": \"Conv2d\", \"kargs\": { \"in_channels\":256, \"out_channels\":80, \"kernel_size\":(2,7), \"bias\":True, \"padding\":(0,3), },},\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\":80}},\n {\"class\": \"Flatten\", \"kargs\": {}},\n\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 80*256, \"out_features\": 256}}, # 80 units per IQ pair\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm1d\", \"kargs\": {\"num_features\":256}},\n\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 256, \"out_features\": 256}},\n]\n\n# Parameters relevant to results\n# These parameters will basically never need to change\nstandalone_parameters[\"NUM_LOGS_PER_EPOCH\"] = 10\nstandalone_parameters[\"BEST_MODEL_PATH\"] = \"./best_model.pth\"\n\n# uncomment for CORES dataset\nfrom steves_utils.CORES.utils import (\n ALL_NODES,\n ALL_NODES_MINIMUM_1000_EXAMPLES,\n ALL_DAYS\n)\n\n\nstandalone_parameters[\"labels_source\"] = ALL_NODES\nstandalone_parameters[\"labels_target\"] = ALL_NODES\n\nstandalone_parameters[\"domains_source\"] = [1]\nstandalone_parameters[\"domains_target\"] = [2,3,4,5]\n\nstandalone_parameters[\"pickle_name\"] = \"cores.stratified_ds.2022A.pkl\"\n\n\n# Uncomment these for ORACLE dataset\n# from steves_utils.ORACLE.utils_v2 import (\n# ALL_DISTANCES_FEET,\n# ALL_RUNS,\n# ALL_SERIAL_NUMBERS,\n# )\n# standalone_parameters[\"labels_source\"] = ALL_SERIAL_NUMBERS\n# standalone_parameters[\"labels_target\"] = ALL_SERIAL_NUMBERS\n# standalone_parameters[\"domains_source\"] = [8,20, 38,50]\n# standalone_parameters[\"domains_target\"] = [14, 26, 32, 44, 56]\n# standalone_parameters[\"pickle_name\"] = \"oracle.frame_indexed.stratified_ds.2022A.pkl\"\n# standalone_parameters[\"num_examples_per_domain_per_label_source\"]=1000\n# standalone_parameters[\"num_examples_per_domain_per_label_target\"]=1000\n\n# Uncomment these for Metahan dataset\n# standalone_parameters[\"labels_source\"] = list(range(19))\n# standalone_parameters[\"labels_target\"] = list(range(19))\n# standalone_parameters[\"domains_source\"] = [0]\n# standalone_parameters[\"domains_target\"] = [1]\n# standalone_parameters[\"pickle_name\"] = \"metehan.stratified_ds.2022A.pkl\"\n# standalone_parameters[\"n_way\"] = len(standalone_parameters[\"labels_source\"])\n# standalone_parameters[\"num_examples_per_domain_per_label_source\"]=200\n# standalone_parameters[\"num_examples_per_domain_per_label_target\"]=100\n\n\nstandalone_parameters[\"n_way\"] = len(standalone_parameters[\"labels_source\"])", "_____no_output_____" ], [ "# Parameters\nparameters = {\n \"experiment_name\": \"tuned_1v2:oracle.run2\",\n \"device\": \"cuda\",\n \"lr\": 0.0001,\n \"labels_source\": [\n \"3123D52\",\n \"3123D65\",\n \"3123D79\",\n \"3123D80\",\n \"3123D54\",\n \"3123D70\",\n \"3123D7B\",\n \"3123D89\",\n \"3123D58\",\n \"3123D76\",\n \"3123D7D\",\n \"3123EFE\",\n 
\"3123D64\",\n \"3123D78\",\n \"3123D7E\",\n \"3124E4A\",\n ],\n \"labels_target\": [\n \"3123D52\",\n \"3123D65\",\n \"3123D79\",\n \"3123D80\",\n \"3123D54\",\n \"3123D70\",\n \"3123D7B\",\n \"3123D89\",\n \"3123D58\",\n \"3123D76\",\n \"3123D7D\",\n \"3123EFE\",\n \"3123D64\",\n \"3123D78\",\n \"3123D7E\",\n \"3124E4A\",\n ],\n \"episode_transforms_source\": [],\n \"episode_transforms_target\": [],\n \"domains_source\": [8, 32, 50],\n \"domains_target\": [14, 20, 26, 38, 44],\n \"num_examples_per_domain_per_label_source\": -1,\n \"num_examples_per_domain_per_label_target\": -1,\n \"n_shot\": 3,\n \"n_way\": 16,\n \"n_query\": 2,\n \"train_k_factor\": 3,\n \"val_k_factor\": 2,\n \"test_k_factor\": 2,\n \"torch_default_dtype\": \"torch.float32\",\n \"n_epoch\": 50,\n \"patience\": 3,\n \"criteria_for_best\": \"target_accuracy\",\n \"x_net\": [\n {\"class\": \"nnReshape\", \"kargs\": {\"shape\": [-1, 1, 2, 256]}},\n {\n \"class\": \"Conv2d\",\n \"kargs\": {\n \"in_channels\": 1,\n \"out_channels\": 256,\n \"kernel_size\": [1, 7],\n \"bias\": False,\n \"padding\": [0, 3],\n },\n },\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\": 256}},\n {\n \"class\": \"Conv2d\",\n \"kargs\": {\n \"in_channels\": 256,\n \"out_channels\": 80,\n \"kernel_size\": [2, 7],\n \"bias\": True,\n \"padding\": [0, 3],\n },\n },\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\": 80}},\n {\"class\": \"Flatten\", \"kargs\": {}},\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 20480, \"out_features\": 256}},\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm1d\", \"kargs\": {\"num_features\": 256}},\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 256, \"out_features\": 256}},\n ],\n \"NUM_LOGS_PER_EPOCH\": 10,\n \"BEST_MODEL_PATH\": \"./best_model.pth\",\n \"pickle_name\": \"oracle.Run2_10kExamples_stratified_ds.2022A.pkl\",\n \"x_transforms_source\": [\"unit_mag\"],\n \"x_transforms_target\": [\"unit_mag\"],\n \"dataset_seed\": 500,\n \"seed\": 500,\n}\n", "_____no_output_____" ], [ "# Set this to True if you want to run this template directly\nSTANDALONE = False\nif STANDALONE:\n print(\"parameters not injected, running with standalone_parameters\")\n parameters = standalone_parameters\n\nif not 'parameters' in locals() and not 'parameters' in globals():\n raise Exception(\"Parameter injection failed\")\n\n#Use an easy dict for all the parameters\np = EasyDict(parameters)\n\nsupplied_keys = set(p.keys())\n\nif supplied_keys != required_parameters:\n print(\"Parameters are incorrect\")\n if len(supplied_keys - required_parameters)>0: print(\"Shouldn't have:\", str(supplied_keys - required_parameters))\n if len(required_parameters - supplied_keys)>0: print(\"Need to have:\", str(required_parameters - supplied_keys))\n raise RuntimeError(\"Parameters are incorrect\")\n\n", "_____no_output_____" ], [ "###################################\n# Set the RNGs and make it all deterministic\n###################################\nnp.random.seed(p.seed)\nrandom.seed(p.seed)\ntorch.manual_seed(p.seed)\n\ntorch.use_deterministic_algorithms(True) ", "_____no_output_____" ], [ "###########################################\n# The stratified datasets honor this\n###########################################\ntorch.set_default_dtype(eval(p.torch_default_dtype))", "_____no_output_____" ], [ "###################################\n# Build the network(s)\n# Note: It's 
critical to do this AFTER setting the RNG\n# (This is due to the randomized initial weights)\n###################################\nx_net = build_sequential(p.x_net)", "_____no_output_____" ], [ "start_time_secs = time.time()", "_____no_output_____" ], [ "###################################\n# Build the dataset\n###################################\n\nif p.x_transforms_source == []: x_transform_source = None\nelse: x_transform_source = get_chained_transform(p.x_transforms_source) \n\nif p.x_transforms_target == []: x_transform_target = None\nelse: x_transform_target = get_chained_transform(p.x_transforms_target)\n\nif p.episode_transforms_source == []: episode_transform_source = None\nelse: raise Exception(\"episode_transform_source not implemented\")\n\nif p.episode_transforms_target == []: episode_transform_target = None\nelse: raise Exception(\"episode_transform_target not implemented\")\n\n\neaf_source = Episodic_Accessor_Factory(\n labels=p.labels_source,\n domains=p.domains_source,\n num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source,\n iterator_seed=p.seed,\n dataset_seed=p.dataset_seed,\n n_shot=p.n_shot,\n n_way=p.n_way,\n n_query=p.n_query,\n train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),\n pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),\n x_transform_func=x_transform_source,\n example_transform_func=episode_transform_source,\n \n)\ntrain_original_source, val_original_source, test_original_source = eaf_source.get_train(), eaf_source.get_val(), eaf_source.get_test()\n\n\neaf_target = Episodic_Accessor_Factory(\n labels=p.labels_target,\n domains=p.domains_target,\n num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_target,\n iterator_seed=p.seed,\n dataset_seed=p.dataset_seed,\n n_shot=p.n_shot,\n n_way=p.n_way,\n n_query=p.n_query,\n train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),\n pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),\n x_transform_func=x_transform_target,\n example_transform_func=episode_transform_target,\n)\ntrain_original_target, val_original_target, test_original_target = eaf_target.get_train(), eaf_target.get_val(), eaf_target.get_test()\n\n\ntransform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only\n\ntrain_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)\nval_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)\ntest_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)\n\ntrain_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)\nval_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)\ntest_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)\n\ndatasets = EasyDict({\n \"source\": {\n \"original\": {\"train\":train_original_source, \"val\":val_original_source, \"test\":test_original_source},\n \"processed\": {\"train\":train_processed_source, \"val\":val_processed_source, \"test\":test_processed_source}\n },\n \"target\": {\n \"original\": {\"train\":train_original_target, \"val\":val_original_target, \"test\":test_original_target},\n \"processed\": {\"train\":train_processed_target, \"val\":val_processed_target, \"test\":test_processed_target}\n },\n})", "_____no_output_____" ], [ "# Some quick unit tests on the data\nfrom steves_utils.transforms import get_average_power, get_average_magnitude\n\nq_x, q_y, 
s_x, s_y, truth = next(iter(train_processed_source))\n\nassert q_x.dtype == eval(p.torch_default_dtype)\nassert s_x.dtype == eval(p.torch_default_dtype)\n\nprint(\"Visually inspect these to see if they line up with expected values given the transforms\")\nprint('x_transforms_source', p.x_transforms_source)\nprint('x_transforms_target', p.x_transforms_target)\nprint(\"Average magnitude, source:\", get_average_magnitude(q_x[0].numpy()))\nprint(\"Average power, source:\", get_average_power(q_x[0].numpy()))\n\nq_x, q_y, s_x, s_y, truth = next(iter(train_processed_target))\nprint(\"Average magnitude, target:\", get_average_magnitude(q_x[0].numpy()))\nprint(\"Average power, target:\", get_average_power(q_x[0].numpy()))\n", "Visually inspect these to see if they line up with expected values given the transforms\nx_transforms_source ['unit_mag']\nx_transforms_target ['unit_mag']\nAverage magnitude, source: 1.0\nAverage power, source: 1.2574446\n" ], [ "###################################\n# Build the model\n###################################\nmodel = Steves_Prototypical_Network(x_net, device=p.device, x_shape=(2,256))\noptimizer = Adam(params=model.parameters(), lr=p.lr)", "(2, 256)\n" ], [ "###################################\n# train\n###################################\njig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)\n\njig.train(\n train_iterable=datasets.source.processed.train,\n source_val_iterable=datasets.source.processed.val,\n target_val_iterable=datasets.target.processed.val,\n num_epochs=p.n_epoch,\n num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,\n patience=p.patience,\n optimizer=optimizer,\n criteria_for_best=p.criteria_for_best,\n)", "epoch: 1, [batch: 1 / 12600], examples_per_second: 27.0819, train_label_loss: 2.7782, \n" ], [ "total_experiment_time_secs = time.time() - start_time_secs", "_____no_output_____" ], [ "###################################\n# Evaluate the model\n###################################\nsource_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)\ntarget_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)\n\nsource_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)\ntarget_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)\n\nhistory = jig.get_history()\n\ntotal_epochs_trained = len(history[\"epoch_indices\"])\n\nval_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))\n\nconfusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)\nper_domain_accuracy = per_domain_accuracy_from_confusion(confusion)\n\n# Add a key to per_domain_accuracy for if it was a source domain\nfor domain, accuracy in per_domain_accuracy.items():\n per_domain_accuracy[domain] = {\n \"accuracy\": accuracy,\n \"source?\": domain in p.domains_source\n }\n\n# Do an independent accuracy assesment JUST TO BE SURE!\n# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)\n# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)\n# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)\n# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)\n\n# assert(_source_test_label_accuracy == source_test_label_accuracy)\n# assert(_target_test_label_accuracy == target_test_label_accuracy)\n# 
assert(_source_val_label_accuracy == source_val_label_accuracy)\n# assert(_target_val_label_accuracy == target_val_label_accuracy)\n\nexperiment = {\n \"experiment_name\": p.experiment_name,\n \"parameters\": dict(p),\n \"results\": {\n \"source_test_label_accuracy\": source_test_label_accuracy,\n \"source_test_label_loss\": source_test_label_loss,\n \"target_test_label_accuracy\": target_test_label_accuracy,\n \"target_test_label_loss\": target_test_label_loss,\n \"source_val_label_accuracy\": source_val_label_accuracy,\n \"source_val_label_loss\": source_val_label_loss,\n \"target_val_label_accuracy\": target_val_label_accuracy,\n \"target_val_label_loss\": target_val_label_loss,\n \"total_epochs_trained\": total_epochs_trained,\n \"total_experiment_time_secs\": total_experiment_time_secs,\n \"confusion\": confusion,\n \"per_domain_accuracy\": per_domain_accuracy,\n },\n \"history\": history,\n \"dataset_metrics\": get_dataset_metrics(datasets, \"ptn\"),\n}", "_____no_output_____" ], [ "ax = get_loss_curve(experiment)\nplt.show()", "_____no_output_____" ], [ "get_results_table(experiment)", "_____no_output_____" ], [ "get_domain_accuracies(experiment)", "_____no_output_____" ], [ "print(\"Source Test Label Accuracy:\", experiment[\"results\"][\"source_test_label_accuracy\"], \"Target Test Label Accuracy:\", experiment[\"results\"][\"target_test_label_accuracy\"])\nprint(\"Source Val Label Accuracy:\", experiment[\"results\"][\"source_val_label_accuracy\"], \"Target Val Label Accuracy:\", experiment[\"results\"][\"target_val_label_accuracy\"])", "Source Test Label Accuracy: 0.7174131944444444 Target Test Label Accuracy: 0.5981875\nSource Val Label Accuracy: 0.7176041666666667 Target Val Label Accuracy: 0.5975520833333333\n" ], [ "json.dumps(experiment)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d02241d313de8b4ee050e90fc3a6311f6fc7ae14
49,141
ipynb
Jupyter Notebook
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
ce45b1eee2231abe1a844697648e94b98dadabea
[ "MIT" ]
null
null
null
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
ce45b1eee2231abe1a844697648e94b98dadabea
[ "MIT" ]
null
null
null
Jupyter notebook/Practice 4 - Cython.ipynb
marcomussi/RecommenderSystemPolimi
ce45b1eee2231abe1a844697648e94b98dadabea
[ "MIT" ]
null
null
null
31.500641
415
0.527726
[ [ [ "# Recommender Systems 2018/19\n\n### Practice 4 - Similarity with Cython\n\n\n### Cython is a superset of Python, allowing you to use C-like operations and import C code. Cython files (.pyx) are compiled and support static typing.", "_____no_output_____" ] ], [ [ "import time\nimport numpy as np", "_____no_output_____" ] ], [ [ "### Let's implement something simple", "_____no_output_____" ] ], [ [ "def isPrime(n):\n \n i = 2\n \n # Usually you loop up to sqrt(n)\n while i < n:\n if n % i == 0:\n return False\n \n i += 1\n \n return True", "_____no_output_____" ], [ "print(\"Is prime 2? {}\".format(isPrime(2)))\nprint(\"Is prime 3? {}\".format(isPrime(3)))\nprint(\"Is prime 5? {}\".format(isPrime(5)))\nprint(\"Is prime 15? {}\".format(isPrime(15)))\nprint(\"Is prime 20? {}\".format(isPrime(20)))", "Is prime 2? True\nIs prime 3? True\nIs prime 5? True\nIs prime 15? False\nIs prime 20? False\n" ], [ "start_time = time.time()\n\nresult = isPrime(80000023)\n\nprint(\"Is Prime 80000023? {}, time required {:.2f} sec\".format(result, time.time()-start_time))", "Is Prime 80000023? True, time required 8.19 sec\n" ] ], [ [ "#### Load Cython magic command, this takes care of the compilation step. If you are writing code outside Jupyter you'll have to compile using other tools", "_____no_output_____" ] ], [ [ "%load_ext Cython", "_____no_output_____" ] ], [ [ "#### Declare Cython function, paste the same code as before. The function will be compiled and then executed with a Python interface", "_____no_output_____" ] ], [ [ "%%cython\ndef isPrime(n):\n \n i = 2\n \n # Usually you loop up to sqrt(n)\n while i < n:\n if n % i == 0:\n return False\n \n i += 1\n \n return True", "_____no_output_____" ], [ "start_time = time.time()\n\nresult = isPrime(80000023)\n\nprint(\"Is Prime 80000023? {}, time required {:.2f} sec\".format(result, time.time()-start_time))", "Is Prime 80000023? True, time required 4.81 sec\n" ] ], [ [ "#### As you can see by just compiling the same code we got some improvement.\n#### To go seriously higher, we have to use some static tiping", "_____no_output_____" ] ], [ [ "%%cython\n# Declare the tipe of the arguments\ndef isPrime(long n):\n \n # Declare index of for loop\n cdef long i\n \n i = 2\n \n # Usually you loop up to sqrt(n)\n while i < n:\n if n % i == 0:\n return False\n \n i += 1\n \n return True", "_____no_output_____" ], [ "start_time = time.time()\n\nresult = isPrime(80000023)\n\nprint(\"Is Prime 80000023? {}, time required {:.2f} sec\".format(result, time.time()-start_time))", "Is Prime 80000023? True, time required 0.94 sec\n" ] ], [ [ "#### Cython code with two tipe declaration, for n and i, runs 50x faster than Python", "_____no_output_____" ], [ "#### Main benefits of Cython:\n* Compiled, no interpreter\n* Static typing, no overhead\n* Fast loops, no need to vectorize. 
Vectorization sometimes performes lots of useless operations\n* Numpy, which is fast in python, becomes often slooooow compared to a carefully written Cython code", "_____no_output_____" ], [ "### Similarity with Cython\n\n#### Load the usual data.", "_____no_output_____" ] ], [ [ "from urllib.request import urlretrieve\nimport zipfile\n\n# skip the download\n#urlretrieve (\"http://files.grouplens.org/datasets/movielens/ml-10m.zip\", \"data/Movielens_10M/movielens_10m.zip\")\ndataFile = zipfile.ZipFile(\"data/Movielens_10M/movielens_10m.zip\")\nURM_path = dataFile.extract(\"ml-10M100K/ratings.dat\", path = \"data/Movielens_10M\")\nURM_file = open(URM_path, 'r')\n\n\ndef rowSplit (rowString):\n \n split = rowString.split(\"::\")\n split[3] = split[3].replace(\"\\n\",\"\")\n \n split[0] = int(split[0])\n split[1] = int(split[1])\n split[2] = float(split[2])\n split[3] = int(split[3])\n \n result = tuple(split)\n \n return result\n\n\nURM_file.seek(0)\nURM_tuples = []\n\nfor line in URM_file:\n URM_tuples.append(rowSplit (line))\n\nuserList, itemList, ratingList, timestampList = zip(*URM_tuples)\n\nuserList = list(userList)\nitemList = list(itemList)\nratingList = list(ratingList)\ntimestampList = list(timestampList)\n\nimport scipy.sparse as sps\n\nURM_all = sps.coo_matrix((ratingList, (userList, itemList)))\nURM_all = URM_all.tocsr()\n\nURM_all\n\n\n", "_____no_output_____" ], [ "from Notebooks_utils.data_splitter import train_test_holdout\n\n\nURM_train, URM_test = train_test_holdout(URM_all, train_perc = 0.8)", "_____no_output_____" ], [ "URM_train", "_____no_output_____" ] ], [ [ "#### Since we cannot store in memory the whole similarity, we compute it one row at a time", "_____no_output_____" ] ], [ [ "itemIndex=1\nitem_ratings = URM_train[:,itemIndex]\nitem_ratings = item_ratings.toarray().squeeze()\n\nitem_ratings.shape", "_____no_output_____" ], [ "this_item_weights = URM_train.T.dot(item_ratings)\nthis_item_weights.shape", "_____no_output_____" ] ], [ [ "#### Once we have the scores for that row, we get the TopK", "_____no_output_____" ] ], [ [ "k=10\n\ntop_k_idx = np.argsort(this_item_weights) [-k:]\ntop_k_idx", "_____no_output_____" ], [ "import scipy.sparse as sps", "_____no_output_____" ], [ "# Function hiding some conversion checks\ndef check_matrix(X, format='csc', dtype=np.float32):\n if format == 'csc' and not isinstance(X, sps.csc_matrix):\n return X.tocsc().astype(dtype)\n elif format == 'csr' and not isinstance(X, sps.csr_matrix):\n return X.tocsr().astype(dtype)\n elif format == 'coo' and not isinstance(X, sps.coo_matrix):\n return X.tocoo().astype(dtype)\n elif format == 'dok' and not isinstance(X, sps.dok_matrix):\n return X.todok().astype(dtype)\n elif format == 'bsr' and not isinstance(X, sps.bsr_matrix):\n return X.tobsr().astype(dtype)\n elif format == 'dia' and not isinstance(X, sps.dia_matrix):\n return X.todia().astype(dtype)\n elif format == 'lil' and not isinstance(X, sps.lil_matrix):\n return X.tolil().astype(dtype)\n else:\n return X.astype(dtype)", "_____no_output_____" ] ], [ [ "#### Create a Basic Collaborative filtering recommender using only cosine similarity", "_____no_output_____" ] ], [ [ "class BasicItemKNN_CF_Recommender(object):\n \"\"\" ItemKNN recommender with cosine similarity and no shrinkage\"\"\"\n\n def __init__(self, URM):\n self.dataset = URM\n \n \n def compute_similarity(self, URM):\n \n # We explore the matrix column-wise\n URM = check_matrix(URM, 'csc') \n \n values = []\n rows = []\n cols = []\n \n start_time = time.time()\n 
processedItems = 0\n \n # Compute all similarities for each item using vectorization\n for itemIndex in range(URM.shape[0]):\n \n processedItems += 1\n \n if processedItems % 100==0:\n \n itemPerSec = processedItems/(time.time()-start_time)\n \n print(\"Similarity item {}, {:.2f} item/sec, required time {:.2f} min\".format(\n processedItems, itemPerSec, URM.shape[0]/itemPerSec/60))\n \n # All ratings for a given item\n item_ratings = URM[:,itemIndex]\n item_ratings = item_ratings.toarray().squeeze()\n \n # Compute item similarities\n this_item_weights = URM_train.T.dot(item_ratings)\n \n # Sort indices and select TopK\n top_k_idx = np.argsort(this_item_weights) [-self.k:]\n \n # Incrementally build sparse matrix\n values.extend(this_item_weights[top_k_idx])\n rows.extend(np.arange(URM.shape[0])[top_k_idx])\n cols.extend(np.ones(self.k) * itemIndex)\n \n self.W_sparse = sps.csc_matrix((values, (rows, cols)),\n shape=(URM.shape[0], URM.shape[0]),\n dtype=np.float32)\n\n \n\n def fit(self, k=50, shrinkage=100):\n\n self.k = k\n self.shrinkage = shrinkage\n \n item_weights = self.compute_similarity(self.dataset)\n \n item_weights = check_matrix(item_weights, 'csr')\n \n \n def recommend(self, user_id, at=None, exclude_seen=True):\n # compute the scores using the dot product\n user_profile = self.URM[user_id]\n scores = user_profile.dot(self.W_sparse).toarray().ravel()\n\n if exclude_seen:\n scores = self.filter_seen(user_id, scores)\n\n # rank items\n ranking = scores.argsort()[::-1]\n \n return ranking[:at]\n \n \n def filter_seen(self, user_id, scores):\n\n start_pos = self.URM.indptr[user_id]\n end_pos = self.URM.indptr[user_id+1]\n\n user_profile = self.URM.indices[start_pos:end_pos]\n \n scores[user_profile] = -np.inf\n\n return scores", "_____no_output_____" ] ], [ [ "#### Let's isolate the compute_similarity function ", "_____no_output_____" ] ], [ [ "def compute_similarity(URM, k=100):\n\n # We explore the matrix column-wise\n URM = check_matrix(URM, 'csc')\n \n n_items = URM.shape[0]\n\n values = []\n rows = []\n cols = []\n\n start_time = time.time()\n processedItems = 0\n\n # Compute all similarities for each item using vectorization\n # for itemIndex in range(n_items):\n for itemIndex in range(1000):\n\n processedItems += 1\n\n if processedItems % 100==0:\n\n itemPerSec = processedItems/(time.time()-start_time)\n\n print(\"Similarity item {}, {:.2f} item/sec, required time {:.2f} min\".format(\n processedItems, itemPerSec, n_items/itemPerSec/60))\n\n # All ratings for a given item\n item_ratings = URM[:,itemIndex]\n item_ratings = item_ratings.toarray().squeeze()\n\n # Compute item similarities\n this_item_weights = URM.T.dot(item_ratings)\n\n # Sort indices and select TopK\n top_k_idx = np.argsort(this_item_weights) [-k:]\n\n # Incrementally build sparse matrix\n values.extend(this_item_weights[top_k_idx])\n rows.extend(np.arange(URM.shape[0])[top_k_idx])\n cols.extend(np.ones(k) * itemIndex)\n\n W_sparse = sps.csc_matrix((values, (rows, cols)),\n shape=(n_items, n_items),\n dtype=np.float32)\n\n return W_sparse\n ", "_____no_output_____" ], [ "compute_similarity(URM_train)", "Similarity item 100, 81.61 item/sec, required time 14.62 min\nSimilarity item 200, 80.34 item/sec, required time 14.85 min\nSimilarity item 300, 80.08 item/sec, required time 14.89 min\nSimilarity item 400, 80.50 item/sec, required time 14.82 min\nSimilarity item 500, 80.02 item/sec, required time 14.91 min\nSimilarity item 600, 80.30 item/sec, required time 14.85 min\nSimilarity item 700, 80.23 item/sec, 
required time 14.87 min\nSimilarity item 800, 80.58 item/sec, required time 14.80 min\nSimilarity item 900, 81.18 item/sec, required time 14.69 min\nSimilarity item 1000, 81.15 item/sec, required time 14.70 min\n" ] ], [ [ "### We see that computing the similarity takes more or less 15 minutes\n### Now we use the same identical code, but we compile it", "_____no_output_____" ] ], [ [ "%%cython\nimport time\nimport numpy as np\nimport scipy.sparse as sps\n\ndef compute_similarity_compiled(URM, k=100):\n\n # We explore the matrix column-wise\n URM = URM.tocsc()\n \n n_items = URM.shape[0]\n\n values = []\n rows = []\n cols = []\n\n start_time = time.time()\n processedItems = 0\n\n # Compute all similarities for each item using vectorization\n # for itemIndex in range(n_items):\n for itemIndex in range(1000):\n\n processedItems += 1\n\n if processedItems % 100==0:\n\n itemPerSec = processedItems/(time.time()-start_time)\n\n print(\"Similarity item {}, {:.2f} item/sec, required time {:.2f} min\".format(\n processedItems, itemPerSec, n_items/itemPerSec/60))\n\n # All ratings for a given item\n item_ratings = URM[:,itemIndex]\n item_ratings = item_ratings.toarray().squeeze()\n\n # Compute item similarities\n this_item_weights = URM.T.dot(item_ratings)\n\n # Sort indices and select TopK\n top_k_idx = np.argsort(this_item_weights) [-k:]\n\n # Incrementally build sparse matrix\n values.extend(this_item_weights[top_k_idx])\n rows.extend(np.arange(URM.shape[0])[top_k_idx])\n cols.extend(np.ones(k) * itemIndex)\n\n W_sparse = sps.csc_matrix((values, (rows, cols)),\n shape=(n_items, n_items),\n dtype=np.float32)\n\n return W_sparse\n ", "_____no_output_____" ], [ "compute_similarity_compiled(URM_train)", "Similarity item 100, 56.48 item/sec, required time 21.12 min\nSimilarity item 200, 56.12 item/sec, required time 21.25 min\nSimilarity item 300, 56.58 item/sec, required time 21.08 min\nSimilarity item 400, 56.42 item/sec, required time 21.14 min\nSimilarity item 500, 56.74 item/sec, required time 21.02 min\nSimilarity item 600, 56.90 item/sec, required time 20.96 min\nSimilarity item 700, 56.90 item/sec, required time 20.96 min\nSimilarity item 800, 56.97 item/sec, required time 20.94 min\nSimilarity item 900, 56.84 item/sec, required time 20.99 min\nSimilarity item 1000, 56.57 item/sec, required time 21.08 min\n" ] ], [ [ "#### As opposed to the previous example, compilation by itself is not very helpful. 
Why?\n#### Because the compiler is just porting in C all operations that the python interpreter would have to perform, dynamic tiping included\n\n### Now try to add some tipes", "_____no_output_____" ] ], [ [ "%%cython\nimport time\nimport numpy as np\nimport scipy.sparse as sps\n\ncimport numpy as np\n\ndef compute_similarity_compiled(URM, int k=100):\n \n cdef int itemIndex, processedItems\n \n # We use the numpy syntax, allowing us to perform vectorized operations\n cdef np.ndarray[double, ndim=1] item_ratings, this_item_weights\n cdef np.ndarray[long, ndim=1] top_k_idx\n\n # We explore the matrix column-wise\n URM = URM.tocsc()\n \n n_items = URM.shape[0]\n\n values = []\n rows = []\n cols = []\n\n start_time = time.time()\n processedItems = 0\n\n # Compute all similarities for each item using vectorization\n # for itemIndex in range(n_items):\n for itemIndex in range(1000):\n\n processedItems += 1\n\n if processedItems % 100==0:\n\n itemPerSec = processedItems/(time.time()-start_time)\n\n print(\"Similarity item {}, {:.2f} item/sec, required time {:.2f} min\".format(\n processedItems, itemPerSec, n_items/itemPerSec/60))\n\n # All ratings for a given item\n item_ratings = URM[:,itemIndex].toarray().squeeze()\n\n # Compute item similarities\n this_item_weights = URM.T.dot(item_ratings)\n\n # Sort indices and select TopK\n top_k_idx = np.argsort(this_item_weights) [-k:]\n\n # Incrementally build sparse matrix\n values.extend(this_item_weights[top_k_idx])\n rows.extend(np.arange(URM.shape[0])[top_k_idx])\n cols.extend(np.ones(k) * itemIndex)\n\n W_sparse = sps.csc_matrix((values, (rows, cols)),\n shape=(n_items, n_items),\n dtype=np.float32)\n\n return W_sparse", "_____no_output_____" ], [ "compute_similarity_compiled(URM_train)", "Similarity item 100, 57.80 item/sec, required time 20.64 min\nSimilarity item 200, 53.69 item/sec, required time 22.22 min\nSimilarity item 300, 54.57 item/sec, required time 21.86 min\nSimilarity item 400, 54.07 item/sec, required time 22.06 min\nSimilarity item 500, 54.65 item/sec, required time 21.83 min\nSimilarity item 600, 54.82 item/sec, required time 21.76 min\nSimilarity item 700, 55.08 item/sec, required time 21.66 min\nSimilarity item 800, 55.30 item/sec, required time 21.57 min\nSimilarity item 900, 55.64 item/sec, required time 21.44 min\nSimilarity item 1000, 55.80 item/sec, required time 21.38 min\n" ] ], [ [ "### Still no luck! Why?\n### There are a few reasons:\n* We are getting the data from the sparse matrix using its interface, which is SLOW\n* We are transforming sparse data into a dense array, which is SLOW\n* We are performing a dot product against a dense vector\n\n#### You colud find a workaround... here we do something different", "_____no_output_____" ], [ "### Proposed solution\n### Change the algorithm!\n\n### Instead of performing the dot product, let's implement somenting that computes the similarity using sparse data directly\n\n### We loop through the data and update selectively the similarity matrix cells. 
\n### Underlying idea:\n* When I select an item I can know which users rated it\n* Instead of looping through the other items trying to find common users, I use the URM to find which other items that user rated\n* The user I am considering will be common between the two, so I increment the similarity of the two items\n* Instead of following the path item1 -> loop item2 -> find user, i go item1 -> loop user -> loop item2", "_____no_output_____" ] ], [ [ "data_matrix = np.array([[1,1,0,1],[0,1,1,1],[1,0,1,0]])\ndata_matrix = sps.csc_matrix(data_matrix)\ndata_matrix.todense()", "_____no_output_____" ] ], [ [ "### Example: Compute the similarities for item 1\n\n#### Step 1: get users that rated item 1", "_____no_output_____" ] ], [ [ "users_rated_item = data_matrix[:,1]\nusers_rated_item.indices", "_____no_output_____" ] ], [ [ "#### Step 2: count how many times those users rated other items", "_____no_output_____" ] ], [ [ "item_similarity = data_matrix[users_rated_item.indices].sum(axis = 0)\nnp.array(item_similarity).squeeze()", "_____no_output_____" ] ], [ [ "#### Verify our result against the common method. We can see that the similarity values for col 1 are identical", "_____no_output_____" ] ], [ [ "similarity_matrix_product = data_matrix.T.dot(data_matrix)\nsimilarity_matrix_product.toarray()[:,1]", "_____no_output_____" ], [ "# The following code works for implicit feedback only\ndef compute_similarity_new_algorithm(URM, k=100):\n\n # We explore the matrix column-wise\n URM = check_matrix(URM, 'csc')\n URM.data = np.ones_like(URM.data)\n \n n_items = URM.shape[0]\n\n values = []\n rows = []\n cols = []\n\n start_time = time.time()\n processedItems = 0\n\n # Compute all similarities for each item using vectorization\n # for itemIndex in range(n_items):\n for itemIndex in range(1000):\n\n processedItems += 1\n\n if processedItems % 100==0:\n\n itemPerSec = processedItems/(time.time()-start_time)\n\n print(\"Similarity item {}, {:.2f} item/sec, required time {:.2f} min\".format(\n processedItems, itemPerSec, n_items/itemPerSec/60))\n\n # All ratings for a given item\n users_rated_item = URM.indices[URM.indptr[itemIndex]:URM.indptr[itemIndex+1]]\n\n # Compute item similarities\n this_item_weights = URM[users_rated_item].sum(axis = 0)\n this_item_weights = np.array(this_item_weights).squeeze()\n\n # Sort indices and select TopK\n top_k_idx = np.argsort(this_item_weights) [-k:]\n\n # Incrementally build sparse matrix\n values.extend(this_item_weights[top_k_idx])\n rows.extend(np.arange(URM.shape[0])[top_k_idx])\n cols.extend(np.ones(k) * itemIndex)\n\n W_sparse = sps.csc_matrix((values, (rows, cols)),\n shape=(n_items, n_items),\n dtype=np.float32)\n\n return W_sparse\n ", "_____no_output_____" ], [ "compute_similarity_new_algorithm(URM_train)", "Similarity item 100, 28.04 item/sec, required time 42.53 min\nSimilarity item 200, 28.37 item/sec, required time 42.04 min\nSimilarity item 300, 28.85 item/sec, required time 41.35 min\nSimilarity item 400, 28.77 item/sec, required time 41.45 min\nSimilarity item 500, 29.20 item/sec, required time 40.85 min\nSimilarity item 600, 28.85 item/sec, required time 41.34 min\nSimilarity item 700, 29.60 item/sec, required time 40.30 min\nSimilarity item 800, 29.91 item/sec, required time 39.88 min\nSimilarity item 900, 30.54 item/sec, required time 39.06 min\nSimilarity item 1000, 30.61 item/sec, required time 38.96 min\n" ] ], [ [ "#### Slower but expected, dot product operations are implemented in an efficient way and here we are using an indirect 
approach", "_____no_output_____" ], [ "### Now let's write this algorithm in Cython", "_____no_output_____" ] ], [ [ "%%cython\n\nimport time\n\nimport numpy as np\ncimport numpy as np\nfrom cpython.array cimport array, clone\n\nimport scipy.sparse as sps\n\n\ncdef class Cosine_Similarity:\n\n cdef int TopK\n cdef long n_items\n\n # Arrays containing the sparse data\n cdef int[:] user_to_item_row_ptr, user_to_item_cols\n cdef int[:] item_to_user_rows, item_to_user_col_ptr\n cdef double[:] user_to_item_data, item_to_user_data\n\n # In case you select no TopK\n cdef double[:,:] W_dense\n\n \n def __init__(self, URM, TopK = 100):\n \"\"\"\n Dataset must be a matrix with items as columns\n :param dataset:\n :param TopK:\n \"\"\"\n\n super(Cosine_Similarity, self).__init__()\n\n self.n_items = URM.shape[1]\n\n self.TopK = min(TopK, self.n_items)\n\n URM = URM.tocsr()\n self.user_to_item_row_ptr = URM.indptr\n self.user_to_item_cols = URM.indices\n self.user_to_item_data = np.array(URM.data, dtype=np.float64)\n\n URM = URM.tocsc()\n self.item_to_user_rows = URM.indices\n self.item_to_user_col_ptr = URM.indptr\n self.item_to_user_data = np.array(URM.data, dtype=np.float64)\n\n if self.TopK == 0:\n self.W_dense = np.zeros((self.n_items,self.n_items))\n\n\n\n cdef int[:] getUsersThatRatedItem(self, long item_id):\n return self.item_to_user_rows[self.item_to_user_col_ptr[item_id]:self.item_to_user_col_ptr[item_id+1]]\n\n cdef int[:] getItemsRatedByUser(self, long user_id):\n return self.user_to_item_cols[self.user_to_item_row_ptr[user_id]:self.user_to_item_row_ptr[user_id+1]]\n\n \n \n cdef double[:] computeItemSimilarities(self, long item_id_input):\n \"\"\"\n For every item the cosine similarity against other items depends on whether they have users in common. 
\n The more common users the higher the similarity.\n \n The basic implementation is:\n - Select the first item\n - Loop through all other items\n -- Given the two items, get the users they have in common\n -- Update the similarity considering all common users\n \n That is VERY slow due to the common user part, in which a long data structure is looped multiple times.\n \n A better way is to use the data structure in a different way skipping the search part, getting directly\n the information we need.\n \n The implementation here used is:\n - Select the first item\n - Initialize a zero valued array for the similarities\n - Get the users who rated the first item\n - Loop through the users\n -- Given a user, get the items he rated (second item)\n -- Update the similarity of the items he rated\n \n \n \"\"\"\n\n # Create template used to initialize an array with zeros\n # Much faster than np.zeros(self.n_items)\n cdef array[double] template_zero = array('d')\n cdef array[double] result = clone(template_zero, self.n_items, zero=True)\n\n\n cdef long user_index, user_id, item_index, item_id_second\n\n cdef int[:] users_that_rated_item = self.getUsersThatRatedItem(item_id_input)\n cdef int[:] items_rated_by_user\n\n cdef double rating_item_input, rating_item_second\n\n # Get users that rated the items\n for user_index in range(len(users_that_rated_item)):\n\n user_id = users_that_rated_item[user_index]\n rating_item_input = self.item_to_user_data[self.item_to_user_col_ptr[item_id_input]+user_index]\n\n # Get all items rated by that user\n items_rated_by_user = self.getItemsRatedByUser(user_id)\n\n for item_index in range(len(items_rated_by_user)):\n\n item_id_second = items_rated_by_user[item_index]\n\n # Do not compute the similarity on the diagonal\n if item_id_second != item_id_input:\n # Increment similairty\n rating_item_second = self.user_to_item_data[self.user_to_item_row_ptr[user_id]+item_index]\n\n result[item_id_second] += rating_item_input*rating_item_second\n\n return result\n\n\n def compute_similarity(self):\n\n cdef int itemIndex, innerItemIndex\n cdef long long topKItemIndex\n\n cdef long long[:] top_k_idx\n\n # Declare numpy data type to use vetor indexing and simplify the topK selection code\n cdef np.ndarray[long, ndim=1] top_k_partition, top_k_partition_sorting\n cdef np.ndarray[np.float64_t, ndim=1] this_item_weights_np\n\n #cdef long[:] top_k_idx\n cdef double[:] this_item_weights\n\n cdef long processedItems = 0\n\n # Data structure to incrementally build sparse matrix\n # Preinitialize max possible length\n cdef double[:] values = np.zeros((self.n_items*self.TopK))\n cdef int[:] rows = np.zeros((self.n_items*self.TopK,), dtype=np.int32)\n cdef int[:] cols = np.zeros((self.n_items*self.TopK,), dtype=np.int32)\n cdef long sparse_data_pointer = 0\n\n\n start_time = time.time()\n\n # Compute all similarities for each item\n for itemIndex in range(self.n_items):\n\n processedItems += 1\n\n if processedItems % 10000==0 or processedItems==self.n_items:\n\n itemPerSec = processedItems/(time.time()-start_time)\n\n print(\"Similarity item {} ( {:2.0f} % ), {:.2f} item/sec, required time {:.2f} min\".format(\n processedItems, processedItems*1.0/self.n_items*100, itemPerSec, (self.n_items-processedItems) / itemPerSec / 60))\n\n this_item_weights = self.computeItemSimilarities(itemIndex)\n\n if self.TopK == 0:\n\n for innerItemIndex in range(self.n_items):\n self.W_dense[innerItemIndex,itemIndex] = this_item_weights[innerItemIndex]\n\n else:\n\n # Sort indices and select TopK\n # Using 
numpy implies some overhead, unfortunately the plain C qsort function is even slower\n # top_k_idx = np.argsort(this_item_weights) [-self.TopK:]\n\n # Sorting is done in three steps. Faster then plain np.argsort for higher number of items\n # because we avoid sorting elements we already know we don't care about\n # - Partition the data to extract the set of TopK items, this set is unsorted\n # - Sort only the TopK items, discarding the rest\n # - Get the original item index\n\n this_item_weights_np = - np.array(this_item_weights)\n \n # Get the unordered set of topK items\n top_k_partition = np.argpartition(this_item_weights_np, self.TopK-1)[0:self.TopK]\n # Sort only the elements in the partition\n top_k_partition_sorting = np.argsort(this_item_weights_np[top_k_partition])\n # Get original index\n top_k_idx = top_k_partition[top_k_partition_sorting]\n\n\n\n # Incrementally build sparse matrix\n for innerItemIndex in range(len(top_k_idx)):\n\n topKItemIndex = top_k_idx[innerItemIndex]\n\n values[sparse_data_pointer] = this_item_weights[topKItemIndex]\n rows[sparse_data_pointer] = topKItemIndex\n cols[sparse_data_pointer] = itemIndex\n\n sparse_data_pointer += 1\n\n\n if self.TopK == 0:\n\n return np.array(self.W_dense)\n\n else:\n\n values = np.array(values[0:sparse_data_pointer])\n rows = np.array(rows[0:sparse_data_pointer])\n cols = np.array(cols[0:sparse_data_pointer])\n\n W_sparse = sps.csr_matrix((values, (rows, cols)),\n shape=(self.n_items, self.n_items),\n dtype=np.float32)\n\n return W_sparse\n\n\n", "_____no_output_____" ], [ "cosine_cython = Cosine_Similarity(URM_train, TopK=100)\n\nstart_time = time.time()\n\ncosine_cython.compute_similarity()\n\nprint(\"Similarity computed in {:.2f} seconds\".format(time.time()-start_time))", "Similarity item 10000 ( 15 % ), 722.73 item/sec, required time 1.27 min\nSimilarity item 20000 ( 31 % ), 1152.12 item/sec, required time 0.65 min\nSimilarity item 30000 ( 46 % ), 1413.59 item/sec, required time 0.41 min\nSimilarity item 40000 ( 61 % ), 1611.02 item/sec, required time 0.26 min\nSimilarity item 50000 ( 77 % ), 1761.78 item/sec, required time 0.14 min\nSimilarity item 60000 ( 92 % ), 1876.49 item/sec, required time 0.05 min\nSimilarity item 65134 ( 100 % ), 1929.34 item/sec, required time 0.00 min\nSimilarity computed in 33.94 seconds\n" ] ], [ [ "### Better... much better. There are a few other things you could do, but at this point it is not worth the effort", "_____no_output_____" ], [ "## How to use Cython outside a notebook\n\n### Step1: Create a .pyx file and write your code\n\n### Step2: Create a compilation script \"compileCython.py\" with the following content", "_____no_output_____" ] ], [ [ "# This code will not run in a notebook cell\n\ntry:\n from setuptools import setup\n from setuptools import Extension\nexcept ImportError:\n from distutils.core import setup\n from distutils.extension import Extension\n\n\nfrom Cython.Distutils import build_ext\nimport numpy\nimport sys\nimport re\n\n\nif len(sys.argv) != 4:\n raise ValueError(\"Wrong number of paramethers received. 
Expected 4, got {}\".format(sys.argv))\n\n\n# Get the name of the file to compile\nfileToCompile = sys.argv[1]\n\n# Remove the argument from sys argv in order for it to contain only what setup needs\ndel sys.argv[1]\n\nextensionName = re.sub(\"\\.pyx\", \"\", fileToCompile)\n\n\next_modules = Extension(extensionName,\n [fileToCompile],\n extra_compile_args=['-O3'],\n include_dirs=[numpy.get_include(),],\n )\n\nsetup(\n cmdclass={'build_ext': build_ext},\n ext_modules=[ext_modules]\n)\n", "_____no_output_____" ] ], [ [ "### Step3: Compile your code with the following command \n\npython compileCython.py Cosine_Similarity_Cython.pyx build_ext --inplace", "_____no_output_____" ], [ "### Step4: Generate cython report and look for \"yellow lines\". The report is an .html file which represents how many operations are necessary to translate each python operation in cython code. If a line is white, it has a direct C translation. If it is yellow it will require many indirect steps that will slow down execution. Some of those steps may be inevitable, some may be removed via static typing.\n\n### IMPORTANT: white does not mean fast!! If a system call is involved that part might be slow anyway.\n\ncython -a Cosine_Similarity_Cython.pyx", "_____no_output_____" ], [ "### Step5: Add static types and C functions to remove \"yellow\" lines.\n\n#### If you use a variable only as a C object, use primitive tipes \ncdef int namevar\n\ndef double namevar\n\ncdef float namevar\n\n#### If you call a function only within C code, use a specific declaration \"cdef\"\n\ncdef function_name(self, int param1, double param2):\n...\n\n", "_____no_output_____" ], [ "## Step6: Iterate step 4 and 5 until you are satisfied with how clean your code is, then compile. An example of non optimized code can be found in the source folder of this notebook with the _SLOW suffix\n\n## Step7: the compilation generates a file wose name is something like \"Cosine_Similarity_Cython.cpython-36m-x86_64-linux-gnu.so\" and tells you the source file, the architecture it is compiled for and the OS\n\n## Step8: Import and use the compiled file as if it were a python class", "_____no_output_____" ] ], [ [ "from Base.Simialrity.Cython.Cosine_Similarity_Cython import Cosine_Similarity\n\ncosine_cython = Cosine_Similarity(URM_train, TopK=100)\n\nstart_time = time.time()\n\ncosine_cython.compute_similarity()\n\nprint(\"Similarity computed in {:.2f} seconds\".format(time.time()-start_time))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ] ]
d022595d6a51a74ac9d53ea19e3e3063161a32ad
669,918
ipynb
Jupyter Notebook
15_PDEs/15_PDEs.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2018
635a6678569406e11865c8a583a56f4a3cf2bdc4
[ "CC-BY-4.0" ]
null
null
null
15_PDEs/15_PDEs.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2018
635a6678569406e11865c8a583a56f4a3cf2bdc4
[ "CC-BY-4.0" ]
null
null
null
15_PDEs/15_PDEs.ipynb
ASU-CompMethodsPhysics-PHY494/PHY494-resources-2018
635a6678569406e11865c8a583a56f4a3cf2bdc4
[ "CC-BY-4.0" ]
null
null
null
656.139079
181,332
0.935895
[ [ [ "# 15 PDEs: Solution with Time Stepping\n\n## Heat Equation\nThe **heat equation** can be derived from Fourier's law and energy conservation (see the [lecture notes on the heat equation (PDF)](https://github.com/ASU-CompMethodsPhysics-PHY494/PHY494-resources/blob/master/15_PDEs/15_PDEs_LectureNotes_HeatEquation.pdf))\n\n$$\n\\frac{\\partial T(\\mathbf{x}, t)}{\\partial t} = \\frac{K}{C\\rho} \\nabla^2 T(\\mathbf{x}, t),\n$$", "_____no_output_____" ], [ "## Problem: insulated metal bar (1D heat equation)\nA metal bar of length $L$ is insulated along it lengths and held at 0ºC at its ends. Initially, the whole bar is at 100ºC. Calculate $T(x, t)$ for $t>0$.", "_____no_output_____" ], [ "### Analytic solution\nSolve by separation of variables and power series: The general solution that obeys the boundary conditions $T(0, t) = T(L, t) = 0$ is\n\n$$\nT(x, t) = \\sum_{n=1}^{+\\infty} A_n \\sin(k_n x)\\, \\exp\\left(-\\frac{k_n^2 K t}{C\\rho}\\right), \\quad k_n = \\frac{n\\pi}{L}\n$$", "_____no_output_____" ], [ "The specific solution that satisfies $T(x, 0) = T_0 = 100^\\circ\\text{C}$ leads to $A_n = 4 T_0/n\\pi$ for $n$ odd:\n\n$$\nT(x, t) = \\sum_{n=1,3,5,\\dots}^{+\\infty} \\frac{4 T_0}{n \\pi} \\sin(k_n x)\\, \\exp\\left(-\\frac{k_n^2 K t}{C\\rho}\\right)\n$$", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.style.use('ggplot')", "_____no_output_____" ], [ "def T_bar(x, t, T0, L, K=237, C=900, rho=2700, nmax=1000):\n T = np.zeros_like(x)\n eta = K / (C*rho)\n for n in range(1, nmax, 2):\n kn = n*np.pi/L\n T += 4*T0/(np.pi * n) * np.sin(kn*x) * np.exp(-kn*kn * eta * t)\n return T", "_____no_output_____" ], [ "T0 = 100.\nL = 1.0\nX = np.linspace(0, L, 100)\nfor t in np.linspace(0, 3000, 50):\n plt.plot(X, T_bar(X, t, T0, L))\nplt.xlabel(r\"$x$ (m)\")\nplt.ylabel(r\"$T$ ($^\\circ$C)\");", "_____no_output_____" ] ], [ [ "### Numerical solution: Leap frog\nDiscretize (finite difference):\n\nFor the time domain we only have the initial values so we use a simple forward difference for the time derivative:\n\n$$\n\\frac{\\partial T(x,t)}{\\partial t} \\approx \\frac{T(x, t+\\Delta t) - T(x, t)}{\\Delta t}\n$$", "_____no_output_____" ], [ "For the spatial derivative we have initially all values so we can use the more accurate central difference approximation:\n\n$$\n\\frac{\\partial^2 T(x, t)}{\\partial x^2} \\approx \\frac{T(x+\\Delta x, t) + T(x-\\Delta x, t) - 2 T(x, t)}{\\Delta x^2}\n$$", "_____no_output_____" ], [ "Thus, the heat equation can be written as the finite difference equation\n\n$$\n\\frac{T(x, t+\\Delta t) - T(x, t)}{\\Delta t} = \\frac{K}{C\\rho} \\frac{T(x+\\Delta x, t) + T(x-\\Delta x, t) - 2 T(x, t)}{\\Delta x^2}\n$$", "_____no_output_____" ], [ "which can be reordered so that the RHS contains only known terms and the LHS future terms. 
Index $i$ is the spatial index, and $j$ the time index: $x = x_0 + i \\Delta x$, $t = t_0 + j \\Delta t$.\n\n$$\nT_{i, j+1} = (1 - 2\\eta) T_{i,j} + \\eta(T_{i+1,j} + T_{i-1, j}), \\quad \\eta := \\frac{K \\Delta t}{C \\rho \\Delta x^2}\n$$\n\nThus we can step forward in time (\"leap frog\"), using only known values.", "_____no_output_____" ], [ "### Solve the 1D heat equation numerically for an iron bar\n* $K = 237$ W/mK\n* $C = 900$ J/K\n* $\\rho = 2700$ kg/m<sup>3</sup>\n* $L = 1$ m\n* $T_0 = 373$ K and $T_b = 273$ K\n* $T(x, 0) = T_0$ and $T(0, t) = T(L, t) = T_b$", "_____no_output_____" ], [ "#### Key considerations ", "_____no_output_____" ], [ "The key line is the computation of the new temperature field at time step $j+1$ from the temperature distribution at time step $j$. It can be written purely with numpy array operations (see last lecture!):\n\n```python\nT[1:-1] = (1 - 2*eta) * T[1:-1] + eta * (T[2:] + T[:-2])\n```\n\nNote that the range operator `T[start:end]` *excludes* `end`, so in order to include `T[1], T[2], ..., T[-2]` (but not the rightmost `T[-1]`) we have to use `T[1:-1]`.", "_____no_output_____" ], [ "The *boundary conditions* are fixed for all times:\n```python\nT[0] = T[-1] = Tb\n```\n\nThe *initial conditions* (at time step `j=0`)\n```python\nT[1:-1] = T0\n```\nare only used to compute the distribution of temperatures at the next step `j=1`.", "_____no_output_____" ], [ "#### Solution", "_____no_output_____" ] ], [ [ "import numpy as np\n\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n%matplotlib notebook", "_____no_output_____" ] ], [ [ "For HTML/nbviewer output, use inline:", "_____no_output_____" ] ], [ [ "%matplotlib inline", "_____no_output_____" ], [ "L_rod = 1. # m\nt_max = 3000. # s\n\nDx = 0.02 # m\nDt = 2 # s\n\nNx = int(L_rod // Dx)\nNt = int(t_max // Dt)\n\nKappa = 237 # W/(m K)\nCHeat = 900 # J/K\nrho = 2700 # kg/m^3\n\nT0 = 373 # K\nTb = 273 # K\n\neta = Kappa * Dt / (CHeat * rho * Dx**2)\neta2 = 1 - 2*eta\n\nstep = 20 # plot solution every n steps\n\nprint(\"Nx = {0}, Nt = {1}\".format(Nx, Nt))\nprint(\"eta = {0}\".format(eta))\n\nT = np.zeros(Nx)\nT_plot = np.zeros((Nt//step + 1, Nx))\n\n# initial conditions\nT[1:-1] = T0\n# boundary conditions\nT[0] = T[-1] = Tb\n\nt_index = 0\nT_plot[t_index, :] = T\nfor jt in range(1, Nt):\n T[1:-1] = eta2 * T[1:-1] + eta*(T[2:] + T[:-2])\n if jt % step == 0 or jt == Nt-1:\n t_index += 1\n T_plot[t_index, :] = T\n print(\"Iteration {0:5d}\".format(jt), end=\"\\r\")\nelse:\n print(\"Completed {0:5d} iterations: t={1} s\".format(jt, jt*Dt))", "Nx = 49, Nt = 1500\neta = 0.4876543209876543\nIteration 20\rIteration 40\rIteration 60\rIteration 80\rIteration 100\rIteration 120\rIteration 140\rIteration 160\rIteration 180\rIteration 200\rIteration 220\rIteration 240\rIteration 260\rIteration 280\rIteration 300\rIteration 320\rIteration 340\rIteration 360\rIteration 380\rIteration 400\rIteration 420\rIteration 440\rIteration 460\rIteration 480\rIteration 500\rIteration 520\rIteration 540\rIteration 560\rIteration 580\rIteration 600\rIteration 620\rIteration 640\rIteration 660\rIteration 680\rIteration 700\rIteration 720\rIteration 740\rIteration 760\rIteration 780\rIteration 800\rIteration 820\rIteration 840\rIteration 860\rIteration 880\rIteration 900\rIteration 920\rIteration 940\rIteration 960\rIteration 980\rIteration 1000\rIteration 1020\rIteration 1040\rIteration 1060\rIteration 1080\rIteration 1100\rIteration 1120\rIteration 1140\rIteration 1160\rIteration 1180\rIteration 
1200\rIteration 1220\rIteration 1240\rIteration 1260\rIteration 1280\rIteration 1300\rIteration 1320\rIteration 1340\rIteration 1360\rIteration 1380\rIteration 1400\rIteration 1420\rIteration 1440\rIteration 1460\rIteration 1480\rIteration 1499\rCompleted 1499 iterations: t=2998 s\n" ] ], [ [ "#### Visualization\nVisualize (you can use the code as is). \n\nNote how we are making the plot use proper units by mutiplying with `Dt * step` and `Dx`.", "_____no_output_____" ] ], [ [ "X, Y = np.meshgrid(range(T_plot.shape[0]), range(T_plot.shape[1]))\nZ = T_plot[X, Y]\nfig = plt.figure()\nax = fig.add_subplot(111, projection=\"3d\")\nax.plot_wireframe(X*Dt*step, Y*Dx, Z)\nax.set_xlabel(r\"time $t$ (s)\")\nax.set_ylabel(r\"position $x$ (m)\")\nax.set_zlabel(r\"temperature $T$ (K)\")\nfig.tight_layout()", "_____no_output_____" ] ], [ [ "2D as above for the analytical solution…", "_____no_output_____" ] ], [ [ "X = Dx * np.arange(T_plot.shape[1])\nplt.plot(X, T_plot.T)\nplt.xlabel(r\"$x$ (m)\")\nplt.ylabel(r\"$T$ (K)\");", "_____no_output_____" ] ], [ [ "#### Slower solution ", "_____no_output_____" ], [ "I benchmarked this slow solution at 89.7 ms and the fast solution at 14.8 ms (commented out all `print`) so the explicit loop is not that much worse (probably because the overhead on array copying etc is high).", "_____no_output_____" ] ], [ [ "L_rod = 1. # m\nt_max = 3000. # s\n\nDx = 0.02 # m\nDt = 2 # s\n\nNx = int(L_rod // Dx)\nNt = int(t_max // Dt)\n\nKappa = 237 # W/(m K)\nCHeat = 900 # J/K\nrho = 2700 # kg/m^3\n\nT0 = 373 # K\nTb = 273 # K\n\neta = Kappa * Dt / (CHeat * rho * Dx**2)\neta2 = 1 - 2*eta\n\nstep = 20 # plot solution every n steps\n\nprint(\"Nx = {0}, Nt = {1}\".format(Nx, Nt))\nprint(\"eta = {0}\".format(eta))\n\nT = np.zeros(Nx)\nT_new = np.zeros_like(T)\nT_plot = np.zeros((int(np.ceil(Nt/step)) + 1, Nx))\n\n# initial conditions\nT[1:-1] = T0\n# boundary conditions\nT[0] = T[-1] = Tb\n\nT_new[:] = T\n\nt_index = 0\nT_plot[t_index, :] = T\nfor jt in range(1, Nt):\n # T[1:-1] = eta2 * T[1:-1] + eta*(T[2:] + T[:-2])\n for ix in range(1, Nx-1):\n T_new[ix] = eta2 * T[ix] + eta*(T[ix+1] + T[ix-1])\n T[:] = T_new\n if jt % step == 0 or jt == Nt-1:\n t_index += 1\n T_plot[t_index, :] = T\n print(\"Iteration {0:5d}\".format(jt), end=\"\\r\")\nelse:\n print(\"Completed {0:5d} iterations: t={1} s\".format(jt, jt*Dt))", "Nx = 49, Nt = 1500\neta = 0.4876543209876543\nIteration 20\rIteration 40\rIteration 60\rIteration 80\rIteration 100\rIteration 120\rIteration 140\rIteration 160\rIteration 180\rIteration 200\rIteration 220\rIteration 240\rIteration 260\rIteration 280\rIteration 300\rIteration 320\rIteration 340\rIteration 360\rIteration 380\rIteration 400\rIteration 420\rIteration 440\rIteration 460\rIteration 480\rIteration 500\rIteration 520\rIteration 540\rIteration 560\rIteration 580\rIteration 600\rIteration 620\rIteration 640\rIteration 660\rIteration 680\rIteration 700\rIteration 720\rIteration 740\rIteration 760\rIteration 780\rIteration 800\rIteration 820\rIteration 840\rIteration 860\rIteration 880\rIteration 900\rIteration 920\rIteration 940\rIteration 960\rIteration 980\rIteration 1000\rIteration 1020\rIteration 1040\rIteration 1060\rIteration 1080\rIteration 1100\rIteration 1120\rIteration 1140\rIteration 1160\rIteration 1180\rIteration 1200\rIteration 1220\rIteration 1240\rIteration 1260\rIteration 1280\rIteration 1300\rIteration 1320\rIteration 1340\rIteration 1360\rIteration 1380\rIteration 1400\rIteration 1420\rIteration 1440\rIteration 1460\rIteration 1480\rIteration 
1499\rCompleted 1499 iterations: t=2998 s\n" ], [ "X, Y = np.meshgrid(range(T_plot.shape[0]), range(T_plot.shape[1]))\nZ = T_plot[X, Y]\nfig = plt.figure()\nax = fig.add_subplot(111, projection=\"3d\")\nax.plot_wireframe(X*Dt*step, Y*Dx, Z)\nax.set_xlabel(r\"time $t$ (s)\")\nax.set_ylabel(r\"position $x$ (m)\")\nax.set_zlabel(r\"temperature $T$ (K)\")\nfig.tight_layout()", "_____no_output_____" ] ], [ [ "## Stability of the solution\n\n### Empirical investigation of the stability\nInvestigate the solution for different values of `Dt` and `Dx`. Can you discern patters for stable/unstable solutions?\n\nReport `Dt`, `Dx`, and `eta`\n* for 3 stable solutions \n* for 3 unstable solutions\n", "_____no_output_____" ] ], [ [ "def calculate_T(L_rod=1, t_max=3000, Dx=0.02, Dt=2, T0=373, Tb=273,\n step=20):\n Nx = int(L_rod // Dx)\n Nt = int(t_max // Dt)\n\n Kappa = 237 # W/(m K)\n CHeat = 900 # J/K\n rho = 2700 # kg/m^3\n\n eta = Kappa * Dt / (CHeat * rho * Dx**2)\n eta2 = 1 - 2*eta\n\n print(\"Nx = {0}, Nt = {1}\".format(Nx, Nt))\n print(\"eta = {0}\".format(eta))\n\n T = np.zeros(Nx)\n T_plot = np.zeros((int(np.ceil(Nt/step)) + 1, Nx))\n\n # initial conditions\n T[1:-1] = T0\n # boundary conditions\n T[0] = T[-1] = Tb\n\n t_index = 0\n T_plot[t_index, :] = T\n for jt in range(1, Nt):\n T[1:-1] = eta2 * T[1:-1] + eta*(T[2:] + T[:-2])\n if jt % step == 0 or jt == Nt-1:\n t_index += 1\n T_plot[t_index, :] = T\n print(\"Iteration {0:5d}\".format(jt), end=\"\\r\")\n else:\n print(\"Completed {0:5d} iterations: t={1} s\".format(jt, jt*Dt))\n return T_plot\n\ndef plot_T(T_plot, Dx, Dt, step):\n X, Y = np.meshgrid(range(T_plot.shape[0]), range(T_plot.shape[1]))\n Z = T_plot[X, Y]\n fig = plt.figure()\n ax = fig.add_subplot(111, projection=\"3d\")\n ax.plot_wireframe(X*Dt*step, Y*Dx, Z)\n ax.set_xlabel(r\"time $t$ (s)\")\n ax.set_ylabel(r\"position $x$ (m)\")\n ax.set_zlabel(r\"temperature $T$ (K)\")\n fig.tight_layout()\n return ax", "_____no_output_____" ], [ "T_plot = calculate_T(Dx=0.01, Dt=2, step=20)\nplot_T(T_plot, 0.01, 2, 20)", "Nx = 99, Nt = 1500\neta = 1.9506172839506173\nIteration 20\rIteration 40\rIteration 60\rIteration 80\rIteration 100\rIteration 120\rIteration 140\rIteration 160\rIteration 180\rIteration 200\rIteration 220\rIteration 240\rIteration 260\rIteration 280\rIteration 300\rIteration 320\rIteration 340\rIteration 360\rIteration 380\rIteration 400\rIteration 420\rIteration 440\rIteration 460\rIteration 480\rIteration 500\rIteration 520\rIteration 540\rIteration 560\rIteration 580\rIteration 600\rIteration 620\rIteration 640\rIteration 660\rIteration 680\rIteration 700\rIteration 720\rIteration 740\rIteration 760\rIteration 780\rIteration 800\rIteration 820\rIteration 840\rIteration 860\rIteration 880\rIteration 900\rIteration 920\rIteration 940\rIteration 960\rIteration 980\rIteration 1000\rIteration 1020\rIteration 1040\rIteration 1060\rIteration 1080\rIteration 1100\rIteration 1120\rIteration 1140\rIteration 1160\rIteration 1180\rIteration 1200\rIteration 1220\rIteration 1240\rIteration 1260\rIteration 1280\rIteration 1300\rIteration 1320\rIteration 1340\rIteration 1360\rIteration 1380\rIteration 1400\rIteration 1420\rIteration 1440\rIteration 1460\rIteration 1480\rIteration 1499\rCompleted 1499 iterations: t=2998 s\n" ] ], [ [ "Note that *decreasing* the value of $\\Delta x$ made the solution *unstable*. 
This is strange, we have gotten used to the idea that working on a finer mesh will increase the detail (until we hit round-off error) and just become computationally more expensive. But here the algorithm suddenly becomes unstable (and it is not just round-off).", "_____no_output_____" ], [ "For certain combination of values of $\\Delta t$ and $\\Delta x$ the solution become unstable. Empirically, bigger $\\eta$ leads to instability. (In fact, $\\eta \\geq \\frac{1}{2}$ is unstable for the leapfrog algorithm as we will see.)", "_____no_output_____" ], [ "### Von Neumann stability analysis ", "_____no_output_____" ], [ "If the difference equation solution diverges then we *know* that we have a bad approximation to the original PDE. ", "_____no_output_____" ], [ "Von Neumann stability analysis starts from the assumption that *eigenmodes* of the difference equation can be written as\n\n$$\nT_{m,j} = \\xi(k)^j e^{ikm\\Delta x}, \\quad t=j\\Delta t,\\ x=m\\Delta x \n$$\n\nwith the unknown wave vectors $k=2\\pi/\\lambda$ and unknown complex functions – the *amplification factors* – $\\xi(k)$.", "_____no_output_____" ], [ "Solutions of the difference equation can be written as linear superpositions of these basis functions. But they are only stable if the eigenmodes are stable, i.e., will not grow in time (with $j$). This is the case when \n\n$$\n|\\xi(k)| < 1\n$$\n\nfor all $k$.", "_____no_output_____" ], [ "Insert the eigenmodes into the finite difference equation\n\n$$\nT_{m, j+1} = (1 - 2\\eta) T_{m,j} + \\eta(T_{m+1,j} + T_{m-1, j})\n$$\n\nto obtain \n\n\\begin{align}\n\\xi(k)^{j+1} e^{ikm\\Delta x} &= (1 - 2\\eta) \\xi(k)^{j} e^{ikm\\Delta x} \n + \\eta(\\xi(k)^{j} e^{ik(m+1)\\Delta x} + \\xi(k)^{j} e^{ik(m-1)\\Delta x})\\\\\n\\xi(k) &= (1 - 2\\eta) + \\eta(e^{ik\\Delta x} + e^{-ik\\Delta x})\\\\\n\\xi(k) &= 1 - 2\\eta + 2\\eta \\cos k\\Delta x\\\\\n\\xi(k) &= 1 + 2\\eta\\big(\\cos k\\Delta x - 1\\big)\n\\end{align}", "_____no_output_____" ], [ "For $|\\xi(k)| < 1$ (and all possible $k$):\n\n\\begin{align}\n|\\xi(k)| < 1 \\quad &\\Leftrightarrow \\quad \\xi^2(k) < 1\\\\\n(1 + 2y)^2 = 1 + 4y + 4y^2 &< 1 \\quad \\text{with}\\ \\ y = \\eta(\\cos k\\Delta x - 1)\\\\\ny(1 + y) &< 0 \\quad \\Leftrightarrow \\quad -1 < y < 0\\\\\n\\eta(\\cos k\\Delta x - 1) &\\leq 0 \\quad \\forall k \\quad (\\eta > 0, -1 \\leq \\cos x \\leq 1)\\\\\n\\eta(\\cos k\\Delta x - 1) &> -1\\\\\n\\eta &< \\frac{1}{1 - \\cos k\\Delta x}\\\\\n\\eta = \\frac{K \\Delta t}{C \\rho \\Delta x^2} &< \\frac{1}{2} \\le \\frac{1}{1 - \\cos k\\Delta x}\n\\end{align}", "_____no_output_____" ], [ "Thus, solutions are only stable for $\\eta < 1/2$. In particular, decreasing $\\Delta t$ will always improve stability, But decreasing $\\Delta x$ requires an quadratic *increase* in $\\Delta t$!", "_____no_output_____" ], [ "Note\n* Perform von Neumann stability analysis when possible (depends on PDE and the specific discretization).\n* Test different combinations of $\\Delta t$ and $\\Delta x$.\n* Not guarantee that decreasing both will lead to more stable solutions!", "_____no_output_____" ], [ "Check my inputs:\n\nThis was stable and it conforms to the stability criterion:", "_____no_output_____" ] ], [ [ "Dt = 2\nDx = 0.02\neta = Kappa * Dt /(CHeat * rho * Dx*Dx)\nprint(eta)", "0.4876543209876543\n" ] ], [ [ "... and this was unstable, despite a seemingly small change:", "_____no_output_____" ] ], [ [ "Dt = 2\nDx = 0.01\neta = Kappa * Dt /(CHeat * rho * Dx*Dx)\nprint(eta)", "1.9506172839506173\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d0225c9bb0eb873370c1c779409d7026474ae20f
47,292
ipynb
Jupyter Notebook
ML_course/ML_Contest_train.ipynb
Riwedieb/handson-ml
76ffe3b41732c76e3487aaabe38719075cd712d1
[ "Apache-2.0" ]
null
null
null
ML_course/ML_Contest_train.ipynb
Riwedieb/handson-ml
76ffe3b41732c76e3487aaabe38719075cd712d1
[ "Apache-2.0" ]
null
null
null
ML_course/ML_Contest_train.ipynb
Riwedieb/handson-ml
76ffe3b41732c76e3487aaabe38719075cd712d1
[ "Apache-2.0" ]
null
null
null
69.855244
1,626
0.628817
[ [ [ "# Build a sklearn Pipeline for a to ML contest submission\nIn the ML_coruse_train notebook we at first analyzed the housing dataset to gain statistical insights and then e.g. features added new, \nreplaced missing values and scaled the colums using pandas dataset methods.\nIn the following we will use sklearn [Pipelines](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) to integrate all these steps into one final *estimator*. The resulting pipeline can be used for saving an ML estimator to a file and use it later for production.\n\n*Optional:*\nIf you want, you can save your estimator as explained in the last cell at the bottom of this notebook.\nBased on a hidden dataset, it's performance will then be ranked against all other submissions.", "_____no_output_____" ] ], [ [ "# read housing data again\nimport pandas as pd\nimport numpy as np \nhousing = pd.read_csv(\"datasets/housing/housing.csv\")\n\n# Try to get header information of the dataframe:\nhousing.head()", "_____no_output_____" ] ], [ [ "One remark: sklearn transformers do **not** act on pandas dataframes. Instead, they use numpy arrays. \nNow try to [convert](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_numpy.html) a dataframe to a numpy array:", "_____no_output_____" ] ], [ [ "housing.head().to_numpy()", "_____no_output_____" ] ], [ [ "As you can see, the column names are lost now.\nIn a numpy array, columns indexed using integers and no more by their names. ", "_____no_output_____" ], [ "### Add extra feature columns\nAt first, we again add some extra columns (e.g. `rooms_per_household, population_per_household, bedrooms_per_household`) which might correlate better with the predicted parameter `median_house_value`.\nFor modifying the dataset, we now use a [FunctionTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.FunctionTransformer.html), which we later can put into a pipeline. 
\nHints: \n* For finding the index number of a given column name, you can use the method [get_loc()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.get_loc.html)\n* For concatenating the new columns with the given array, you can use numpy method [c_](https://docs.scipy.org/doc/numpy/reference/generated/numpy.c_.html)", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import FunctionTransformer\n\n# At first, get the indexes as integers from the column names:\nrooms_ix = housing.columns.get_loc(\"total_rooms\")\nbedrooms_ix = \npopulation_ix = \nhousehold_ix = \n\n# Now implement a function which takes a numpy array a argument and adds the new feature columns\ndef add_extra_features(X):\n rooms_per_household = X[:, rooms_ix] / X[:, household_ix]\n population_per_household = \n bedrooms_per_household = \n \n # Concatenate the original array X with the new columns\n return \n\nattr_adder = FunctionTransformer(add_extra_features, validate = False)\nhousing_extra_attribs = attr_adder.fit_transform(housing.values)\n\nassert housing_extra_attribs.shape == (17999, 13)\nhousing_extra_attribs ", "_____no_output_____" ] ], [ [ "### Imputing missing elements\nFor replacing nan values in the dataset with the mean or median of the column they are in, you can also use a [SimpleImputer](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html) : ", "_____no_output_____" ] ], [ [ "from sklearn.impute import SimpleImputer \n\n# Drop the categorial column ocean_proximity\nhousing_num = housing.drop(...)\n\nprint(\"We have %d nan elements in the numerical columns\" %np.count_nonzero(np.isnan(housing_num.to_numpy())))\n\nimp_mean = ...\nhousing_num_cleaned = imp_mean.fit_transform(housing_num)\n\nassert np.count_nonzero(np.isnan(housing_num_cleaned)) == 0\nhousing_num_cleaned[1,:]", "_____no_output_____" ] ], [ [ "### Column scaling\nFor scaling and normalizing the columns, you can use the class [StandardScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)\nUse numpy [mean](https://docs.scipy.org/doc/numpy/reference/generated/numpy.mean.html) and [std](https://docs.scipy.org/doc/numpy/reference/generated/numpy.std.html) to calculate the mean and standard deviation of each column (Hint: columns are axis = 0! 
) after scaling.", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import StandardScaler\n\nscaler = ...\nscaled = scaler.fit_transform(housing_num_cleaned)\nprint(\"mean of the columns is: \" , ...)\nprint(\"standard deviation of the columns is: \" , ...)", "_____no_output_____" ] ], [ [ "### Putting all preprocessing steps together \nNow let's build a pipeline for preprocessing the **numerical** attributes.\nThe pipeline shall process the data in the following steps:\n* [Impute](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html) median or mean values for elements which are NaN\n* Add attributes using the FunctionTransformer with the function add_extra_features().\n* Scale the numerical values using the [StandardScaler()](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)", "_____no_output_____" ] ], [ [ "from sklearn.pipeline import Pipeline\n\nnum_pipeline = Pipeline([\n ('give a name', ...), # Imputer\n ('give a name', ...), # FunctionTransformer\n ('give a name', ...), # Scaler\n ])\n\n# Now test the pipeline on housing_num\nnum_pipeline.fit_transform(housing_num)", "_____no_output_____" ] ], [ [ "Now we have a pipeline for the numerical columns. \nBut we still have a categorical column:", "_____no_output_____" ] ], [ [ "housing['ocean_proximity'].head()", "_____no_output_____" ] ], [ [ "We need one more pipeline for the categorical column. Instead of the \"Dummy encoding\" we used before, we now use the [OneHotEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) from sklearn. \nHint: to make things easier, set the sparse option of the OneHotEncoder to False.", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import OneHotEncoder\nhousing_cat = housing[] #get the right column\ncat_encoder = \nhousing_cat_1hot = cat_encoder.fit_transform(housing_cat)\nhousing_cat_1hot", "_____no_output_____" ] ], [ [ "We have everything we need for building a preprocessing pipeline which transforms the columns including all the steps before. \nSince we have columns where different transformations should be applied, we use the class [ColumnTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html)", "_____no_output_____" ] ], [ [ "from sklearn.compose import ColumnTransformer\n\n# These are the columns with the numerical features:\nnum_attribs = [\"longitude\", ...]\n\n# Here are the columns with categorical features:\ncat_attribs = [...]\n\nfull_prep_pipeline = ColumnTransformer([\n (\"give a name\", ..., ...), # Add the numerical pipeline and specify the columns it should work on\n (\"give a name\", ..., ...), # Add a OneHotEncoder and specify the columns it should work on\n ])\n\nfull_prep_pipeline.fit_transform(housing)", "_____no_output_____" ] ], [ [ "### Train an estimator\nInclude `full_prep_pipeline` into a further pipeline where it is followed by an RandomForestRegressor. 
\nThis way, at first our data is prepared using `full_prep_pipeline` and then the RandomForestRegressor is trained on it.", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import RandomForestRegressor\nfrom sklearn.model_selection import train_test_split\n\nfull_pipeline_with_predictor = Pipeline([\n (\"give a name\", full_prep_pipeline), # add the full_prep_pipeline\n (\"give a name\", RandomForestRegressor()) # Add a RandomForestRegressor \n ])", "_____no_output_____" ] ], [ [ "For training the regressor, seperate the label colum (`median_house_value`) and feature columns (all other columns).\nSplit the data into a training and testing dataset using train_test_split.", "_____no_output_____" ] ], [ [ "# Create two dataframes, one for the labels one for the features\nhousing_features = housing...\nhousing_labels = housing\n\n# Split the two dataframes into a training and a test dataset\nX_train, X_test, y_train, y_test = train_test_split(housing_features, housing_labels, test_size = 0.20)\n\n# Now train the full_pipeline_with_predictor on the training dataset\nfull_pipeline_with_predictor.fit(X_train, y_train)", "_____no_output_____" ] ], [ [ "As usual, calculate some score metrics:", "_____no_output_____" ] ], [ [ "from sklearn.metrics import mean_squared_error\n\ny_pred = full_pipeline_with_predictor.predict(X_test)\ntree_mse = mean_squared_error(y_pred, y_test)\ntree_rmse = np.sqrt(tree_mse)\ntree_rmse", "_____no_output_____" ], [ "from sklearn.metrics import r2_score\n\nr2_score(y_pred, y_test)", "_____no_output_____" ] ], [ [ "Use the [pickle serializer](https://docs.python.org/3/library/pickle.html) to save your estimator to a file for contest participation.", "_____no_output_____" ] ], [ [ "import pickle\nimport getpass\nfrom sklearn.utils.validation import check_is_fitted\n\nyour_regressor = ... # Put your regression pipeline here\nassert isinstance(your_regressor, Pipeline)\npickle.dump(your_regressor, open(getpass.getuser() + \"s_model.p\", \"wb\" ) )", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
d0225db7b41c0f2d8fdb96885904c07732daeb9b
8,111
ipynb
Jupyter Notebook
examples/direct_fidelity_estimation.ipynb
mganahl/Cirq
f2bf60f31ad247a68589d7c29263a6765fc3f791
[ "Apache-2.0" ]
3,326
2018-07-18T23:17:21.000Z
2022-03-29T22:28:24.000Z
examples/direct_fidelity_estimation.ipynb
mganahl/Cirq
f2bf60f31ad247a68589d7c29263a6765fc3f791
[ "Apache-2.0" ]
3,443
2018-07-18T21:07:28.000Z
2022-03-31T20:23:21.000Z
examples/direct_fidelity_estimation.ipynb
mganahl/Cirq
f2bf60f31ad247a68589d7c29263a6765fc3f791
[ "Apache-2.0" ]
865
2018-07-18T23:30:24.000Z
2022-03-30T11:43:23.000Z
41.172589
515
0.603748
[ [ [ "\n\n# Running the Direct Fidelity Estimation (DFE) algorithm\nThis example walks through the steps of running the direct fidelity estimation (DFE) algorithm as described in these two papers:\n\n* Direct Fidelity Estimation from Few Pauli Measurements (https://arxiv.org/abs/1104.4695)\n* Practical characterization of quantum devices without tomography (https://arxiv.org/abs/1104.3835)\n\nOptimizations for Clifford circuits are based on a tableau-based simulator:\n* Improved Simulation of Stabilizer Circuits (https://arxiv.org/pdf/quant-ph/0406196.pdf)", "_____no_output_____" ] ], [ [ "try:\n import cirq\nexcept ImportError:\n print(\"installing cirq...\")\n !pip install --quiet cirq\n print(\"installed cirq.\")", "_____no_output_____" ], [ "# Import Cirq, DFE, and create a circuit\nimport cirq\nfrom cirq.contrib.svg import SVGCircuit\nimport examples.direct_fidelity_estimation as dfe\n\nqubits = cirq.LineQubit.range(3)\ncircuit = cirq.Circuit(cirq.CNOT(qubits[0], qubits[2]),\n cirq.Z(qubits[0]),\n cirq.H(qubits[2]),\n cirq.CNOT(qubits[2], qubits[1]))\n\nSVGCircuit(circuit)", "_____no_output_____" ], [ "# We then create a sampler. For this example, we use a simulator but the code can accept a hardware sampler.\nnoise = cirq.ConstantQubitNoiseModel(cirq.depolarize(0.1))\nsampler = cirq.DensityMatrixSimulator(noise=noise)", "_____no_output_____" ], [ "# We run the DFE:\nestimated_fidelity, intermediate_results = dfe.direct_fidelity_estimation(\n circuit,\n qubits,\n sampler,\n n_measured_operators=None, # None=returns all the Pauli strings\n samples_per_term=0) # 0=use dense matrix simulator\n\nprint('Estimated fidelity: %.2f' % (estimated_fidelity))", "_____no_output_____" ] ], [ [ "# What is happening under the hood?\nNow, let's look at the `intermediate_results` and correlate what is happening in the code with the papers. The definition of fidelity is:\n$$\nF = F(\\hat{\\rho},\\hat{\\sigma}) = \\mathrm{Tr} \\left(\\hat{\\rho} \\hat{\\sigma}\\right)\n$$\nwhere $\\hat{\\rho}$ is the theoretical pure state and $\\hat{\\sigma}$ is the actual state. The idea of DFE is to write fidelity as:\n$$F= \\sum _i \\frac{\\rho _i \\sigma _i}{d}$$\n\nwhere $d=4^{\\mathit{number-of-qubits}}$, $\\rho _i = \\mathrm{Tr} \\left( \\hat{\\rho} P_i \\right)$, and $\\sigma _i = \\mathrm{Tr} \\left(\\hat{\\sigma} P_i \\right)$. Each of the $P_i$ is a Pauli operator. We can then finally rewrite the fidelity as:\n\n$$F= \\sum _i Pr(i) \\frac{\\sigma _i}{\\rho_i}$$\n\nwith $Pr(i) = \\frac{\\rho_i ^2}{d}$, which is a probability-like set of numbers (between 0.0 and 1.0 and they add up to 1.0).\n\nOne important question is how do we choose these Pauli operators $P_i$? It depends on whether the circuit is Clifford or not. In case it is, we know that there are \"only\" $2^{\\mathit{number-of-qubits}}$ operators for which $Pr(i)$ is non-zero. In fact, we know that they are all equiprobable with $Pr(i) = \\frac{1}{2^{\\mathit{number-of-qubits}}}$. The code does detect the Cliffordness automatically and switches to this mode. In case the circuit is not Clifford, the code just uses all the operators.\n\nLet's inspect that in the case of our example, we do see the Pauli operators with equiprobability (i.e. the $\\rho_i$):\n", "_____no_output_____" ] ], [ [ "for pauli_trace in intermediate_results.pauli_traces:\n print('Probability %.3f\\tPauli: %s' % (pauli_trace.Pr_i, pauli_trace.P_i))", "_____no_output_____" ] ], [ [ "Yay! We do see 8 entries (we have 3 qubits) with all the same 1/8 probability. 
What if we had a 23 qubit circuit? In this case, that would be quite many of them. That is where the parameter `n_measured_operators` becomes useful. If it is set to `None` we return *all* the Pauli strings (regardless of whether the circuit is Clifford or not). If set to an integer, we randomly sample the Pauli strings.\n\nThen, let's actually look at the measurements, i.e. $\\sigma_i$:", "_____no_output_____" ] ], [ [ "for trial_result in intermediate_results.trial_results:\n print('rho_i=%.3f\\tsigma_i=%.3f\\tPauli:%s' % (trial_result.pauli_trace.rho_i, trial_result.sigma_i, trial_result.pauli_trace.P_i))", "_____no_output_____" ] ], [ [ "How are these measurements chosen? Since we had set `n_measured_operators=None`, all the measurements are used. If we had set the parameter to an integer, we would only have a subset to start from. We would then, as per the algorithm, sample from this set with replacement according to the probability distribution of $Pr(i)$ (for Clifford circuits, the probabilities are all the same, but for non-Clifford circuits, it means we favor more probable Pauli strings).", "_____no_output_____" ], [ "What about the parameter `samples_per_term`? Remember that the code can handle both a sampler or use a simulator. If we use a sampler, then we can repeat the measurements `samples_per_term` times. In our case, we use a dense matrix simulator and thus we keep that parameter set to `0`.", "_____no_output_____" ], [ "# How do we bound the variance of the fidelity when the circuit is Clifford?\nRecall that the formula for DFE is:\n$$F= \\sum _i Pr(i) \\frac{\\sigma _i}{\\rho_i}$$\n\nBut for Clifford circuits, we have $Pr(i) = \\frac{1}{d}$ and $\\rho_i = 1$ and thus the formula becomes:\n$$F= \\frac{1}{d} \\sum _i \\sigma _i$$\n\nIf we estimate by randomly sampling $N$ values for the indicies $i$ for $\\sigma_i$ we get:\n$$\\hat{F} = \\frac{1}{N} \\sum_{j=1}^N \\sigma _{i(j)}$$\n\nUsing the Bhatia–Davis inequality ([A Better Bound on the Variance, Rajendra Bhatia and Chandler Davis](https://www.jstor.org/stable/2589180)) and the fact that $0 \\le \\sigma_i \\le 1$, we have the variance of:\n$$\\mathrm{Var}\\left[ \\hat{F} \\right] \\le \\frac{(1 - F)F}{N}$$\n\n$$\\mathrm{StdDev}\\left[ \\hat{F} \\right] \\le \\sqrt{\\frac{(1 - F)F}{N}}$$\n\nIn particular, since $0 \\le F \\le 1$ we have:\n$$\\mathrm{StdDev}\\left[ \\hat{F} \\right] \\le \\sqrt{\\frac{(1 - \\frac{1}{2})\\frac{1}{2}}{N}}$$\n\n$$\\mathrm{StdDev}\\left[ \\hat{F} \\right] \\le \\frac{1}{2 \\sqrt{N}}$$", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
d0226a500ed6c8e8c6b03c08a9fcfefb1ef7026a
10,066
ipynb
Jupyter Notebook
languages/south_asia/Gujarati_tutorial.ipynb
glaserti/tutorials
fb56a58bbe2e0ae338b01a9528cecc9b652df7cc
[ "MIT" ]
44
2017-03-18T09:30:50.000Z
2022-02-04T00:05:34.000Z
languages/south_asia/Gujarati_tutorial.ipynb
glaserti/tutorials
fb56a58bbe2e0ae338b01a9528cecc9b652df7cc
[ "MIT" ]
23
2017-06-05T21:03:39.000Z
2022-02-13T18:26:29.000Z
languages/south_asia/Gujarati_tutorial.ipynb
glaserti/tutorials
fb56a58bbe2e0ae338b01a9528cecc9b652df7cc
[ "MIT" ]
29
2017-06-05T19:02:05.000Z
2021-03-09T19:05:29.000Z
25.039801
192
0.536161
[ [ [ "# Gujarati with CLTK", "_____no_output_____" ], [ "See how you can analyse your Gujarati texts with <b>CLTK</b> ! <br>\nLet's begin by adding the `USER_PATH`..", "_____no_output_____" ] ], [ [ "import os\nUSER_PATH = os.path.expanduser('~')", "_____no_output_____" ] ], [ [ "In order to be able to download Gujarati texts from CLTK's Github repo, we will require an importer.", "_____no_output_____" ] ], [ [ "from cltk.corpus.utils.importer import CorpusImporter\ngujarati_downloader = CorpusImporter('gujarati')", "_____no_output_____" ] ], [ [ "We can now see the corpora available for download, by using `list_corpora` feature of the importer. Let's go ahead and try it out!", "_____no_output_____" ] ], [ [ "gujarati_downloader.list_corpora", "_____no_output_____" ] ], [ [ "The corpus <i>gujarati_text_wikisource</i> can be downloaded from the Github repo. The corpus will be downloaded to the directory `cltk_data/gujarati` at the above mentioned `USER_PATH`", "_____no_output_____" ] ], [ [ "gujarati_downloader.import_corpus('gujarati_text_wikisource')", "_____no_output_____" ] ], [ [ "You can see the texts downloaded by doing the following, or checking out the `cltk_data/gujarati/text/gujarati_text_wikisource` directory.", "_____no_output_____" ] ], [ [ "gujarati_corpus_path = os.path.join(USER_PATH,'cltk_data/gujarati/text/gujarati_text_wikisource')\nlist_of_texts = [text for text in os.listdir(gujarati_corpus_path) if '.' not in text]\nprint(list_of_texts)", "['narsinh_mehta', 'kabir', 'vallabhacharya']\n" ] ], [ [ "Great, now that we have our texts, let's take a sample from one of them. For this tutorial, we shall be using <i>govinda_khele_holi</i> , a text by the Gujarati poet Narsinh Mehta.", "_____no_output_____" ] ], [ [ "gujarati_text_path = os.path.join(gujarati_corpus_path,'narsinh_mehta/govinda_khele_holi.txt')\ngujarati_text = open(gujarati_text_path,'r').read()\nprint(gujarati_text)", "વૃંદાવન જઈએ,\nજીહાં ગોવિંદ ખેલે હોળી;\nનટવર વેશ ધર્યો નંદ નંદન,\nમળી મહાવન ટોળી... ચાલો સખી !\n\nએક નાચે એક ચંગ વજાડે,\nછાંટે કેસર ઘોળી;\nએક અબીરગુલાલ ઉડાડે,\nએક ગાય ભાંભર ભોળી... ચાલો સખી !\n\nએક એકને કરે છમકલાં,\nહસી હસી કર લે તાળી;\nમાહોમાહે કરે મરકલાં,\nમધ્ય ખેલે વનમાળી... ચાલો સખી !\n\nવસંત ઋતુ વૃંદાવન સરી,\nફૂલ્યો ફાગણ માસ;\nગોવિંદગોપી રમે રંગભર,\nજુએ નરસૈંયો દાસ... 
ચાલો સખી !\n \n" ] ], [ [ "## Gujarati Alphabets", "_____no_output_____" ], [ "There are 13 vowels, 33 consonants, which are grouped as follows:", "_____no_output_____" ] ], [ [ "from cltk.corpus.gujarati.alphabet import *\nprint(\"Digits:\",DIGITS)\nprint(\"Vowels:\",VOWELS)\nprint(\"Dependent vowels:\",DEPENDENT_VOWELS)\nprint(\"Consonants:\",CONSONANTS)\nprint(\"Velar consonants:\",VELAR_CONSONANTS)\nprint(\"Palatal consonants:\",PALATAL_CONSONANTS)\nprint(\"Retroflex consonants:\",RETROFLEX_CONSONANTS)\nprint(\"Dental consonants:\",DENTAL_CONSONANTS)\nprint(\"Labial consonants:\",LABIAL_CONSONANTS)\nprint(\"Sonorant consonants:\",SONORANT_CONSONANTS)\nprint(\"Sibilant consonants:\",SIBILANT_CONSONANTS)\nprint(\"Guttural consonant:\",GUTTURAL_CONSONANT)\nprint(\"Additional consonants:\",ADDITIONAL_CONSONANTS)\nprint(\"Modifiers:\",MODIFIERS)", "Digits: ['૦', '૧', '૨', '૩', '૪', '૫', '૬', '૭', '૮', '૯', '૧૦']\nVowels: ['અ', 'આ', 'ઇ', 'ઈ', 'ઉ', 'ઊ', 'ઋ', 'એ', 'ઐ', 'ઓ', 'ઔ', 'અં', 'અઃ']\nDependent vowels: ['ા ', 'િ', 'ી', 'ો', 'ૌ']\nConsonants: ['ક', 'ખ', 'ગ', 'ઘ', 'ચ', 'છ', 'જ', 'ઝ', 'ઞ', 'ટ', 'ઠ', 'ડ', 'ઢ', 'ણ', 'ત', 'થ', 'દ', 'ધ', 'ન', 'પ', 'ફ', 'બ', 'ભ', 'મ', 'ય', 'ર', 'લ', 'ળ', 'વ', 'શ', 'ષ', 'સ', 'હ']\nVelar consonants: ['ક', 'ખ', 'ગ', 'ઘ', 'ઙ']\nPalatal consonants: ['ચ', 'છ', 'જ', 'ઝ', 'ઞ']\nRetroflex consonants: ['ટ', 'ઠ', 'ડ', 'ઢ', 'ણ']\nDental consonants: ['ત', 'થ', 'દ', 'ધ', 'ન']\nLabial consonants: ['પ', 'ફ', 'બ', 'ભ', 'મ']\nSonorant consonants: ['ય', 'ર', 'લ', 'વ']\nSibilant consonants: ['શ', 'ષ', 'સ']\nGuttural consonant: ['હ']\nAdditional consonants: ['ળ', 'ક્ષ', 'જ્ઞ']\nModifiers: [' ्', ' ॓', ' ॔']\n" ] ], [ [ "## Transliterations", "_____no_output_____" ], [ "We can transliterate Gujarati scripts to that of other Indic languages. Let us transliterate `કમળ ભારતનો રાષ્ટ્રીય ફૂલ છે`to Kannada:", "_____no_output_____" ] ], [ [ "gujarati_text_two = 'કમળ ભારતનો રાષ્ટ્રીય ફૂલ છે'\nfrom cltk.corpus.sanskrit.itrans.unicode_transliterate import UnicodeIndicTransliterator\nUnicodeIndicTransliterator.transliterate(gujarati_text_two,\"gu\",\"kn\")", "_____no_output_____" ] ], [ [ "We can also romanize the text as shown:", "_____no_output_____" ] ], [ [ "from cltk.corpus.sanskrit.itrans.unicode_transliterate import ItransTransliterator\nItransTransliterator.to_itrans(gujarati_text_two,'gu')", "_____no_output_____" ] ], [ [ "Similarly, we can indicize a text given in its ITRANS-transliteration", "_____no_output_____" ] ], [ [ "gujarati_text_itrans = 'bhaawanaa'\nItransTransliterator.from_itrans(gujarati_text_itrans,'gu')", "_____no_output_____" ] ], [ [ "## Syllabifier", "_____no_output_____" ], [ "We can use the indian_syllabifier to syllabify the Gujarati sentences. To do this, we will have to import models as follows. The importing of `sanskrit_models_cltk` might take some time.", "_____no_output_____" ] ], [ [ "phonetics_model_importer = CorpusImporter('sanskrit')\nphonetics_model_importer.list_corpora\nphonetics_model_importer.import_corpus('sanskrit_models_cltk') ", "_____no_output_____" ] ], [ [ "Now we import the syllabifier and syllabify as follows:", "_____no_output_____" ] ], [ [ "%%capture\nfrom cltk.stem.sanskrit.indian_syllabifier import Syllabifier\ngujarati_syllabifier = Syllabifier('gujarati')\ngujarati_syllables = gujarati_syllabifier.orthographic_syllabify('ભાવના')", "_____no_output_____" ] ], [ [ "The syllables of the word `ભાવના` will thus be:", "_____no_output_____" ] ], [ [ "print(gujarati_syllables)", "['ભા', 'વ', 'ના']\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d0226aed22f893f8ef1c0c3fb0cbc5335ebcdb7f
93,932
ipynb
Jupyter Notebook
3. Landmark Detection and Tracking.ipynb
mitsunami/SLAM
9aa5f35dbe4b110acb0625efb833ca6532c6d108
[ "MIT" ]
null
null
null
3. Landmark Detection and Tracking.ipynb
mitsunami/SLAM
9aa5f35dbe4b110acb0625efb833ca6532c6d108
[ "MIT" ]
null
null
null
3. Landmark Detection and Tracking.ipynb
mitsunami/SLAM
9aa5f35dbe4b110acb0625efb833ca6532c6d108
[ "MIT" ]
null
null
null
113.719128
30,652
0.814493
[ [ [ "# Project 3: Implement SLAM \n\n---\n\n## Project Overview\n\nIn this project, you'll implement SLAM for robot that moves and senses in a 2 dimensional, grid world!\n\nSLAM gives us a way to both localize a robot and build up a map of its environment as a robot moves and senses in real-time. This is an active area of research in the fields of robotics and autonomous systems. Since this localization and map-building relies on the visual sensing of landmarks, this is a computer vision problem. \n\nUsing what you've learned about robot motion, representations of uncertainty in motion and sensing, and localization techniques, you will be tasked with defining a function, `slam`, which takes in six parameters as input and returns the vector `mu`. \n> `mu` contains the (x,y) coordinate locations of the robot as it moves, and the positions of landmarks that it senses in the world\n\nYou can implement helper functions as you see fit, but your function must return `mu`. The vector, `mu`, should have (x, y) coordinates interlaced, for example, if there were 2 poses and 2 landmarks, `mu` will look like the following, where `P` is the robot position and `L` the landmark position:\n```\nmu = matrix([[Px0],\n [Py0],\n [Px1],\n [Py1],\n [Lx0],\n [Ly0],\n [Lx1],\n [Ly1]])\n```\n\nYou can see that `mu` holds the poses first `(x0, y0), (x1, y1), ...,` then the landmark locations at the end of the matrix; we consider a `nx1` matrix to be a vector.\n\n## Generating an environment\n\nIn a real SLAM problem, you may be given a map that contains information about landmark locations, and in this example, we will make our own data using the `make_data` function, which generates a world grid with landmarks in it and then generates data by placing a robot in that world and moving and sensing over some numer of time steps. The `make_data` function relies on a correct implementation of robot move/sense functions, which, at this point, should be complete and in the `robot_class.py` file. The data is collected as an instantiated robot moves and senses in a world. Your SLAM function will take in this data as input. So, let's first create this data and explore how it represents the movement and sensor measurements that our robot takes.\n\n---", "_____no_output_____" ], [ "## Create the world\n\nUse the code below to generate a world of a specified size with randomly generated landmark locations. You can change these parameters and see how your implementation of SLAM responds! \n\n`data` holds the sensors measurements and motion of your robot over time. It stores the measurements as `data[i][0]` and the motion as `data[i][1]`.\n\n#### Helper functions\n\nYou will be working with the `robot` class that may look familiar from the first notebook, \n\nIn fact, in the `helpers.py` file, you can read the details of how data is made with the `make_data` function. 
It should look very similar to the robot move/sense cycle you've seen in the first notebook.", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom helpers import make_data\n\n# your implementation of slam should work with the following inputs\n# feel free to change these input values and see how it responds!\n\n# world parameters\nnum_landmarks = 5 # number of landmarks\nN = 20 # time steps\nworld_size = 100.0 # size of world (square)\n\n# robot parameters\nmeasurement_range = 50.0 # range at which we can sense landmarks\nmotion_noise = 2.0 # noise in robot motion\nmeasurement_noise = 2.0 # noise in the measurements\ndistance = 20.0 # distance by which robot (intends to) move each iteratation \n\n\n# make_data instantiates a robot, AND generates random landmarks for a given world size and number of landmarks\ndata = make_data(N, num_landmarks, world_size, measurement_range, motion_noise, measurement_noise, distance)", " \nLandmarks: [[12, 44], [62, 98], [19, 13], [45, 12], [7, 97]]\nRobot: [x=69.61429 y=95.52181]\n" ] ], [ [ "### A note on `make_data`\n\nThe function above, `make_data`, takes in so many world and robot motion/sensor parameters because it is responsible for:\n1. Instantiating a robot (using the robot class)\n2. Creating a grid world with landmarks in it\n\n**This function also prints out the true location of landmarks and the *final* robot location, which you should refer back to when you test your implementation of SLAM.**\n\nThe `data` this returns is an array that holds information about **robot sensor measurements** and **robot motion** `(dx, dy)` that is collected over a number of time steps, `N`. You will have to use *only* these readings about motion and measurements to track a robot over time and find the determine the location of the landmarks using SLAM. We only print out the true landmark locations for comparison, later.\n\n\nIn `data` the measurement and motion data can be accessed from the first and second index in the columns of the data array. See the following code for an example, where `i` is the time step:\n```\nmeasurement = data[i][0]\nmotion = data[i][1]\n```\n", "_____no_output_____" ] ], [ [ "# print out some stats about the data\ntime_step = 0\n\nprint('Example measurements: \\n', data[time_step][0])\nprint('\\n')\nprint('Example motion: \\n', data[time_step][1])", "Example measurements: \n [[0, -38.94955155697709, -7.2954814723926384], [1, 11.679250951477753, 46.597074026819655], [2, -30.450451619432496, -37.41378043748835], [3, -4.896442127766177, -38.434283116881524], [4, -43.08341118340028, 47.17699212819607]]\n\n\nExample motion: \n [-15.396274422511562, -12.765372454680524]\n" ] ], [ [ "Try changing the value of `time_step`, you should see that the list of measurements varies based on what in the world the robot sees after it moves. As you know from the first notebook, the robot can only sense so far and with a certain amount of accuracy in the measure of distance between its location and the location of landmarks. The motion of the robot always is a vector with two values: one for x and one for y displacement. This structure will be useful to keep in mind as you traverse this data in your implementation of slam.", "_____no_output_____" ], [ "## Initialize Constraints\n\nOne of the most challenging tasks here will be to create and modify the constraint matrix and vector: omega and xi. 
In the second notebook, you saw an example of how omega and xi could hold all the values the define the relationships between robot poses `xi` and landmark positions `Li` in a 1D world, as seen below, where omega is the blue matrix and xi is the pink vector.\n\n<img src='images/motion_constraint.png' width=50% height=50% />\n\n\nIn *this* project, you are tasked with implementing constraints for a 2D world. We are referring to robot poses as `Px, Py` and landmark positions as `Lx, Ly`, and one way to approach this challenge is to add *both* x and y locations in the constraint matrices.\n\n<img src='images/constraints2D.png' width=50% height=50% />\n\nYou may also choose to create two of each omega and xi (one for x and one for y positions).", "_____no_output_____" ], [ "### TODO: Write a function that initializes omega and xi\n\nComplete the function `initialize_constraints` so that it returns `omega` and `xi` constraints for the starting position of the robot. Any values that we do not yet know should be initialized with the value `0`. You may assume that our robot starts out in exactly the middle of the world with 100% confidence (no motion or measurement noise at this point). The inputs `N` time steps, `num_landmarks`, and `world_size` should give you all the information you need to construct intial constraints of the correct size and starting values.\n\n*Depending on your approach you may choose to return one omega and one xi that hold all (x,y) positions *or* two of each (one for x values and one for y); choose whichever makes most sense to you!*", "_____no_output_____" ] ], [ [ "def initialize_constraints(N, num_landmarks, world_size):\n ''' This function takes in a number of time steps N, number of landmarks, and a world_size,\n and returns initialized constraint matrices, omega and xi.'''\n \n ## Recommended: Define and store the size (rows/cols) of the constraint matrix in a variable\n \n ## TODO: Define the constraint matrix, Omega, with two initial \"strength\" values\n ## for the initial x, y location of our robot\n omega = np.zeros((2*N + 2*num_landmarks, 2*N + 2*num_landmarks))\n omega[0,0] = 1\n omega[1,1] = 1\n \n ## TODO: Define the constraint *vector*, xi\n ## you can assume that the robot starts out in the middle of the world with 100% confidence\n xi = np.zeros((2*N + 2*num_landmarks, 1))\n xi[0] = world_size/2\n xi[1] = world_size/2\n \n return omega, xi\n ", "_____no_output_____" ] ], [ [ "### Test as you go\n\nIt's good practice to test out your code, as you go. Since `slam` relies on creating and updating constraint matrices, `omega` and `xi` to account for robot sensor measurements and motion, let's check that they initialize as expected for any given parameters.\n\nBelow, you'll find some test code that allows you to visualize the results of your function `initialize_constraints`. We are using the [seaborn](https://seaborn.pydata.org/) library for visualization.\n\n**Please change the test values of N, landmarks, and world_size and see the results**. Be careful not to use these values as input into your final smal function.\n\nThis code assumes that you have created one of each constraint: `omega` and `xi`, but you can change and add to this code, accordingly. The constraints should vary in size with the number of time steps and landmarks as these values affect the number of poses a robot will take `(Px0,Py0,...Pxn,Pyn)` and landmark locations `(Lx0,Ly0,...Lxn,Lyn)` whose relationships should be tracked in the constraint matrices. 
Recall that `omega` holds the weights of each variable and `xi` holds the value of the sum of these variables, as seen in Notebook 2. You'll need the `world_size` to determine the starting pose of the robot in the world and fill in the initial values for `xi`.", "_____no_output_____" ] ], [ [ "# import data viz resources\nimport matplotlib.pyplot as plt\nfrom pandas import DataFrame\nimport seaborn as sns\n%matplotlib inline", "_____no_output_____" ], [ "# define a small N and world_size (small for ease of visualization)\nN_test = 5\nnum_landmarks_test = 2\nsmall_world = 10\n\n# initialize the constraints\ninitial_omega, initial_xi = initialize_constraints(N_test, num_landmarks_test, small_world)", "_____no_output_____" ], [ "# define figure size\nplt.rcParams[\"figure.figsize\"] = (10,7)\n\n# display omega\nsns.heatmap(DataFrame(initial_omega), cmap='Blues', annot=True, linewidths=.5)", "_____no_output_____" ], [ "# define figure size\nplt.rcParams[\"figure.figsize\"] = (1,7)\n\n# display xi\nsns.heatmap(DataFrame(initial_xi), cmap='Oranges', annot=True, linewidths=.5)", "_____no_output_____" ] ], [ [ "---\n## SLAM inputs \n\nIn addition to `data`, your slam function will also take in:\n* N - The number of time steps that a robot will be moving and sensing\n* num_landmarks - The number of landmarks in the world\n* world_size - The size (w/h) of your world\n* motion_noise - The noise associated with motion; the update confidence for motion should be `1.0/motion_noise`\n* measurement_noise - The noise associated with measurement/sensing; the update weight for measurement should be `1.0/measurement_noise`\n\n#### A note on noise\n\nRecall that `omega` holds the relative \"strengths\" or weights for each position variable, and you can update these weights by accessing the correct index in omega `omega[row][col]` and *adding/subtracting* `1.0/noise` where `noise` is measurement or motion noise. `Xi` holds actual position values, and so to update `xi` you'll do a similar addition process only using the actual value of a motion or measurement. So for a vector index `xi[row][0]` you will end up adding/subtracting one measurement or motion divided by their respective `noise`.\n\n### TODO: Implement Graph SLAM\n\nFollow the TODO's below to help you complete this slam implementation (these TODO's are in the recommended order), then test out your implementation! \n\n#### Updating with motion and measurements\n\nWith a 2D omega and xi structure as shown above (in earlier cells), you'll have to be mindful about how you update the values in these constraint matrices to account for motion and measurement constraints in the x and y directions. 
Recall that the solution to these matrices (which holds all values for robot poses `P` and landmark locations `L`) is the vector, `mu`, which can be computed at the end of the construction of omega and xi as the inverse of omega times xi: $\\mu = \\Omega^{-1}\\xi$\n\n**You may also choose to return the values of `omega` and `xi` if you want to visualize their final state!**", "_____no_output_____" ] ], [ [ "## TODO: Complete the code to implement SLAM\n\n## slam takes in 6 arguments and returns mu, \n## mu is the entire path traversed by a robot (all x,y poses) *and* all landmarks locations\ndef slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise):\n \n ## TODO: Use your initilization to create constraint matrices, omega and xi\n omega, xi = initialize_constraints(N, num_landmarks, world_size)\n \n ## TODO: Iterate through each time step in the data\n ## get all the motion and measurement data as you iterate\n for t in range(N-1):\n \n ## TODO: update the constraint matrix/vector to account for all *measurements*\n ## this should be a series of additions that take into account the measurement noise\n #print(\"data: \", len(data), data[t][0])\n measurements = data[t][0]\n for m in measurements:\n Lnum = m[0]\n Ldx = m[1]\n Ldy = m[2]\n \n omega[2*t+0] [2*t+0] += 1/measurement_noise\n omega[2*t+1] [2*t+1] += 1/measurement_noise\n omega[2*t+0] [2*(N+Lnum)+0] += -1/measurement_noise\n omega[2*t+1] [2*(N+Lnum)+1] += -1/measurement_noise\n omega[2*(N+Lnum)+0][2*t+0] += -1/measurement_noise\n omega[2*(N+Lnum)+1][2*t+1] += -1/measurement_noise\n omega[2*(N+Lnum)+0][2*(N+Lnum)+0] += 1/measurement_noise\n omega[2*(N+Lnum)+1][2*(N+Lnum)+1] += 1/measurement_noise\n \n xi[2*t+0] += -Ldx/measurement_noise\n xi[2*t+1] += -Ldy/measurement_noise\n xi[2*(N+Lnum)+0] += Ldx/measurement_noise\n xi[2*(N+Lnum)+1] += Ldy/measurement_noise\n \n ## TODO: update the constraint matrix/vector to account for all *motion* and motion noise\n motion = data[t][1]\n\n omega[2*t+0][2*t+0] += 1/motion_noise\n omega[2*t+1][2*t+1] += 1/motion_noise\n omega[2*t+0][2*t+2] += -1/motion_noise\n omega[2*t+1][2*t+3] += -1/motion_noise\n omega[2*t+2][2*t+0] += -1/motion_noise\n omega[2*t+3][2*t+1] += -1/motion_noise\n omega[2*t+2][2*t+2] += 1/motion_noise\n omega[2*t+3][2*t+3] += 1/motion_noise\n\n xi[2*t+0] += -motion[0]/motion_noise\n xi[2*t+2] += motion[0]/motion_noise\n xi[2*t+1] += -motion[1]/motion_noise\n xi[2*t+3] += motion[1]/motion_noise\n \n ## TODO: After iterating through all the data\n ## Compute the best estimate of poses and landmark positions\n ## using the formula, omega_inverse * Xi\n mu = np.linalg.inv(np.matrix(omega)) * xi\n \n return mu # return `mu`\n", "_____no_output_____" ] ], [ [ "## Helper functions\n\nTo check that your implementation of SLAM works for various inputs, we have provided two helper functions that will help display the estimated pose and landmark locations that your function has produced. First, given a result `mu` and number of time steps, `N`, we define a function that extracts the poses and landmarks locations and returns those as their own, separate lists. 
\n\nThen, we define a function that nicely print out these lists; both of these we will call, in the next step.\n", "_____no_output_____" ] ], [ [ "# a helper function that creates a list of poses and of landmarks for ease of printing\n# this only works for the suggested constraint architecture of interlaced x,y poses\ndef get_poses_landmarks(mu, N):\n # create a list of poses\n poses = []\n for i in range(N):\n poses.append((mu[2*i].item(), mu[2*i+1].item()))\n\n # create a list of landmarks\n landmarks = []\n for i in range(num_landmarks):\n landmarks.append((mu[2*(N+i)].item(), mu[2*(N+i)+1].item()))\n\n # return completed lists\n return poses, landmarks\n", "_____no_output_____" ], [ "def print_all(poses, landmarks):\n print('\\n')\n print('Estimated Poses:')\n for i in range(len(poses)):\n print('['+', '.join('%.3f'%p for p in poses[i])+']')\n print('\\n')\n print('Estimated Landmarks:')\n for i in range(len(landmarks)):\n print('['+', '.join('%.3f'%l for l in landmarks[i])+']')\n", "_____no_output_____" ] ], [ [ "## Run SLAM\n\nOnce you've completed your implementation of `slam`, see what `mu` it returns for different world sizes and different landmarks!\n\n### What to Expect\n\nThe `data` that is generated is random, but you did specify the number, `N`, or time steps that the robot was expected to move and the `num_landmarks` in the world (which your implementation of `slam` should see and estimate a position for. Your robot should also start with an estimated pose in the very center of your square world, whose size is defined by `world_size`.\n\nWith these values in mind, you should expect to see a result that displays two lists:\n1. **Estimated poses**, a list of (x, y) pairs that is exactly `N` in length since this is how many motions your robot has taken. The very first pose should be the center of your world, i.e. `[50.000, 50.000]` for a world that is 100.0 in square size.\n2. **Estimated landmarks**, a list of landmark positions (x, y) that is exactly `num_landmarks` in length. 
\n\n#### Landmark Locations\n\nIf you refer back to the printout of *exact* landmark locations when this data was created, you should see values that are very similar to those coordinates, but not quite (since `slam` must account for noise in motion and measurement).", "_____no_output_____" ] ], [ [ "# call your implementation of slam, passing in the necessary parameters\nmu = slam(data, N, num_landmarks, world_size, motion_noise, measurement_noise)\n\n# print out the resulting landmarks and poses\nif(mu is not None):\n # get the lists of poses and landmarks\n # and print them out\n poses, landmarks = get_poses_landmarks(mu, N)\n print_all(poses, landmarks)", "\n\nEstimated Poses:\n[50.000, 50.000]\n[35.859, 35.926]\n[21.364, 23.942]\n[6.980, 11.344]\n[24.945, 20.405]\n[43.518, 30.202]\n[62.058, 37.373]\n[79.693, 44.655]\n[95.652, 52.956]\n[77.993, 43.819]\n[60.450, 33.659]\n[41.801, 24.066]\n[23.993, 15.292]\n[7.068, 7.322]\n[23.995, -0.325]\n[32.465, 17.730]\n[41.235, 37.599]\n[50.421, 57.362]\n[59.424, 75.357]\n[67.357, 93.716]\n\n\nEstimated Landmarks:\n[11.692, 44.036]\n[61.744, 96.855]\n[19.061, 12.781]\n[44.483, 11.522]\n[6.063, 96.744]\n" ] ], [ [ "## Visualize the constructed world\n\nFinally, using the `display_world` code from the `helpers.py` file (which was also used in the first notebook), we can actually visualize what you have coded with `slam`: the final position of the robot and the positon of landmarks, created from only motion and measurement data!\n\n**Note that these should be very similar to the printed *true* landmark locations and final pose from our call to `make_data` early in this notebook.**", "_____no_output_____" ] ], [ [ "# import the helper function\nfrom helpers import display_world\n\n# Display the final world!\n\n# define figure size\nplt.rcParams[\"figure.figsize\"] = (20,20)\n\n# check if poses has been created\nif 'poses' in locals():\n # print out the last pose\n print('Last pose: ', poses[-1])\n # display the last position of the robot *and* the landmark positions\n display_world(int(world_size), poses[-1], landmarks)", "Last pose: (67.35712814937992, 93.71611790835976)\n" ] ], [ [ "### Question: How far away is your final pose (as estimated by `slam`) compared to the *true* final pose? Why do you think these poses are different?\n\nYou can find the true value of the final pose in one of the first cells where `make_data` was called. You may also want to look at the true landmark locations and compare them to those that were estimated by `slam`. Ask yourself: what do you think would happen if we moved and sensed more (increased N)? Or if we had lower/higher noise parameters.", "_____no_output_____" ], [ "**Answer**: The true value of the final pose is [x=69.61429 y=95.52181], and it is close to the estimated pose [67.357, 93.716] in my slam implementation. \nAnd the true landmarks are [12, 44], [62, 98], [19, 13], [45, 12], [7, 97] while the estimated are [11.692, 44.036], [61.744, 96.855], [19.061, 12.781], [44.483, 11.522], [6.063, 96.744].\n\nIf we moved and sensed more, the results becomes more accurate. And if we had lower noise parameters, then I can have more acculate results than higher noise parameters.", "_____no_output_____" ], [ "## Testing\n\nTo confirm that your slam code works before submitting your project, it is suggested that you run it on some test data and cases. A few such cases have been provided for you, in the cells below. 
When you are ready, uncomment the test cases in the next cells (there are two test cases, total); your output should be **close-to or exactly** identical to the given results. If there are minor discrepancies it could be a matter of floating point accuracy or in the calculation of the inverse matrix.\n\n### Submit your project\n\nIf you pass these tests, it is a good indication that your project will pass all the specifications in the project rubric. Follow the submission instructions to officially submit!", "_____no_output_____" ] ], [ [ "# Here is the data and estimated outputs for test case 1\n\ntest_data1 = [[[[1, 19.457599255548065, 23.8387362100849], [2, -13.195807561967236, 11.708840328458608], [3, -30.0954905279171, 15.387879242505843]], [-12.2607279422326, -15.801093326936487]], [[[2, -0.4659930049620491, 28.088559771215664], [4, -17.866382374890936, -16.384904503932]], [-12.2607279422326, -15.801093326936487]], [[[4, -6.202512900833806, -1.823403210274639]], [-12.2607279422326, -15.801093326936487]], [[[4, 7.412136480918645, 15.388585962142429]], [14.008259661173426, 14.274756084260822]], [[[4, -7.526138813444998, -0.4563942429717849]], [14.008259661173426, 14.274756084260822]], [[[2, -6.299793150150058, 29.047830407717623], [4, -21.93551130411791, -13.21956810989039]], [14.008259661173426, 14.274756084260822]], [[[1, 15.796300959032276, 30.65769689694247], [2, -18.64370821983482, 17.380022987031367]], [14.008259661173426, 14.274756084260822]], [[[1, 0.40311325410337906, 14.169429532679855], [2, -35.069349468466235, 2.4945558982439957]], [14.008259661173426, 14.274756084260822]], [[[1, -16.71340983241936, -2.777000269543834]], [-11.006096015782283, 16.699276945166858]], [[[1, -3.611096830835776, -17.954019226763958]], [-19.693482634035977, 3.488085684573048]], [[[1, 18.398273354362416, -22.705102332550947]], [-19.693482634035977, 3.488085684573048]], [[[2, 2.789312482883833, -39.73720193121324]], [12.849049222879723, -15.326510824972983]], [[[1, 21.26897046581808, -10.121029799040915], [2, -11.917698965880655, -23.17711662602097], [3, -31.81167947898398, -16.7985673023331]], [12.849049222879723, -15.326510824972983]], [[[1, 10.48157743234859, 5.692957082575485], [2, -22.31488473554935, -5.389184118551409], [3, -40.81803984305378, -2.4703329790238118]], [12.849049222879723, -15.326510824972983]], [[[0, 10.591050242096598, -39.2051798967113], [1, -3.5675572049297553, 22.849456408289125], [2, -38.39251065320351, 7.288990306029511]], [12.849049222879723, -15.326510824972983]], [[[0, -3.6225556479370766, -25.58006865235512]], [-7.8874682868419965, -18.379005523261092]], [[[0, 1.9784503557879374, -6.5025974151499]], [-7.8874682868419965, -18.379005523261092]], [[[0, 10.050665232782423, 11.026385307998742]], [-17.82919359778298, 9.062000642947142]], [[[0, 26.526838150174818, -0.22563393232425621], [4, -33.70303936886652, 2.880339841013677]], [-17.82919359778298, 9.062000642947142]]]\n\n## Test Case 1\n##\n# Estimated Pose(s):\n# [50.000, 50.000]\n# [37.858, 33.921]\n# [25.905, 18.268]\n# [13.524, 2.224]\n# [27.912, 16.886]\n# [42.250, 30.994]\n# [55.992, 44.886]\n# [70.749, 59.867]\n# [85.371, 75.230]\n# [73.831, 92.354]\n# [53.406, 96.465]\n# [34.370, 100.134]\n# [48.346, 83.952]\n# [60.494, 68.338]\n# [73.648, 53.082]\n# [86.733, 38.197]\n# [79.983, 20.324]\n# [72.515, 2.837]\n# [54.993, 13.221]\n# [37.164, 22.283]\n\n\n# Estimated Landmarks:\n# [82.679, 13.435]\n# [70.417, 74.203]\n# [36.688, 61.431]\n# [18.705, 66.136]\n# [20.437, 16.983]\n\n\n### Uncomment the following three lines 
for test case 1 and compare the output to the values above ###\n\nmu_1 = slam(test_data1, 20, 5, 100.0, 2.0, 2.0)\nposes, landmarks = get_poses_landmarks(mu_1, 20)\nprint_all(poses, landmarks)", "\n\nEstimated Poses:\n[50.000, 50.000]\n[37.973, 33.652]\n[26.185, 18.155]\n[13.745, 2.116]\n[28.097, 16.783]\n[42.384, 30.902]\n[55.831, 44.497]\n[70.857, 59.699]\n[85.697, 75.543]\n[74.011, 92.434]\n[53.544, 96.454]\n[34.525, 100.080]\n[48.623, 83.953]\n[60.197, 68.107]\n[73.778, 52.935]\n[87.132, 38.538]\n[80.303, 20.508]\n[72.798, 2.945]\n[55.245, 13.255]\n[37.416, 22.317]\n\n\nEstimated Landmarks:\n[82.956, 13.539]\n[70.495, 74.141]\n[36.740, 61.281]\n[18.698, 66.060]\n[20.635, 16.875]\n" ], [ "# Here is the data and estimated outputs for test case 2\n\ntest_data2 = [[[[0, 26.543274387283322, -6.262538160312672], [3, 9.937396825799755, -9.128540360867689]], [18.92765331253674, -6.460955043986683]], [[[0, 7.706544739722961, -3.758467215445748], [1, 17.03954411948937, 31.705489938553438], [3, -11.61731288777497, -6.64964096716416]], [18.92765331253674, -6.460955043986683]], [[[0, -12.35130507136378, 2.585119104239249], [1, -2.563534536165313, 38.22159657838369], [3, -26.961236804740935, -0.4802312626141525]], [-11.167066095509824, 16.592065417497455]], [[[0, 1.4138633151721272, -13.912454837810632], [1, 8.087721200818589, 20.51845934354381], [3, -17.091723454402302, -16.521500551709707], [4, -7.414211721400232, 38.09191602674439]], [-11.167066095509824, 16.592065417497455]], [[[0, 12.886743222179561, -28.703968411636318], [1, 21.660953298391387, 3.4912891084614914], [3, -6.401401414569506, -32.321583037341625], [4, 5.034079343639034, 23.102207946092893]], [-11.167066095509824, 16.592065417497455]], [[[1, 31.126317672358578, -10.036784369535214], [2, -38.70878528420893, 7.4987265861424595], [4, 17.977218575473767, 6.150889254289742]], [-6.595520680493778, -18.88118393939265]], [[[1, 41.82460922922086, 7.847527392202475], [3, 15.711709540417502, -30.34633659912818]], [-6.595520680493778, -18.88118393939265]], [[[0, 40.18454208294434, -6.710999804403755], [3, 23.019508919299156, -10.12110867290604]], [-6.595520680493778, -18.88118393939265]], [[[3, 27.18579315312821, 8.067219022708391]], [-6.595520680493778, -18.88118393939265]], [[], [11.492663265706092, 16.36822198838621]], [[[3, 24.57154567653098, 13.461499960708197]], [11.492663265706092, 16.36822198838621]], [[[0, 31.61945290413707, 0.4272295085799329], [3, 16.97392299158991, -5.274596836133088]], [11.492663265706092, 16.36822198838621]], [[[0, 22.407381798735177, -18.03500068379259], [1, 29.642444125196995, 17.3794951934614], [3, 4.7969752441371645, -21.07505361639969], [4, 14.726069092569372, 32.75999422300078]], [11.492663265706092, 16.36822198838621]], [[[0, 10.705527984670137, -34.589764174299596], [1, 18.58772336795603, -0.20109708164787765], [3, -4.839806195049413, -39.92208742305105], [4, 4.18824810165454, 14.146847823548889]], [11.492663265706092, 16.36822198838621]], [[[1, 5.878492140223764, -19.955352450942357], [4, -7.059505455306587, -0.9740849280550585]], [19.628527845173146, 3.83678180657467]], [[[1, -11.150789592446378, -22.736641053247872], [4, -28.832815721158255, -3.9462962046291388]], [-19.841703647091965, 2.5113335861604362]], [[[1, 8.64427397916182, -20.286336970889053], [4, -5.036917727942285, -6.311739993868336]], [-5.946642674882207, -19.09548221169787]], [[[0, 7.151866679283043, -39.56103232616369], [1, 16.01535401373368, -3.780995345194027], [4, -3.04801331832137, 13.697362774960865]], [-5.946642674882207, 
-19.09548221169787]], [[[0, 12.872879480504395, -19.707592098123207], [1, 22.236710716903136, 16.331770792606406], [3, -4.841206109583004, -21.24604435851242], [4, 4.27111163223552, 32.25309748614184]], [-5.946642674882207, -19.09548221169787]]] \n\n\n## Test Case 2\n##\n# Estimated Pose(s):\n# [50.000, 50.000]\n# [69.035, 45.061]\n# [87.655, 38.971]\n# [76.084, 55.541]\n# [64.283, 71.684]\n# [52.396, 87.887]\n# [44.674, 68.948]\n# [37.532, 49.680]\n# [31.392, 30.893]\n# [24.796, 12.012]\n# [33.641, 26.440]\n# [43.858, 43.560]\n# [54.735, 60.659]\n# [65.884, 77.791]\n# [77.413, 94.554]\n# [96.740, 98.020]\n# [76.149, 99.586]\n# [70.211, 80.580]\n# [64.130, 61.270]\n# [58.183, 42.175]\n\n\n# Estimated Landmarks:\n# [76.777, 42.415]\n# [85.109, 76.850]\n# [13.687, 95.386]\n# [59.488, 39.149]\n# [69.283, 93.654]\n\n\n### Uncomment the following three lines for test case 2 and compare to the values above ###\n\nmu_2 = slam(test_data2, 20, 5, 100.0, 2.0, 2.0)\nposes, landmarks = get_poses_landmarks(mu_2, 20)\nprint_all(poses, landmarks)\n", "\n\nEstimated Poses:\n[50.000, 50.000]\n[69.181, 45.665]\n[87.743, 39.703]\n[76.270, 56.311]\n[64.317, 72.176]\n[52.257, 88.154]\n[44.059, 69.401]\n[37.002, 49.918]\n[30.924, 30.955]\n[23.508, 11.419]\n[34.180, 27.133]\n[44.155, 43.846]\n[54.806, 60.920]\n[65.698, 78.546]\n[77.468, 95.626]\n[96.802, 98.821]\n[75.957, 99.971]\n[70.200, 81.181]\n[64.054, 61.723]\n[58.107, 42.628]\n\n\nEstimated Landmarks:\n[76.779, 42.887]\n[85.065, 77.438]\n[13.548, 95.652]\n[59.449, 39.595]\n[69.263, 94.240]\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ] ]
d0226f90661049b017f88d274cbf51df7ed8fa52
36,457
ipynb
Jupyter Notebook
Nets on Spectral data/01_PDU_Total_Designed_Inception.ipynb
Saman689/Weed-sensing-basics
25355b20af94432fbe43969cc21fcbf402d01972
[ "MIT" ]
null
null
null
Nets on Spectral data/01_PDU_Total_Designed_Inception.ipynb
Saman689/Weed-sensing-basics
25355b20af94432fbe43969cc21fcbf402d01972
[ "MIT" ]
null
null
null
Nets on Spectral data/01_PDU_Total_Designed_Inception.ipynb
Saman689/Weed-sensing-basics
25355b20af94432fbe43969cc21fcbf402d01972
[ "MIT" ]
null
null
null
37.623323
154
0.498779
[ [ [ "### In this notebook we investigate a designed simple Inception network on PDU data", "_____no_output_____" ] ], [ [ "%reload_ext autoreload\n%autoreload 2\n%matplotlib inline", "_____no_output_____" ] ], [ [ "### Importing the libraries", "_____no_output_____" ] ], [ [ "import torch \n\nimport torch.nn as nn\nimport torch.utils.data as Data\nfrom torch.autograd import Function, Variable\nfrom torch.optim import lr_scheduler\n\nimport torchvision\nimport torchvision.transforms as transforms\nimport torch.backends.cudnn as cudnn\n\nfrom pathlib import Path\nimport os\nimport copy\nimport math\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom datetime import datetime\nimport time as time\n\nimport warnings", "_____no_output_____" ] ], [ [ "#### Checking whether the GPU is active", "_____no_output_____" ] ], [ [ "torch.backends.cudnn.enabled", "_____no_output_____" ], [ "torch.cuda.is_available()", "_____no_output_____" ], [ "torch.cuda.init()", "_____no_output_____" ] ], [ [ "#### Dataset paths", "_____no_output_____" ] ], [ [ "PATH = Path(\"/home/saman/Saman/data/PDU_Raw_Data01/Test06_600x30/\")\ntrain_path = PATH / 'train' / 'Total'\nvalid_path = PATH / 'valid' / 'Total'\ntest_path = PATH / 'test' / 'Total'", "_____no_output_____" ] ], [ [ "### Model parameters", "_____no_output_____" ] ], [ [ "Num_Filter1= 16\nNum_Filter2= 64\nKer_Sz1 = 5\nKer_Sz2 = 5\n\nlearning_rate= 0.0001\n\nDropout= 0.2\nBchSz= 32\nEPOCH= 5", "_____no_output_____" ] ], [ [ "### Data Augmenation", "_____no_output_____" ] ], [ [ "# Mode of transformation\ntransformation = transforms.Compose([\n transforms.RandomVerticalFlip(),\n transforms.RandomHorizontalFlip(),\n transforms.ToTensor(),\n transforms.Normalize((0,0,0), (0.5,0.5,0.5)),\n]) \n\ntransformation2 = transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0,0,0), (0.5,0.5,0.5)), \n]) ", "_____no_output_____" ], [ "# Loss calculator\ncriterion = nn.CrossEntropyLoss() # cross entropy loss", "_____no_output_____" ] ], [ [ "### Defining models", "_____no_output_____" ], [ "#### Defining a class of our simple model", "_____no_output_____" ] ], [ [ "class ConvNet(nn.Module):\n def __init__(self, Num_Filter1 , Num_Filter2, Ker_Sz1, Ker_Sz2, Dropout, num_classes=2):\n super(ConvNet, self).__init__()\n self.layer1 = nn.Sequential( \n nn.Conv2d( # input shape (3, 30, 600)\n in_channels=3, # input height\n out_channels=Num_Filter1, # n_filters\n kernel_size=Ker_Sz1, # Kernel size\n stride=1, # filter movement/step\n padding=int((Ker_Sz1-1)/2), # if want same width and length of this image after con2d,\n ), # padding=(kernel_size-1)/2 if stride=1\n nn.BatchNorm2d(Num_Filter1), # Batch Normalization\n nn.ReLU(), # Rectified linear activation\n nn.MaxPool2d(kernel_size=2, stride=2)) # choose max value in 2x2 area, \n \n # Visualizing this in https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md\n \n self.layer2 = nn.Sequential(\n nn.Conv2d(Num_Filter1, Num_Filter2, \n kernel_size=Ker_Sz2, \n stride=1, \n padding=int((Ker_Sz2-1)/2)),\n nn.BatchNorm2d(Num_Filter2), \n nn.ReLU(),\n nn.MaxPool2d(kernel_size=2, stride=2), # output shape (64, 38, 38)\n nn.Dropout2d(p=Dropout))\n \n self.fc = nn.Linear(1050*Num_Filter2, num_classes) # fully connected layer, output 2 classes\n\n \n \n def forward(self, x): # Forwarding the data to classifier \n out = self.layer1(x)\n out = self.layer2(out)\n out = out.reshape(out.size(0), -1) # flatten the output of conv2 to (batch_size, 64*38*38)\n out = self.fc(out)\n return out", 
"_____no_output_____" ] ], [ [ "### Defining inception classes", "_____no_output_____" ] ], [ [ "class BasicConv2d(nn.Module):\n\n def __init__(self, in_planes, out_planes, **kwargs):\n super(BasicConv2d, self).__init__()\n self.conv = nn.Conv2d(in_planes, out_planes, bias=False, **kwargs)\n self.bn = nn.BatchNorm2d(out_planes, eps=0.001)\n self.relu = nn.ReLU(inplace=True)\n\n def forward(self, x):\n x = self.conv(x)\n x = self.bn(x)\n out = self.relu(x)\n return x", "_____no_output_____" ], [ "class Inception(nn.Module):\n\n def __init__(self, in_channels):\n super(Inception, self).__init__()\n self.branch3x3 = BasicConv2d(in_channels, 384, kernel_size=3, stride=2)\n\n self.branch3x3dbl_1 = BasicConv2d(in_channels, 64, kernel_size=1)\n self.branch3x3dbl_2 = BasicConv2d(64, 96, kernel_size=3, padding=1)\n self.branch3x3dbl_3 = BasicConv2d(96, 96, kernel_size=3, stride=2)\n \n self.avgpool = nn.AvgPool2d(kernel_size=3, stride=2)\n\n def forward(self, x):\n branch3x3 = self.branch3x3(x)\n\n branch3x3dbl = self.branch3x3dbl_1(x)\n branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)\n branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl)\n\n branch_pool = self.avgpool(x)\n\n outputs = [branch3x3, branch3x3dbl, branch_pool]\n return torch.cat(outputs, 1)", "_____no_output_____" ], [ "class Inception_Net(nn.Module):\n def __init__(self, Num_Filter1 , Num_Filter2, Ker_Sz1, Ker_Sz2, Dropout, num_classes=2):\n super(Inception_Net, self).__init__()\n self.layer1 = nn.Sequential( \n nn.Conv2d( # input shape (3, 30, 600)\n in_channels=3, # input height\n out_channels=Num_Filter1, # n_filters\n kernel_size=Ker_Sz1, # Kernel size\n stride=1, # filter movement/step\n padding=int((Ker_Sz1-1)/2), # if want same width and length of this image after con2d,\n ), # padding=(kernel_size-1)/2 if stride=1\n nn.BatchNorm2d(Num_Filter1), # Batch Normalization\n nn.ReLU(), # Rectified linear activation\n nn.MaxPool2d(kernel_size=2, stride=2)) # choose max value in 2x2 area, \n \n # Visualizing this in https://github.com/vdumoulin/conv_arithmetic/blob/master/README.md\n \n self.layer2 = nn.Sequential(\n nn.Conv2d(Num_Filter1, Num_Filter2, \n kernel_size=Ker_Sz2, \n stride=1, \n padding=int((Ker_Sz2-1)/2)),\n nn.BatchNorm2d(Num_Filter2), \n nn.ReLU(),\n nn.MaxPool2d(kernel_size=2, stride=2), # output shape (64, 38, 38)\n nn.Dropout2d(p=Dropout))\n \n self.Inception = Inception(Num_Filter2)\n \n self.fc = nn.Linear(120768, num_classes) # fully connected layer, output 2 classes\n \n def forward(self, x): # Forwarding the data to classifier \n out = self.layer1(x)\n out = self.layer2(out)\n out = self.Inception(out)\n out = out.reshape(out.size(0), -1) # flatten the output of conv2 to (batch_size, 64*38*38)\n out = self.fc(out)\n return out", "_____no_output_____" ] ], [ [ "### Finding number of parameter in our model", "_____no_output_____" ] ], [ [ "def print_num_params(model):\n TotalParam=0\n for param in list(model.parameters()):\n print(\"Individual parameters are:\")\n nn=1\n for size in list(param.size()):\n print(size)\n nn = nn*size\n print(\"Total parameters: {}\" .format(param.numel()))\n TotalParam += nn\n print('-' * 10)\n print(\"Sum of all Parameters is: {}\" .format(TotalParam))", "_____no_output_____" ], [ "def get_num_params(model):\n TotalParam=0\n for param in list(model.parameters()):\n nn=1\n for size in list(param.size()):\n nn = nn*size\n TotalParam += nn\n return TotalParam", "_____no_output_____" ] ], [ [ "### Training and Validating", "_____no_output_____" ], [ "#### Training and validation 
function", "_____no_output_____" ] ], [ [ "def train_model(model, criterion, optimizer, Dropout, learning_rate, BATCHSIZE, num_epochs):\n print(str(datetime.now()).split('.')[0], \"Starting training and validation...\\n\")\n print(\"====================Data and Hyperparameter Overview====================\\n\")\n print(\"Number of training examples: {} , Number of validation examples: {} \\n\".format(len(train_data), len(valid_data)))\n \n print(\"Dropout:{:,.2f}, Learning rate: {:,.5f} \" \n .format( Dropout, learning_rate )) \n print(\"Batch size: {}, Number of epochs: {} \" \n .format(BATCHSIZE, num_epochs)) \n \n print(\"Number of parameter in the model: {}\". format(get_num_params(model)))\n \n print(\"================================Results...==============================\\n\")\n\n since = time.time() #record the beginning time\n\n best_model = model\n best_acc = 0.0\n acc_vect =[] \n\n for epoch in range(num_epochs):\n for i, (images, labels) in enumerate(train_loader): \n images = Variable(images).cuda()\n labels = Variable(labels).cuda()\n\n # Forward pass\n outputs = model(images) # model output\n loss = criterion(outputs, labels) # cross entropy loss\n\n # Trying binary cross entropy\n #loss = criterion(torch.max(outputs.data, 1), labels)\n #loss = torch.nn.functional.binary_cross_entropy(outputs, labels)\n \n \n\n # Backward and optimize\n optimizer.zero_grad() # clear gradients for this training step\n loss.backward() # backpropagation, compute gradients\n optimizer.step() # apply gradients\n\n if (i+1) % 1000 == 0: # Reporting the loss and progress every 50 step\n print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}' \n .format(epoch+1, num_epochs, i+1, len(train_loader), loss.item()))\n\n model.eval() # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)\n\n with torch.no_grad():\n correct = 0\n total = 0\n for images, labels in valid_loader:\n images = Variable(images).cuda()\n labels = Variable(labels).cuda()\n \n outputs = model(images)\n _, predicted = torch.max(outputs.data, 1)\n\n loss = criterion(outputs, labels)\n loss += loss.item()\n\n total += labels.size(0)\n correct += (predicted == labels).sum().item()\n\n epoch_loss= loss / total\n epoch_acc = 100 * correct / total\n acc_vect.append(epoch_acc)\n\n if epoch_acc > best_acc:\n best_acc = epoch_acc\n best_model = copy.deepcopy(model)\n\n print('Validation accuracy and loss of the model on {} images: {} %, {:.5f}'\n .format(len(valid_data), 100 * correct / total, loss))\n\n correct = 0\n total = 0\n for images, labels in train_loader:\n images = Variable(images).cuda()\n labels = Variable(labels).cuda()\n \n outputs = model(images)\n _, predicted = torch.max(outputs.data, 1)\n\n loss = criterion(outputs, labels)\n loss += loss.item()\n\n total += labels.size(0)\n correct += (predicted == labels).sum().item()\n\n epoch_loss= loss / total\n epoch_acc = 100 * correct / total\n\n print('Train accuracy and loss of the model on {} images: {} %, {:.5f}'\n .format(len(train_data), epoch_acc, loss))\n print('-' * 10)\n\n time_elapsed = time.time() - since\n print('Training complete in {:.0f}m {:.0f}s'.format(\n time_elapsed // 60, time_elapsed % 60))\n print('Best validation Acc: {:4f}'.format(best_acc)) \n \n mean_acc = np.mean(acc_vect)\n print('Average accuracy on the validation {} images: {}'\n .format(len(train_data),mean_acc))\n print('-' * 10)\n return best_model, mean_acc", "_____no_output_____" ] ], [ [ "### Testing function", "_____no_output_____" ] ], [ [ "def test_model(model, 
test_loader):\n print(\"Starting testing...\\n\")\n model.eval() # eval mode (batchnorm uses moving mean/variance instead of mini-batch mean/variance)\n\n with torch.no_grad():\n correct = 0\n total = 0\n test_loss_vect=[]\n test_acc_vect=[]\n \n since = time.time() #record the beginning time\n \n for i in range(10):\n \n Indx = torch.randperm(len(test_data))\n Cut=int(len(Indx)/10) # Here 10% showing the proportion of data is chosen for pooling\n indices=Indx[:Cut] \n Sampler = Data.SubsetRandomSampler(indices)\n pooled_data = torch.utils.data.DataLoader(test_data , batch_size=BchSz,sampler=Sampler)\n\n for images, labels in pooled_data:\n images = Variable(images).cuda()\n labels = Variable(labels).cuda()\n \n outputs = model(images)\n _, predicted = torch.max(outputs.data, 1)\n loss = criterion(outputs, labels)\n total += labels.size(0)\n correct += (predicted == labels).sum().item()\n \n test_loss= loss / total\n test_accuracy= 100 * correct / total\n \n test_loss_vect.append(test_loss)\n test_acc_vect.append(test_accuracy)\n\n \n# print('Test accuracy and loss for the {}th pool: {:.2f} %, {:.5f}'\n# .format(i+1, test_accuracy, test_loss))\n \n \n mean_test_loss = np.mean(test_loss_vect)\n mean_test_acc = np.mean(test_acc_vect)\n std_test_acc = np.std(test_acc_vect)\n \n print('-' * 10)\n print('Average test accuracy on test data: {:.2f} %, loss: {:.5f}, Standard deviion of accuracy: {:.4f}'\n .format(mean_test_acc, mean_test_loss, std_test_acc))\n \n print('-' * 10)\n time_elapsed = time.time() - since\n print('Testing complete in {:.1f}m {:.4f}s'.format(time_elapsed // 60, time_elapsed % 60))\n \n print('-' * 10)\n \n return mean_test_acc, mean_test_loss, std_test_acc", "_____no_output_____" ] ], [ [ "### Applying aumentation and batch size", "_____no_output_____" ] ], [ [ "## Using batch size to load data\ntrain_data = torchvision.datasets.ImageFolder(train_path,transform=transformation)\ntrain_loader =torch.utils.data.DataLoader(train_data, batch_size=BchSz, shuffle=True,\n num_workers=8)\n\nvalid_data = torchvision.datasets.ImageFolder(valid_path,transform=transformation)\nvalid_loader =torch.utils.data.DataLoader(valid_data, batch_size=BchSz, shuffle=True,\n num_workers=8)\n\ntest_data = torchvision.datasets.ImageFolder(test_path,transform=transformation2)\ntest_loader =torch.utils.data.DataLoader(test_data, batch_size=BchSz, shuffle=True,\n num_workers=8)", "_____no_output_____" ], [ "model = Inception_Net(Num_Filter1 , Num_Filter2, Ker_Sz1, Ker_Sz2, Dropout, num_classes=2)\n\nmodel = model.cuda()\nprint(model)\n\n# Defining optimizer with variable learning rate\noptimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)\noptimizer.scheduler=lr_scheduler.ReduceLROnPlateau(optimizer, 'min') ", "Inception_Net(\n (layer1): Sequential(\n (0): Conv2d(3, 16, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))\n (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (2): ReLU()\n (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n )\n (layer2): Sequential(\n (0): Conv2d(16, 64, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))\n (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n (2): ReLU()\n (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n (4): Dropout2d(p=0.2)\n )\n (Inception): Inception(\n (branch3x3): BasicConv2d(\n (conv): Conv2d(64, 384, kernel_size=(3, 3), stride=(2, 2), bias=False)\n (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, 
affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (branch3x3dbl_1): BasicConv2d(\n (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)\n (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (branch3x3dbl_2): BasicConv2d(\n (conv): Conv2d(64, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (branch3x3dbl_3): BasicConv2d(\n (conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), bias=False)\n (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)\n (relu): ReLU(inplace)\n )\n (avgpool): AvgPool2d(kernel_size=3, stride=2, padding=0)\n )\n (fc): Linear(in_features=120768, out_features=2, bias=True)\n)\n" ], [ "get_num_params(model)", "_____no_output_____" ], [ "seed= [1, 3, 7, 19, 22]\n\nval_acc_vect=[]\ntest_acc_vect=[]\n\n\nfor ii in seed: \n torch.cuda.manual_seed(ii)\n torch.manual_seed(ii)\n \n model, val_acc= train_model(model, criterion, optimizer, Dropout, learning_rate, BchSz, EPOCH)\n testing = test_model (model, test_loader)\n test_acc= testing[0]\n \n \n val_acc_vect.append( val_acc )\n test_acc_vect.append(test_acc)\n \n mean_val_acc = np.mean(val_acc_vect)\n mean_test_acc = np.mean(test_acc_vect)\n \n \nprint('-' * 10)\nprint('-' * 10)\nprint('Average of validation accuracies on 5 different random seed: {:.2f} %, Average of testing accuracies on 5 different random seed: {:.2f} %'\n .format(mean_val_acc, mean_test_acc)) \n ", "2019-03-01 15:11:27 Starting training and validation...\n\n====================Data and Hyperparameter Overview====================\n\nNumber of training examples: 24000 , Number of validation examples: 8000 \n\nDropout:0.20, Learning rate: 0.00010 \nBatch size: 32, Number of epochs: 5 \nNumber of parameter in the model: 633378\n================================Results...==============================\n\nValidation accuracy and loss of the model on 8000 images: 64.9 %, 1.33086\nTrain accuracy and loss of the model on 24000 images: 62.7375 %, 1.01242\n----------\nValidation accuracy and loss of the model on 8000 images: 75.4 %, 0.76369\nTrain accuracy and loss of the model on 24000 images: 77.225 %, 1.38264\n----------\nValidation accuracy and loss of the model on 8000 images: 77.35 %, 1.22606\nTrain accuracy and loss of the model on 24000 images: 87.25833333333334 %, 0.64452\n----------\nValidation accuracy and loss of the model on 8000 images: 72.8875 %, 0.65668\nTrain accuracy and loss of the model on 24000 images: 88.3125 %, 0.52884\n----------\nValidation accuracy and loss of the model on 8000 images: 79.6875 %, 1.17200\nTrain accuracy and loss of the model on 24000 images: 95.64583333333333 %, 0.63624\n----------\nTraining complete in 1m 55s\nBest validation Acc: 79.687500\nAverage accuracy on the validation 24000 images: 74.045\n----------\nStarting testing...\n\n----------\nAverage test accuracy on test data: 77.27 %, loss: 0.00026, Standard deviion of accuracy: 0.7046\n----------\nTesting complete in 0.0m 5.8832s\n----------\n2019-03-01 15:13:28 Starting training and validation...\n\n====================Data and Hyperparameter Overview====================\n\nNumber of training examples: 24000 , Number of validation examples: 8000 \n\nDropout:0.20, Learning rate: 0.00010 \nBatch size: 32, Number of epochs: 5 \nNumber of parameter in the model: 
633378\n================================Results...==============================\n\nValidation accuracy and loss of the model on 8000 images: 80.275 %, 0.75893\nTrain accuracy and loss of the model on 24000 images: 95.59583333333333 %, 0.11324\n----------\nValidation accuracy and loss of the model on 8000 images: 79.4 %, 1.01741\nTrain accuracy and loss of the model on 24000 images: 95.62916666666666 %, 0.20947\n----------\nValidation accuracy and loss of the model on 8000 images: 80.3875 %, 0.54221\nTrain accuracy and loss of the model on 24000 images: 95.6375 %, 0.08113\n----------\nValidation accuracy and loss of the model on 8000 images: 79.375 %, 0.50299\nTrain accuracy and loss of the model on 24000 images: 95.59583333333333 %, 0.42088\n----------\nValidation accuracy and loss of the model on 8000 images: 80.075 %, 2.54078\nTrain accuracy and loss of the model on 24000 images: 95.75416666666666 %, 0.24887\n----------\nTraining complete in 1m 55s\nBest validation Acc: 80.387500\nAverage accuracy on the validation 24000 images: 79.9025\n----------\nStarting testing...\n\n----------\nAverage test accuracy on test data: 76.61 %, loss: 0.00041, Standard deviion of accuracy: 0.4764\n----------\nTesting complete in 0.0m 5.7241s\n----------\n2019-03-01 15:15:28 Starting training and validation...\n\n====================Data and Hyperparameter Overview====================\n\nNumber of training examples: 24000 , Number of validation examples: 8000 \n\nDropout:0.20, Learning rate: 0.00010 \nBatch size: 32, Number of epochs: 5 \nNumber of parameter in the model: 633378\n================================Results...==============================\n\nValidation accuracy and loss of the model on 8000 images: 80.0625 %, 1.32076\nTrain accuracy and loss of the model on 24000 images: 95.54166666666667 %, 0.43024\n----------\nValidation accuracy and loss of the model on 8000 images: 79.8875 %, 0.41576\nTrain accuracy and loss of the model on 24000 images: 95.54166666666667 %, 0.24901\n----------\nValidation accuracy and loss of the model on 8000 images: 79.575 %, 1.62173\nTrain accuracy and loss of the model on 24000 images: 95.81666666666666 %, 0.24963\n----------\nValidation accuracy and loss of the model on 8000 images: 79.925 %, 2.40927\nTrain accuracy and loss of the model on 24000 images: 95.60833333333333 %, 0.15915\n----------\nValidation accuracy and loss of the model on 8000 images: 80.1 %, 1.71480\nTrain accuracy and loss of the model on 24000 images: 95.70416666666667 %, 0.18263\n----------\nTraining complete in 1m 54s\nBest validation Acc: 80.100000\nAverage accuracy on the validation 24000 images: 79.91\n----------\nStarting testing...\n\n----------\nAverage test accuracy on test data: 76.58 %, loss: 0.00036, Standard deviion of accuracy: 0.3228\n----------\nTesting complete in 0.0m 5.7930s\n----------\n2019-03-01 15:17:29 Starting training and validation...\n\n====================Data and Hyperparameter Overview====================\n\nNumber of training examples: 24000 , Number of validation examples: 8000 \n\nDropout:0.20, Learning rate: 0.00010 \nBatch size: 32, Number of epochs: 5 \nNumber of parameter in the model: 633378\n================================Results...==============================\n\nValidation accuracy and loss of the model on 8000 images: 79.85 %, 1.32361\nTrain accuracy and loss of the model on 24000 images: 95.53333333333333 %, 0.22441\n----------\nValidation accuracy and loss of the model on 8000 images: 80.225 %, 1.95208\nTrain accuracy and loss of the model on 24000 
images: 95.63333333333334 %, 0.08277\n----------\nValidation accuracy and loss of the model on 8000 images: 79.425 %, 1.50681\nTrain accuracy and loss of the model on 24000 images: 95.70416666666667 %, 0.11324\n----------\nValidation accuracy and loss of the model on 8000 images: 80.0625 %, 1.03933\nTrain accuracy and loss of the model on 24000 images: 95.58333333333333 %, 0.67020\n----------\nValidation accuracy and loss of the model on 8000 images: 79.875 %, 0.84893\nTrain accuracy and loss of the model on 24000 images: 95.52083333333333 %, 0.12579\n----------\nTraining complete in 1m 55s\nBest validation Acc: 80.225000\nAverage accuracy on the validation 24000 images: 79.8875\n----------\nStarting testing...\n\n----------\nAverage test accuracy on test data: 76.76 %, loss: 0.00031, Standard deviion of accuracy: 0.6555\n----------\nTesting complete in 0.0m 5.8354s\n----------\n2019-03-01 15:19:29 Starting training and validation...\n\n====================Data and Hyperparameter Overview====================\n\nNumber of training examples: 24000 , Number of validation examples: 8000 \n\nDropout:0.20, Learning rate: 0.00010 \nBatch size: 32, Number of epochs: 5 \nNumber of parameter in the model: 633378\n================================Results...==============================\n\nValidation accuracy and loss of the model on 8000 images: 79.7625 %, 1.31404\nTrain accuracy and loss of the model on 24000 images: 95.51666666666667 %, 0.21090\n----------\nValidation accuracy and loss of the model on 8000 images: 79.3125 %, 0.71353\nTrain accuracy and loss of the model on 24000 images: 95.7 %, 0.29437\n----------\nValidation accuracy and loss of the model on 8000 images: 79.975 %, 0.97653\nTrain accuracy and loss of the model on 24000 images: 95.67083333333333 %, 0.10430\n----------\nValidation accuracy and loss of the model on 8000 images: 79.4375 %, 1.69258\nTrain accuracy and loss of the model on 24000 images: 95.55 %, 0.14140\n----------\nValidation accuracy and loss of the model on 8000 images: 80.075 %, 1.34002\nTrain accuracy and loss of the model on 24000 images: 95.53333333333333 %, 0.33516\n----------\nTraining complete in 1m 57s\nBest validation Acc: 80.075000\nAverage accuracy on the validation 24000 images: 79.71249999999999\n----------\nStarting testing...\n\n----------\nAverage test accuracy on test data: 76.57 %, loss: 0.00025, Standard deviion of accuracy: 0.3669\n----------\nTesting complete in 0.0m 5.8700s\n----------\n----------\n----------\nAverage of validation accuracies on 5 different random seed: 78.69 %, Average of testing accuracies on 5 different random seed: 76.76 %\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
d022858ab38d5aa60a93f1a29210f329f36f7527
331,787
ipynb
Jupyter Notebook
expressyeaself/models/lstm/LSTM_builder.ipynb
yeastpro/expressYeaself
e7a94176f84c6b501b5ea4d76c5f82592af168ed
[ "MIT" ]
null
null
null
expressyeaself/models/lstm/LSTM_builder.ipynb
yeastpro/expressYeaself
e7a94176f84c6b501b5ea4d76c5f82592af168ed
[ "MIT" ]
null
null
null
expressyeaself/models/lstm/LSTM_builder.ipynb
yeastpro/expressYeaself
e7a94176f84c6b501b5ea4d76c5f82592af168ed
[ "MIT" ]
null
null
null
65.700396
30,312
0.581635
[ [ [ "### Import all needed package", "_____no_output_____" ] ], [ [ "import os\nimport ast\nimport numpy as np\nimport pandas as pd\nfrom keras import optimizers\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Activation, LSTM, Dropout\nfrom keras.utils import to_categorical\nfrom keras.datasets import mnist\nfrom sklearn.preprocessing import OneHotEncoder\nimport matplotlib.pyplot as plt\nfrom sklearn.model_selection import train_test_split\nfrom tensorflow.python.keras.callbacks import ModelCheckpoint, TensorBoard\nimport context\n\nbuild = context.build_promoter\nconstruct = context.construct_neural_net\nencode = context.encode_sequences\norganize = context.organize_data\n\nROOT_DIR = os.getcwd()[:os.getcwd().rfind('Express')] + 'ExpressYeaself/'\nSAVE_DIR = ROOT_DIR + 'expressyeaself/models/lstm/saved_models/'\nROOT_DIR", "Using TensorFlow backend.\n" ] ], [ [ "### Define the input data", "_____no_output_____" ], [ "#### Using the full data set", "_____no_output_____" ] ], [ [ "sample_filename = ('10000_from_20190612130111781831_percentiles_els_binarized_homogeneous_deflanked_'\n 'sequences_with_exp_levels.txt.gz')", "_____no_output_____" ] ], [ [ "#### Define the absolute path", "_____no_output_____" ] ], [ [ "sample_path = ROOT_DIR + 'example/processed_data/' + sample_filename", "_____no_output_____" ] ], [ [ "### Encode sequences", "_____no_output_____" ] ], [ [ "# Seems to give slightly better accuracy when expression level values aren't scaled.\nscale_els = False", "_____no_output_____" ], [ "X_padded, y_scaled, abs_max_el = encode.encode_sequences_with_method(sample_path, method='One-Hot', scale_els=scale_els)\nnum_seqs, max_sequence_len = organize.get_num_and_len_of_seqs_from_file(sample_path)", "_____no_output_____" ] ], [ [ "### Bulid the 3 dimensions LSTM model", "_____no_output_____" ], [ "#### Reshape encoded sequences", "_____no_output_____" ] ], [ [ "X_padded = X_padded.reshape(-1)\nX_padded = X_padded.reshape(int(num_seqs), 1, 5 * int(max_sequence_len))", "_____no_output_____" ] ], [ [ "#### Reshape expression levels", "_____no_output_____" ] ], [ [ "y_scaled = y_scaled.reshape(len(y_scaled), 1, 1)", "_____no_output_____" ] ], [ [ "#### Perform a train-test split", "_____no_output_____" ] ], [ [ "test_size = 0.25", "_____no_output_____" ], [ "X_train, X_test, y_train, y_test = train_test_split(X_padded, y_scaled, test_size=test_size)", "_____no_output_____" ] ], [ [ "#### Build the model", "_____no_output_____" ] ], [ [ "# Define the model parameters\nbatch_size = int(len(y_scaled) * 0.01) # no bigger than 1 % of data\nepochs = 50\ndropout = 0.3\nlearning_rate = 0.01\n\n# Define the checkpointer to allow saving of models\nmodel_type = 'lstm_sequential_3d_onehot'\nsave_path = SAVE_DIR + model_type + '.hdf5'\ncheckpointer = ModelCheckpoint(monitor='val_acc', \n filepath=save_path, \n verbose=1, \n save_best_only=True)\n\n# Define the model\nmodel = Sequential()\n\n# Build up the layers\n\nmodel.add(Dense(1024, kernel_initializer='uniform', input_shape=(1,5*int(max_sequence_len),)))\nmodel.add(Activation('softmax'))\nmodel.add(Dropout(dropout))\n# model.add(Dense(512, kernel_initializer='uniform', input_shape=(1,1024,)))\n# model.add(Activation('softmax'))\n# model.add(Dropout(dropout))\nmodel.add(Dense(256, kernel_initializer='uniform', input_shape=(1,512,)))\nmodel.add(Activation('softmax'))\nmodel.add(Dropout(dropout))\n# model.add(Dense(128, kernel_initializer='uniform', input_shape=(1,256,)))\n# model.add(Activation('softmax'))\n# 
model.add(Dropout(dropout))\n# model.add(Dense(64, kernel_initializer='uniform', input_shape=(1,128,)))\n# model.add(Activation('softmax'))\n# model.add(Dropout(dropout))\n# model.add(Dense(32, kernel_initializer='uniform', input_shape=(1,64,)))\n# model.add(Activation('softmax'))\n# model.add(Dropout(dropout))\n# model.add(Dense(16, kernel_initializer='uniform', input_shape=(1,32,)))\n# model.add(Activation('softmax'))\n# model.add(Dropout(dropout))\n# model.add(Dense(8, kernel_initializer='uniform', input_shape=(1,16,)))\n# model.add(Activation('softmax'))\nmodel.add(LSTM(units=1, return_sequences=True))\nsgd = optimizers.SGD(lr=learning_rate, decay=1e-6, momentum=0.9, nesterov=True)\n\n# Compile the model\nmodel.compile(loss='mse', optimizer='rmsprop', metrics=['accuracy'])\n\n# Print model summary\nprint(model.summary())\n\n\n\n# model.add(LSTM(100,input_shape=(int(max_sequence_len), 5)))\n# model.add(Dropout(dropout))\n# model.add(Dense(50, activation='sigmoid'))\n# # model.add(Dense(25, activation='sigmoid'))\n# # model.add(Dense(12, activation='sigmoid'))\n# # model.add(Dense(6, activation='sigmoid'))\n# # model.add(Dense(3, activation='sigmoid'))\n# model.add(Dense(1, activation='sigmoid'))\n\n# model.compile(loss='mse',\n#               optimizer='rmsprop',\n#               metrics=['accuracy'])\n# print(model.summary())", "_________________________________________________________________\nLayer (type)                 Output Shape              Param #   \n=================================================================\ndense_87 (Dense)             (None, 1, 1024)           410624    \n_________________________________________________________________\nactivation_69 (Activation)   (None, 1, 1024)           0         \n_________________________________________________________________\ndropout_72 (Dropout)         (None, 1, 1024)           0         \n_________________________________________________________________\ndense_88 (Dense)             (None, 1, 256)            262400    \n_________________________________________________________________\nactivation_70 (Activation)   (None, 1, 256)            0         \n_________________________________________________________________\ndropout_73 (Dropout)         (None, 1, 256)            0         \n_________________________________________________________________\nlstm_25 (LSTM)               (None, 1, 1)              1032      \n=================================================================\nTotal params: 674,056\nTrainable params: 674,056\nNon-trainable params: 0\n_________________________________________________________________\nNone\n" ] ] ], [ [ "### Fit and Evaluate the model", "_____no_output_____" ] ], [ [ "# Fit\nhistory = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1,\n                    validation_data=(X_test, y_test), callbacks=[checkpointer])\n\n\n# Evaluate\nscore = max(history.history['val_acc'])\nprint(\"%s: %.2f%%\" % (model.metrics_names[1], score*100))\nplt = construct.plot_results(history.history)\nplt.show()", "Train on 7500 samples, validate on 2500 samples\nEpoch 1/500\n7500/7500 [==============================] - 4s 594us/step - loss: 0.4805 - acc: 0.4929 - val_loss: 0.4735 - val_acc: 0.4740\n\nEpoch 00001: val_acc improved from -inf to 0.47400, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5\nEpoch 2/500\n7500/7500 [==============================] - 2s 252us/step - loss: 0.4334 - acc: 0.4929 - val_loss: 0.4249 - val_acc: 0.4740\n\nEpoch 00002: val_acc did not improve from 0.47400\nEpoch 3/500\n7500/7500 [==============================] - 2s 283us/step - loss: 0.3886 - acc: 0.4929 - val_loss: 0.3805 - val_acc: 0.4740\n\nEpoch 00003: val_acc did not 
improve from 0.47400\nEpoch 4/500\n7500/7500 [==============================] - 2s 263us/step - loss: 0.3497 - acc: 0.4929 - val_loss: 0.3427 - val_acc: 0.4740\n\nEpoch 00004: val_acc did not improve from 0.47400\nEpoch 5/500\n7500/7500 [==============================] - 2s 291us/step - loss: 0.3178 - acc: 0.4929 - val_loss: 0.3123 - val_acc: 0.4740\n\nEpoch 00005: val_acc did not improve from 0.47400\nEpoch 6/500\n7500/7500 [==============================] - 2s 281us/step - loss: 0.2933 - acc: 0.4929 - val_loss: 0.2895 - val_acc: 0.4740\n\nEpoch 00006: val_acc did not improve from 0.47400\nEpoch 7/500\n7500/7500 [==============================] - 2s 281us/step - loss: 0.2750 - acc: 0.4929 - val_loss: 0.2725 - val_acc: 0.4740\n\nEpoch 00007: val_acc did not improve from 0.47400\nEpoch 8/500\n7500/7500 [==============================] - 2s 295us/step - loss: 0.2625 - acc: 0.4929 - val_loss: 0.2615 - val_acc: 0.4740\n\nEpoch 00008: val_acc did not improve from 0.47400\nEpoch 9/500\n7500/7500 [==============================] - 2s 240us/step - loss: 0.2555 - acc: 0.4929 - val_loss: 0.2549 - val_acc: 0.4740\n\nEpoch 00009: val_acc did not improve from 0.47400\nEpoch 10/500\n7500/7500 [==============================] - 2s 250us/step - loss: 0.2520 - acc: 0.4923 - val_loss: 0.2518 - val_acc: 0.4740\n\nEpoch 00010: val_acc did not improve from 0.47400\nEpoch 11/500\n7500/7500 [==============================] - 2s 225us/step - loss: 0.2506 - acc: 0.4968 - val_loss: 0.2504 - val_acc: 0.4740\n\nEpoch 00011: val_acc did not improve from 0.47400\nEpoch 12/500\n7500/7500 [==============================] - 2s 247us/step - loss: 0.2500 - acc: 0.5005 - val_loss: 0.2500 - val_acc: 0.5260\n\nEpoch 00012: val_acc improved from 0.47400 to 0.52600, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5\nEpoch 13/500\n7500/7500 [==============================] - 2s 263us/step - loss: 0.2499 - acc: 0.5023 - val_loss: 0.2498 - val_acc: 0.5260\n\nEpoch 00013: val_acc did not improve from 0.52600\nEpoch 14/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.2503 - acc: 0.4987 - val_loss: 0.2497 - val_acc: 0.5260\n\nEpoch 00014: val_acc did not improve from 0.52600\nEpoch 15/500\n7500/7500 [==============================] - 2s 281us/step - loss: 0.2502 - acc: 0.4951 - val_loss: 0.2497 - val_acc: 0.5260\n\nEpoch 00015: val_acc did not improve from 0.52600\nEpoch 16/500\n7500/7500 [==============================] - 2s 221us/step - loss: 0.2500 - acc: 0.5064 - val_loss: 0.2497 - val_acc: 0.5260\n\nEpoch 00016: val_acc did not improve from 0.52600\nEpoch 17/500\n7500/7500 [==============================] - 2s 214us/step - loss: 0.2502 - acc: 0.5005 - val_loss: 0.2497 - val_acc: 0.5260\n\nEpoch 00017: val_acc did not improve from 0.52600\nEpoch 18/500\n7500/7500 [==============================] - 2s 214us/step - loss: 0.2501 - acc: 0.5081 - val_loss: 0.2497 - val_acc: 0.5260\n\nEpoch 00018: val_acc did not improve from 0.52600\nEpoch 19/500\n7500/7500 [==============================] - 2s 217us/step - loss: 0.2499 - acc: 0.5017 - val_loss: 0.2497 - val_acc: 0.5260\n\nEpoch 00019: val_acc did not improve from 0.52600\nEpoch 20/500\n7500/7500 [==============================] - 2s 213us/step - loss: 0.2504 - acc: 0.4905 - val_loss: 0.2496 - val_acc: 0.5260\n\nEpoch 00020: val_acc did not improve from 0.52600\nEpoch 21/500\n7500/7500 [==============================] - 2s 215us/step - loss: 0.2497 - acc: 0.5159 - val_loss: 0.2493 - 
val_acc: 0.5260\n\nEpoch 00021: val_acc did not improve from 0.52600\nEpoch 22/500\n7500/7500 [==============================] - 2s 214us/step - loss: 0.2497 - acc: 0.5107 - val_loss: 0.2491 - val_acc: 0.5260\n\nEpoch 00022: val_acc did not improve from 0.52600\nEpoch 23/500\n7500/7500 [==============================] - 2s 213us/step - loss: 0.2491 - acc: 0.5211 - val_loss: 0.2486 - val_acc: 0.5260\n\nEpoch 00023: val_acc did not improve from 0.52600\nEpoch 24/500\n7500/7500 [==============================] - 2s 258us/step - loss: 0.2485 - acc: 0.5396 - val_loss: 0.2478 - val_acc: 0.5260\n\nEpoch 00024: val_acc did not improve from 0.52600\nEpoch 25/500\n7500/7500 [==============================] - 2s 264us/step - loss: 0.2474 - acc: 0.5551 - val_loss: 0.2466 - val_acc: 0.6284\n\nEpoch 00025: val_acc improved from 0.52600 to 0.62840, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5\nEpoch 26/500\n7500/7500 [==============================] - 2s 236us/step - loss: 0.2456 - acc: 0.5913 - val_loss: 0.2449 - val_acc: 0.7040\n\nEpoch 00026: val_acc improved from 0.62840 to 0.70400, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5\nEpoch 27/500\n7500/7500 [==============================] - 2s 242us/step - loss: 0.2435 - acc: 0.6288 - val_loss: 0.2425 - val_acc: 0.7100\n\nEpoch 00027: val_acc improved from 0.70400 to 0.71000, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5\nEpoch 28/500\n7500/7500 [==============================] - 2s 259us/step - loss: 0.2404 - acc: 0.6519 - val_loss: 0.2394 - val_acc: 0.7168\n\nEpoch 00028: val_acc improved from 0.71000 to 0.71680, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5\nEpoch 29/500\n7500/7500 [==============================] - 2s 245us/step - loss: 0.2368 - acc: 0.6575 - val_loss: 0.2355 - val_acc: 0.7160\n\nEpoch 00029: val_acc did not improve from 0.71680\nEpoch 30/500\n7500/7500 [==============================] - 2s 231us/step - loss: 0.2324 - acc: 0.6643 - val_loss: 0.2312 - val_acc: 0.7220\n\nEpoch 00030: val_acc improved from 0.71680 to 0.72200, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5\nEpoch 31/500\n7500/7500 [==============================] - 2s 247us/step - loss: 0.2273 - acc: 0.6667 - val_loss: 0.2258 - val_acc: 0.7208\n\nEpoch 00031: val_acc did not improve from 0.72200\nEpoch 32/500\n7500/7500 [==============================] - 2s 234us/step - loss: 0.2228 - acc: 0.6655 - val_loss: 0.2205 - val_acc: 0.7228\n\nEpoch 00032: val_acc improved from 0.72200 to 0.72280, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5\nEpoch 33/500\n7500/7500 [==============================] - 2s 245us/step - loss: 0.2178 - acc: 0.6604 - val_loss: 0.2146 - val_acc: 0.7216\n\nEpoch 00033: val_acc did not improve from 0.72280\nEpoch 34/500\n7500/7500 [==============================] - 2s 219us/step - loss: 0.2124 - acc: 0.6667 - val_loss: 0.2095 - val_acc: 0.7196\n\nEpoch 00034: val_acc did not improve from 0.72280\nEpoch 35/500\n7500/7500 [==============================] - 2s 222us/step - loss: 0.2074 - acc: 0.6747 - val_loss: 0.2049 - val_acc: 0.7208\n\nEpoch 00035: val_acc did not 
improve from 0.72280\nEpoch 36/500\n7500/7500 [==============================] - 2s 245us/step - loss: 0.2043 - acc: 0.6691 - val_loss: 0.2012 - val_acc: 0.7232\n\nEpoch 00036: val_acc improved from 0.72280 to 0.72320, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5\nEpoch 37/500\n7500/7500 [==============================] - 2s 253us/step - loss: 0.2041 - acc: 0.6649 - val_loss: 0.1989 - val_acc: 0.7212\n\nEpoch 00037: val_acc did not improve from 0.72320\nEpoch 38/500\n7500/7500 [==============================] - 2s 227us/step - loss: 0.2022 - acc: 0.6648 - val_loss: 0.1971 - val_acc: 0.7224\n\nEpoch 00038: val_acc did not improve from 0.72320\nEpoch 39/500\n7500/7500 [==============================] - 2s 240us/step - loss: 0.2037 - acc: 0.6533 - val_loss: 0.1962 - val_acc: 0.7212\n\nEpoch 00039: val_acc did not improve from 0.72320\nEpoch 40/500\n7500/7500 [==============================] - 2s 234us/step - loss: 0.1982 - acc: 0.6713 - val_loss: 0.1954 - val_acc: 0.7180\n\nEpoch 00040: val_acc did not improve from 0.72320\nEpoch 41/500\n7500/7500 [==============================] - 2s 226us/step - loss: 0.2015 - acc: 0.6603 - val_loss: 0.1952 - val_acc: 0.7196\n\nEpoch 00041: val_acc did not improve from 0.72320\nEpoch 42/500\n7500/7500 [==============================] - 2s 217us/step - loss: 0.2021 - acc: 0.6643 - val_loss: 0.1951 - val_acc: 0.7192\n\nEpoch 00042: val_acc did not improve from 0.72320\nEpoch 43/500\n7500/7500 [==============================] - 2s 224us/step - loss: 0.2004 - acc: 0.6581 - val_loss: 0.1956 - val_acc: 0.7176\n\nEpoch 00043: val_acc did not improve from 0.72320\nEpoch 44/500\n7500/7500 [==============================] - 2s 244us/step - loss: 0.2001 - acc: 0.6664 - val_loss: 0.1946 - val_acc: 0.7196\n\nEpoch 00044: val_acc did not improve from 0.72320\nEpoch 45/500\n7500/7500 [==============================] - 2s 239us/step - loss: 0.2017 - acc: 0.6600 - val_loss: 0.1946 - val_acc: 0.7172\n\nEpoch 00045: val_acc did not improve from 0.72320\nEpoch 46/500\n7500/7500 [==============================] - 2s 220us/step - loss: 0.2000 - acc: 0.6664 - val_loss: 0.1943 - val_acc: 0.7180\n\nEpoch 00046: val_acc did not improve from 0.72320\nEpoch 47/500\n7500/7500 [==============================] - 2s 226us/step - loss: 0.2006 - acc: 0.6607 - val_loss: 0.1943 - val_acc: 0.7184\n\nEpoch 00047: val_acc did not improve from 0.72320\nEpoch 48/500\n7500/7500 [==============================] - 2s 259us/step - loss: 0.2004 - acc: 0.6565 - val_loss: 0.1944 - val_acc: 0.7192\n\nEpoch 00048: val_acc did not improve from 0.72320\nEpoch 49/500\n7500/7500 [==============================] - 2s 253us/step - loss: 0.1986 - acc: 0.6668 - val_loss: 0.1944 - val_acc: 0.7188\n\nEpoch 00049: val_acc did not improve from 0.72320\nEpoch 50/500\n7500/7500 [==============================] - 2s 276us/step - loss: 0.1977 - acc: 0.6681 - val_loss: 0.1942 - val_acc: 0.7212\n\nEpoch 00050: val_acc did not improve from 0.72320\nEpoch 51/500\n7500/7500 [==============================] - 2s 295us/step - loss: 0.1985 - acc: 0.6689 - val_loss: 0.1944 - val_acc: 0.7188\n\nEpoch 00051: val_acc did not improve from 0.72320\nEpoch 52/500\n7500/7500 [==============================] - 2s 281us/step - loss: 0.1985 - acc: 0.6636 - val_loss: 0.1943 - val_acc: 0.7212\n\nEpoch 00052: val_acc did not improve from 0.72320\nEpoch 53/500\n7500/7500 [==============================] - 2s 254us/step - loss: 0.1988 - acc: 0.6692 - val_loss: 
0.1942 - val_acc: 0.7192\n\nEpoch 00053: val_acc did not improve from 0.72320\nEpoch 54/500\n7500/7500 [==============================] - 2s 275us/step - loss: 0.1969 - acc: 0.6717 - val_loss: 0.1941 - val_acc: 0.7176\n\nEpoch 00054: val_acc did not improve from 0.72320\nEpoch 55/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1967 - acc: 0.6757 - val_loss: 0.1941 - val_acc: 0.7200\n\nEpoch 00055: val_acc did not improve from 0.72320\nEpoch 56/500\n7500/7500 [==============================] - 2s 240us/step - loss: 0.1969 - acc: 0.6785 - val_loss: 0.1941 - val_acc: 0.7172\n\nEpoch 00056: val_acc did not improve from 0.72320\nEpoch 57/500\n7500/7500 [==============================] - 2s 244us/step - loss: 0.1966 - acc: 0.6799 - val_loss: 0.1933 - val_acc: 0.7180\n\nEpoch 00057: val_acc did not improve from 0.72320\nEpoch 58/500\n7500/7500 [==============================] - 2s 254us/step - loss: 0.1947 - acc: 0.6933 - val_loss: 0.1930 - val_acc: 0.7188\n\nEpoch 00058: val_acc did not improve from 0.72320\nEpoch 59/500\n7500/7500 [==============================] - 2s 241us/step - loss: 0.1942 - acc: 0.6880 - val_loss: 0.1933 - val_acc: 0.7204\n\nEpoch 00059: val_acc did not improve from 0.72320\nEpoch 60/500\n7500/7500 [==============================] - 2s 248us/step - loss: 0.1936 - acc: 0.6952 - val_loss: 0.1936 - val_acc: 0.7248\n\nEpoch 00060: val_acc improved from 0.72320 to 0.72480, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_3d_onehot.hdf5\nEpoch 61/500\n7500/7500 [==============================] - 2s 255us/step - loss: 0.1913 - acc: 0.7027 - val_loss: 0.1932 - val_acc: 0.7216\n\nEpoch 00061: val_acc did not improve from 0.72480\nEpoch 62/500\n7500/7500 [==============================] - 2s 276us/step - loss: 0.1934 - acc: 0.6943 - val_loss: 0.1934 - val_acc: 0.7164\n\nEpoch 00062: val_acc did not improve from 0.72480\nEpoch 63/500\n7500/7500 [==============================] - 2s 234us/step - loss: 0.1897 - acc: 0.7065 - val_loss: 0.1929 - val_acc: 0.7156\n\nEpoch 00063: val_acc did not improve from 0.72480\nEpoch 64/500\n7500/7500 [==============================] - 2s 236us/step - loss: 0.1911 - acc: 0.7065 - val_loss: 0.1926 - val_acc: 0.7204\n\nEpoch 00064: val_acc did not improve from 0.72480\nEpoch 65/500\n7500/7500 [==============================] - 2s 236us/step - loss: 0.1914 - acc: 0.7072 - val_loss: 0.1925 - val_acc: 0.7208\n\nEpoch 00065: val_acc did not improve from 0.72480\nEpoch 66/500\n7500/7500 [==============================] - 2s 244us/step - loss: 0.1902 - acc: 0.7104 - val_loss: 0.1932 - val_acc: 0.7176\n\nEpoch 00066: val_acc did not improve from 0.72480\nEpoch 67/500\n7500/7500 [==============================] - 2s 252us/step - loss: 0.1928 - acc: 0.7051 - val_loss: 0.1937 - val_acc: 0.7196\n\nEpoch 00067: val_acc did not improve from 0.72480\nEpoch 68/500\n7500/7500 [==============================] - 2s 229us/step - loss: 0.1933 - acc: 0.7027 - val_loss: 0.1930 - val_acc: 0.7208\n\nEpoch 00068: val_acc did not improve from 0.72480\nEpoch 69/500\n7500/7500 [==============================] - 2s 229us/step - loss: 0.1893 - acc: 0.7131 - val_loss: 0.1931 - val_acc: 0.7196\n\nEpoch 00069: val_acc did not improve from 0.72480\nEpoch 70/500\n7500/7500 [==============================] - 2s 230us/step - loss: 0.1910 - acc: 0.7108 - val_loss: 0.1935 - val_acc: 0.7188\n\nEpoch 00070: val_acc did not improve from 0.72480\nEpoch 71/500\n7500/7500 [==============================] - 
2s 233us/step - loss: 0.1881 - acc: 0.7227 - val_loss: 0.1933 - val_acc: 0.7160\n\nEpoch 00071: val_acc did not improve from 0.72480\nEpoch 72/500\n7500/7500 [==============================] - 2s 248us/step - loss: 0.1895 - acc: 0.7092 - val_loss: 0.1933 - val_acc: 0.7176\n\nEpoch 00072: val_acc did not improve from 0.72480\nEpoch 73/500\n7500/7500 [==============================] - 2s 227us/step - loss: 0.1893 - acc: 0.7139 - val_loss: 0.1935 - val_acc: 0.7164\n\nEpoch 00073: val_acc did not improve from 0.72480\nEpoch 74/500\n7500/7500 [==============================] - 2s 247us/step - loss: 0.1877 - acc: 0.7148 - val_loss: 0.1929 - val_acc: 0.7160\n\nEpoch 00074: val_acc did not improve from 0.72480\nEpoch 75/500\n7500/7500 [==============================] - 2s 246us/step - loss: 0.1902 - acc: 0.7151 - val_loss: 0.1933 - val_acc: 0.7180\n\nEpoch 00075: val_acc did not improve from 0.72480\nEpoch 76/500\n7500/7500 [==============================] - 2s 223us/step - loss: 0.1899 - acc: 0.7136 - val_loss: 0.1931 - val_acc: 0.7176\n\nEpoch 00076: val_acc did not improve from 0.72480\nEpoch 77/500\n7500/7500 [==============================] - 2s 253us/step - loss: 0.1880 - acc: 0.7207 - val_loss: 0.1943 - val_acc: 0.7144\n\nEpoch 00077: val_acc did not improve from 0.72480\nEpoch 78/500\n7500/7500 [==============================] - 2s 245us/step - loss: 0.1894 - acc: 0.7212 - val_loss: 0.1936 - val_acc: 0.7172\n\nEpoch 00078: val_acc did not improve from 0.72480\nEpoch 79/500\n7500/7500 [==============================] - 2s 236us/step - loss: 0.1871 - acc: 0.7185 - val_loss: 0.1936 - val_acc: 0.7156\n\nEpoch 00079: val_acc did not improve from 0.72480\nEpoch 80/500\n7500/7500 [==============================] - 2s 236us/step - loss: 0.1892 - acc: 0.7160 - val_loss: 0.1937 - val_acc: 0.7148\n\nEpoch 00080: val_acc did not improve from 0.72480\nEpoch 81/500\n7500/7500 [==============================] - 2s 234us/step - loss: 0.1876 - acc: 0.7256 - val_loss: 0.1936 - val_acc: 0.7132\n\nEpoch 00081: val_acc did not improve from 0.72480\nEpoch 82/500\n7500/7500 [==============================] - 2s 227us/step - loss: 0.1879 - acc: 0.7249 - val_loss: 0.1938 - val_acc: 0.7192\n\nEpoch 00082: val_acc did not improve from 0.72480\nEpoch 83/500\n7500/7500 [==============================] - 2s 235us/step - loss: 0.1903 - acc: 0.7188 - val_loss: 0.1939 - val_acc: 0.7172\n\nEpoch 00083: val_acc did not improve from 0.72480\nEpoch 84/500\n7500/7500 [==============================] - 2s 246us/step - loss: 0.1873 - acc: 0.7216 - val_loss: 0.1935 - val_acc: 0.7160\n\nEpoch 00084: val_acc did not improve from 0.72480\nEpoch 85/500\n7500/7500 [==============================] - 2s 226us/step - loss: 0.1903 - acc: 0.7247 - val_loss: 0.1937 - val_acc: 0.7200\n\nEpoch 00085: val_acc did not improve from 0.72480\nEpoch 86/500\n7500/7500 [==============================] - 2s 248us/step - loss: 0.1903 - acc: 0.7229 - val_loss: 0.1938 - val_acc: 0.7168\n\nEpoch 00086: val_acc did not improve from 0.72480\nEpoch 87/500\n7500/7500 [==============================] - 2s 259us/step - loss: 0.1889 - acc: 0.7256 - val_loss: 0.1938 - val_acc: 0.7148\n\nEpoch 00087: val_acc did not improve from 0.72480\nEpoch 88/500\n7500/7500 [==============================] - 2s 261us/step - loss: 0.1884 - acc: 0.7219 - val_loss: 0.1941 - val_acc: 0.7180\n\nEpoch 00088: val_acc did not improve from 0.72480\nEpoch 89/500\n7500/7500 [==============================] - 2s 236us/step - loss: 0.1874 - acc: 0.7259 - val_loss: 0.1940 - val_acc: 
0.7156\n\nEpoch 00089: val_acc did not improve from 0.72480\nEpoch 90/500\n7500/7500 [==============================] - 2s 237us/step - loss: 0.1885 - acc: 0.7272 - val_loss: 0.1938 - val_acc: 0.7184\n\nEpoch 00090: val_acc did not improve from 0.72480\nEpoch 91/500\n7500/7500 [==============================] - 2s 240us/step - loss: 0.1876 - acc: 0.7280 - val_loss: 0.1939 - val_acc: 0.7192\n\nEpoch 00091: val_acc did not improve from 0.72480\nEpoch 92/500\n7500/7500 [==============================] - 2s 229us/step - loss: 0.1865 - acc: 0.7309 - val_loss: 0.1936 - val_acc: 0.7204\n\nEpoch 00092: val_acc did not improve from 0.72480\nEpoch 93/500\n7500/7500 [==============================] - 2s 221us/step - loss: 0.1850 - acc: 0.7353 - val_loss: 0.1939 - val_acc: 0.7120\n\nEpoch 00093: val_acc did not improve from 0.72480\nEpoch 94/500\n7500/7500 [==============================] - 2s 219us/step - loss: 0.1878 - acc: 0.7281 - val_loss: 0.1938 - val_acc: 0.7172\n\nEpoch 00094: val_acc did not improve from 0.72480\nEpoch 95/500\n7500/7500 [==============================] - 2s 220us/step - loss: 0.1863 - acc: 0.7312 - val_loss: 0.1938 - val_acc: 0.7164\n\nEpoch 00095: val_acc did not improve from 0.72480\nEpoch 96/500\n7500/7500 [==============================] - 2s 235us/step - loss: 0.1889 - acc: 0.7267 - val_loss: 0.1938 - val_acc: 0.7176\n\nEpoch 00096: val_acc did not improve from 0.72480\nEpoch 97/500\n7500/7500 [==============================] - 2s 236us/step - loss: 0.1847 - acc: 0.7335 - val_loss: 0.1937 - val_acc: 0.7160\n\nEpoch 00097: val_acc did not improve from 0.72480\nEpoch 98/500\n7500/7500 [==============================] - 2s 231us/step - loss: 0.1850 - acc: 0.7331 - val_loss: 0.1938 - val_acc: 0.7204\n\nEpoch 00098: val_acc did not improve from 0.72480\nEpoch 99/500\n7500/7500 [==============================] - 2s 238us/step - loss: 0.1850 - acc: 0.7289 - val_loss: 0.1939 - val_acc: 0.7212\n\nEpoch 00099: val_acc did not improve from 0.72480\nEpoch 100/500\n7500/7500 [==============================] - 2s 238us/step - loss: 0.1850 - acc: 0.7371 - val_loss: 0.1937 - val_acc: 0.7184\n\nEpoch 00100: val_acc did not improve from 0.72480\nEpoch 101/500\n7500/7500 [==============================] - 2s 230us/step - loss: 0.1860 - acc: 0.7321 - val_loss: 0.1940 - val_acc: 0.7192\n\nEpoch 00101: val_acc did not improve from 0.72480\nEpoch 102/500\n7500/7500 [==============================] - 2s 230us/step - loss: 0.1858 - acc: 0.7388 - val_loss: 0.1940 - val_acc: 0.7156\n\nEpoch 00102: val_acc did not improve from 0.72480\nEpoch 103/500\n7500/7500 [==============================] - 2s 236us/step - loss: 0.1847 - acc: 0.7345 - val_loss: 0.1942 - val_acc: 0.7184\n\nEpoch 00103: val_acc did not improve from 0.72480\nEpoch 104/500\n7500/7500 [==============================] - 2s 231us/step - loss: 0.1842 - acc: 0.7397 - val_loss: 0.1942 - val_acc: 0.7184\n\nEpoch 00104: val_acc did not improve from 0.72480\nEpoch 105/500\n7500/7500 [==============================] - 2s 230us/step - loss: 0.1843 - acc: 0.7343 - val_loss: 0.1944 - val_acc: 0.7184\n\nEpoch 00105: val_acc did not improve from 0.72480\nEpoch 106/500\n7500/7500 [==============================] - 2s 236us/step - loss: 0.1852 - acc: 0.7363 - val_loss: 0.1942 - val_acc: 0.7148\n\nEpoch 00106: val_acc did not improve from 0.72480\nEpoch 107/500\n7500/7500 [==============================] - 2s 230us/step - loss: 0.1831 - acc: 0.7353 - val_loss: 0.1950 - val_acc: 0.7192\n\nEpoch 00107: val_acc did not improve from 0.72480\nEpoch 
108/500\n7500/7500 [==============================] - 2s 232us/step - loss: 0.1845 - acc: 0.7384 - val_loss: 0.1943 - val_acc: 0.7168\n\nEpoch 00108: val_acc did not improve from 0.72480\nEpoch 109/500\n7500/7500 [==============================] - 2s 231us/step - loss: 0.1855 - acc: 0.7385 - val_loss: 0.1945 - val_acc: 0.7192\n\nEpoch 00109: val_acc did not improve from 0.72480\nEpoch 110/500\n7500/7500 [==============================] - 2s 230us/step - loss: 0.1833 - acc: 0.7376 - val_loss: 0.1943 - val_acc: 0.7176\n\nEpoch 00110: val_acc did not improve from 0.72480\nEpoch 111/500\n7500/7500 [==============================] - 2s 236us/step - loss: 0.1848 - acc: 0.7385 - val_loss: 0.1950 - val_acc: 0.7188\n\nEpoch 00111: val_acc did not improve from 0.72480\nEpoch 112/500\n7500/7500 [==============================] - 2s 233us/step - loss: 0.1842 - acc: 0.7401 - val_loss: 0.1947 - val_acc: 0.7168\n\nEpoch 00112: val_acc did not improve from 0.72480\nEpoch 113/500\n7500/7500 [==============================] - 2s 233us/step - loss: 0.1839 - acc: 0.7416 - val_loss: 0.1948 - val_acc: 0.7176\n\nEpoch 00113: val_acc did not improve from 0.72480\nEpoch 114/500\n7500/7500 [==============================] - 2s 235us/step - loss: 0.1832 - acc: 0.7441 - val_loss: 0.1947 - val_acc: 0.7176\n\nEpoch 00114: val_acc did not improve from 0.72480\nEpoch 115/500\n7500/7500 [==============================] - 2s 234us/step - loss: 0.1850 - acc: 0.7363 - val_loss: 0.1947 - val_acc: 0.7180\n\nEpoch 00115: val_acc did not improve from 0.72480\nEpoch 116/500\n7500/7500 [==============================] - 2s 233us/step - loss: 0.1837 - acc: 0.7436 - val_loss: 0.1948 - val_acc: 0.7184\n\nEpoch 00116: val_acc did not improve from 0.72480\nEpoch 117/500\n7500/7500 [==============================] - 2s 245us/step - loss: 0.1838 - acc: 0.7408 - val_loss: 0.1950 - val_acc: 0.7184\n\nEpoch 00117: val_acc did not improve from 0.72480\nEpoch 118/500\n7500/7500 [==============================] - 2s 233us/step - loss: 0.1823 - acc: 0.7420 - val_loss: 0.1949 - val_acc: 0.7176\n\nEpoch 00118: val_acc did not improve from 0.72480\nEpoch 119/500\n7500/7500 [==============================] - 2s 233us/step - loss: 0.1815 - acc: 0.7457 - val_loss: 0.1953 - val_acc: 0.7172\n\nEpoch 00119: val_acc did not improve from 0.72480\nEpoch 120/500\n7500/7500 [==============================] - 2s 237us/step - loss: 0.1830 - acc: 0.7444 - val_loss: 0.1950 - val_acc: 0.7176\n\nEpoch 00120: val_acc did not improve from 0.72480\nEpoch 121/500\n7500/7500 [==============================] - 2s 233us/step - loss: 0.1820 - acc: 0.7460 - val_loss: 0.1956 - val_acc: 0.7188\n\nEpoch 00121: val_acc did not improve from 0.72480\nEpoch 122/500\n7500/7500 [==============================] - 2s 236us/step - loss: 0.1831 - acc: 0.7445 - val_loss: 0.1956 - val_acc: 0.7188\n\nEpoch 00122: val_acc did not improve from 0.72480\nEpoch 123/500\n7500/7500 [==============================] - 2s 236us/step - loss: 0.1834 - acc: 0.7404 - val_loss: 0.1952 - val_acc: 0.7212\n\nEpoch 00123: val_acc did not improve from 0.72480\nEpoch 124/500\n7500/7500 [==============================] - 2s 233us/step - loss: 0.1840 - acc: 0.7419 - val_loss: 0.1958 - val_acc: 0.7220\n\nEpoch 00124: val_acc did not improve from 0.72480\nEpoch 125/500\n7500/7500 [==============================] - 2s 235us/step - loss: 0.1808 - acc: 0.7491 - val_loss: 0.1961 - val_acc: 0.7216\n\nEpoch 00125: val_acc did not improve from 0.72480\nEpoch 126/500\n7500/7500 [==============================] - 2s 
237us/step - loss: 0.1805 - acc: 0.7465 - val_loss: 0.1959 - val_acc: 0.7220\n\nEpoch 00126: val_acc did not improve from 0.72480\nEpoch 127/500\n7500/7500 [==============================] - 2s 231us/step - loss: 0.1829 - acc: 0.7463 - val_loss: 0.1952 - val_acc: 0.7216\n\nEpoch 00127: val_acc did not improve from 0.72480\nEpoch 128/500\n7500/7500 [==============================] - 2s 239us/step - loss: 0.1815 - acc: 0.7469 - val_loss: 0.1957 - val_acc: 0.7212\n\nEpoch 00128: val_acc did not improve from 0.72480\nEpoch 129/500\n7500/7500 [==============================] - 2s 232us/step - loss: 0.1808 - acc: 0.7471 - val_loss: 0.1960 - val_acc: 0.7196\n\nEpoch 00129: val_acc did not improve from 0.72480\nEpoch 130/500\n7500/7500 [==============================] - 2s 237us/step - loss: 0.1815 - acc: 0.7443 - val_loss: 0.1964 - val_acc: 0.7212\n\nEpoch 00130: val_acc did not improve from 0.72480\nEpoch 131/500\n7500/7500 [==============================] - 2s 235us/step - loss: 0.1803 - acc: 0.7536 - val_loss: 0.1970 - val_acc: 0.7208\n\nEpoch 00131: val_acc did not improve from 0.72480\nEpoch 132/500\n7500/7500 [==============================] - 2s 235us/step - loss: 0.1807 - acc: 0.7520 - val_loss: 0.1968 - val_acc: 0.7216\n\nEpoch 00132: val_acc did not improve from 0.72480\nEpoch 133/500\n7500/7500 [==============================] - 2s 235us/step - loss: 0.1815 - acc: 0.7456 - val_loss: 0.1976 - val_acc: 0.7172\n\nEpoch 00133: val_acc did not improve from 0.72480\nEpoch 134/500\n7500/7500 [==============================] - 2s 249us/step - loss: 0.1824 - acc: 0.7455 - val_loss: 0.1961 - val_acc: 0.7200\n\nEpoch 00134: val_acc did not improve from 0.72480\nEpoch 135/500\n7500/7500 [==============================] - 2s 237us/step - loss: 0.1798 - acc: 0.7511 - val_loss: 0.1962 - val_acc: 0.7220\n\nEpoch 00135: val_acc did not improve from 0.72480\nEpoch 136/500\n7500/7500 [==============================] - 2s 235us/step - loss: 0.1804 - acc: 0.7505 - val_loss: 0.1966 - val_acc: 0.7208\n\nEpoch 00136: val_acc did not improve from 0.72480\nEpoch 137/500\n7500/7500 [==============================] - 2s 237us/step - loss: 0.1807 - acc: 0.7475 - val_loss: 0.1970 - val_acc: 0.7192\n\nEpoch 00137: val_acc did not improve from 0.72480\nEpoch 138/500\n7500/7500 [==============================] - 2s 236us/step - loss: 0.1796 - acc: 0.7529 - val_loss: 0.1987 - val_acc: 0.7216\n\nEpoch 00138: val_acc did not improve from 0.72480\nEpoch 139/500\n7500/7500 [==============================] - 2s 239us/step - loss: 0.1810 - acc: 0.7476 - val_loss: 0.1981 - val_acc: 0.7204\n\nEpoch 00139: val_acc did not improve from 0.72480\nEpoch 140/500\n7500/7500 [==============================] - 2s 239us/step - loss: 0.1807 - acc: 0.7505 - val_loss: 0.1982 - val_acc: 0.7196\n\nEpoch 00140: val_acc did not improve from 0.72480\nEpoch 141/500\n7500/7500 [==============================] - 2s 236us/step - loss: 0.1783 - acc: 0.7600 - val_loss: 0.1978 - val_acc: 0.7168\n\nEpoch 00141: val_acc did not improve from 0.72480\nEpoch 142/500\n7500/7500 [==============================] - 2s 237us/step - loss: 0.1788 - acc: 0.7532 - val_loss: 0.1982 - val_acc: 0.7168\n\nEpoch 00142: val_acc did not improve from 0.72480\nEpoch 143/500\n7500/7500 [==============================] - 2s 238us/step - loss: 0.1793 - acc: 0.7540 - val_loss: 0.1992 - val_acc: 0.7176\n\nEpoch 00143: val_acc did not improve from 0.72480\nEpoch 144/500\n7500/7500 [==============================] - 2s 234us/step - loss: 0.1785 - acc: 0.7532 - val_loss: 0.1987 - 
val_acc: 0.7168\n\nEpoch 00144: val_acc did not improve from 0.72480\nEpoch 145/500\n7500/7500 [==============================] - 2s 242us/step - loss: 0.1814 - acc: 0.7477 - val_loss: 0.1981 - val_acc: 0.7192\n\nEpoch 00145: val_acc did not improve from 0.72480\nEpoch 146/500\n7500/7500 [==============================] - 2s 249us/step - loss: 0.1769 - acc: 0.7605 - val_loss: 0.1989 - val_acc: 0.7192\n\nEpoch 00146: val_acc did not improve from 0.72480\nEpoch 147/500\n7500/7500 [==============================] - 2s 238us/step - loss: 0.1777 - acc: 0.7547 - val_loss: 0.1992 - val_acc: 0.7172\n\nEpoch 00147: val_acc did not improve from 0.72480\nEpoch 148/500\n7500/7500 [==============================] - 2s 239us/step - loss: 0.1806 - acc: 0.7521 - val_loss: 0.1994 - val_acc: 0.7168\n\nEpoch 00148: val_acc did not improve from 0.72480\nEpoch 149/500\n7500/7500 [==============================] - 2s 234us/step - loss: 0.1777 - acc: 0.7611 - val_loss: 0.1997 - val_acc: 0.7192\n\nEpoch 00149: val_acc did not improve from 0.72480\nEpoch 150/500\n7500/7500 [==============================] - 2s 234us/step - loss: 0.1760 - acc: 0.7571 - val_loss: 0.1998 - val_acc: 0.7156\n\nEpoch 00150: val_acc did not improve from 0.72480\nEpoch 151/500\n7500/7500 [==============================] - 2s 238us/step - loss: 0.1777 - acc: 0.7573 - val_loss: 0.2003 - val_acc: 0.7188\n\nEpoch 00151: val_acc did not improve from 0.72480\nEpoch 152/500\n7500/7500 [==============================] - 2s 239us/step - loss: 0.1774 - acc: 0.7567 - val_loss: 0.2008 - val_acc: 0.7176\n\nEpoch 00152: val_acc did not improve from 0.72480\nEpoch 153/500\n7500/7500 [==============================] - 2s 237us/step - loss: 0.1778 - acc: 0.7552 - val_loss: 0.2027 - val_acc: 0.7160\n\nEpoch 00153: val_acc did not improve from 0.72480\nEpoch 154/500\n7500/7500 [==============================] - 2s 241us/step - loss: 0.1773 - acc: 0.7545 - val_loss: 0.2007 - val_acc: 0.7160\n\nEpoch 00154: val_acc did not improve from 0.72480\nEpoch 155/500\n7500/7500 [==============================] - 2s 240us/step - loss: 0.1754 - acc: 0.7615 - val_loss: 0.2003 - val_acc: 0.7180\n\nEpoch 00155: val_acc did not improve from 0.72480\nEpoch 156/500\n7500/7500 [==============================] - 2s 241us/step - loss: 0.1768 - acc: 0.7595 - val_loss: 0.2037 - val_acc: 0.7156\n\nEpoch 00156: val_acc did not improve from 0.72480\nEpoch 157/500\n7500/7500 [==============================] - 2s 237us/step - loss: 0.1763 - acc: 0.7585 - val_loss: 0.2012 - val_acc: 0.7160\n\nEpoch 00157: val_acc did not improve from 0.72480\nEpoch 158/500\n7500/7500 [==============================] - 2s 246us/step - loss: 0.1766 - acc: 0.7556 - val_loss: 0.2003 - val_acc: 0.7136\n\nEpoch 00158: val_acc did not improve from 0.72480\nEpoch 159/500\n7500/7500 [==============================] - 2s 235us/step - loss: 0.1757 - acc: 0.7575 - val_loss: 0.2028 - val_acc: 0.7140\n\nEpoch 00159: val_acc did not improve from 0.72480\nEpoch 160/500\n7500/7500 [==============================] - 2s 239us/step - loss: 0.1768 - acc: 0.7580 - val_loss: 0.2027 - val_acc: 0.7132\n\nEpoch 00160: val_acc did not improve from 0.72480\nEpoch 161/500\n7500/7500 [==============================] - 2s 261us/step - loss: 0.1765 - acc: 0.7599 - val_loss: 0.2018 - val_acc: 0.7160\n\nEpoch 00161: val_acc did not improve from 0.72480\nEpoch 162/500\n7500/7500 [==============================] - 2s 235us/step - loss: 0.1769 - acc: 0.7561 - val_loss: 0.2022 - val_acc: 0.7160\n\nEpoch 00162: val_acc did not improve from 
0.72480\nEpoch 163/500\n7500/7500 [==============================] - 2s 275us/step - loss: 0.1774 - acc: 0.7541 - val_loss: 0.2026 - val_acc: 0.7140\n\nEpoch 00163: val_acc did not improve from 0.72480\nEpoch 164/500\n7500/7500 [==============================] - 2s 287us/step - loss: 0.1776 - acc: 0.7541 - val_loss: 0.2013 - val_acc: 0.7136\n\nEpoch 00164: val_acc did not improve from 0.72480\nEpoch 165/500\n7500/7500 [==============================] - 3s 337us/step - loss: 0.1744 - acc: 0.7643 - val_loss: 0.2012 - val_acc: 0.7156\n\nEpoch 00165: val_acc did not improve from 0.72480\nEpoch 166/500\n7500/7500 [==============================] - 3s 349us/step - loss: 0.1747 - acc: 0.7597 - val_loss: 0.2051 - val_acc: 0.7128\n\nEpoch 00166: val_acc did not improve from 0.72480\nEpoch 167/500\n7500/7500 [==============================] - 2s 324us/step - loss: 0.1758 - acc: 0.7597 - val_loss: 0.2035 - val_acc: 0.7152\n\nEpoch 00167: val_acc did not improve from 0.72480\nEpoch 168/500\n7500/7500 [==============================] - 2s 315us/step - loss: 0.1764 - acc: 0.7553 - val_loss: 0.2025 - val_acc: 0.7156\n\nEpoch 00168: val_acc did not improve from 0.72480\nEpoch 169/500\n7500/7500 [==============================] - 2s 309us/step - loss: 0.1754 - acc: 0.7640 - val_loss: 0.2034 - val_acc: 0.7148\n\nEpoch 00169: val_acc did not improve from 0.72480\nEpoch 170/500\n7500/7500 [==============================] - 2s 308us/step - loss: 0.1765 - acc: 0.7592 - val_loss: 0.2025 - val_acc: 0.7136\n\nEpoch 00170: val_acc did not improve from 0.72480\nEpoch 171/500\n7500/7500 [==============================] - 2s 299us/step - loss: 0.1743 - acc: 0.7621 - val_loss: 0.2032 - val_acc: 0.7124\n\nEpoch 00171: val_acc did not improve from 0.72480\nEpoch 172/500\n7500/7500 [==============================] - 3s 354us/step - loss: 0.1747 - acc: 0.7601 - val_loss: 0.2030 - val_acc: 0.7132\n\nEpoch 00172: val_acc did not improve from 0.72480\nEpoch 173/500\n7500/7500 [==============================] - 2s 301us/step - loss: 0.1748 - acc: 0.7604 - val_loss: 0.2027 - val_acc: 0.7140\n\nEpoch 00173: val_acc did not improve from 0.72480\nEpoch 174/500\n7500/7500 [==============================] - 2s 259us/step - loss: 0.1757 - acc: 0.7569 - val_loss: 0.2042 - val_acc: 0.7136\n\nEpoch 00174: val_acc did not improve from 0.72480\nEpoch 175/500\n7500/7500 [==============================] - 2s 262us/step - loss: 0.1759 - acc: 0.7617 - val_loss: 0.2050 - val_acc: 0.7132\n\nEpoch 00175: val_acc did not improve from 0.72480\nEpoch 176/500\n7500/7500 [==============================] - 2s 260us/step - loss: 0.1744 - acc: 0.7601 - val_loss: 0.2050 - val_acc: 0.7148\n\nEpoch 00176: val_acc did not improve from 0.72480\nEpoch 177/500\n7500/7500 [==============================] - 2s 265us/step - loss: 0.1739 - acc: 0.7637 - val_loss: 0.2057 - val_acc: 0.7140\n\nEpoch 00177: val_acc did not improve from 0.72480\nEpoch 178/500\n7500/7500 [==============================] - 2s 260us/step - loss: 0.1733 - acc: 0.7645 - val_loss: 0.2036 - val_acc: 0.7144\n\nEpoch 00178: val_acc did not improve from 0.72480\nEpoch 179/500\n7500/7500 [==============================] - 2s 259us/step - loss: 0.1736 - acc: 0.7607 - val_loss: 0.2060 - val_acc: 0.7140\n\nEpoch 00179: val_acc did not improve from 0.72480\nEpoch 180/500\n7500/7500 [==============================] - 2s 261us/step - loss: 0.1752 - acc: 0.7585 - val_loss: 0.2038 - val_acc: 0.7148\n\nEpoch 00180: val_acc did not improve from 0.72480\nEpoch 181/500\n7500/7500 
[==============================] - 2s 260us/step - loss: 0.1744 - acc: 0.7589 - val_loss: 0.2064 - val_acc: 0.7144\n\nEpoch 00181: val_acc did not improve from 0.72480\nEpoch 182/500\n7500/7500 [==============================] - 2s 257us/step - loss: 0.1741 - acc: 0.7612 - val_loss: 0.2051 - val_acc: 0.7128\n\nEpoch 00182: val_acc did not improve from 0.72480\nEpoch 183/500\n7500/7500 [==============================] - 2s 258us/step - loss: 0.1737 - acc: 0.7605 - val_loss: 0.2048 - val_acc: 0.7140\n\nEpoch 00183: val_acc did not improve from 0.72480\nEpoch 184/500\n7500/7500 [==============================] - 2s 259us/step - loss: 0.1746 - acc: 0.7625 - val_loss: 0.2066 - val_acc: 0.7128\n\nEpoch 00184: val_acc did not improve from 0.72480\nEpoch 185/500\n7500/7500 [==============================] - 2s 260us/step - loss: 0.1752 - acc: 0.7616 - val_loss: 0.2050 - val_acc: 0.7152\n\nEpoch 00185: val_acc did not improve from 0.72480\nEpoch 186/500\n7500/7500 [==============================] - 2s 261us/step - loss: 0.1754 - acc: 0.7552 - val_loss: 0.2051 - val_acc: 0.7132\n\nEpoch 00186: val_acc did not improve from 0.72480\nEpoch 187/500\n7500/7500 [==============================] - 2s 258us/step - loss: 0.1753 - acc: 0.7595 - val_loss: 0.2087 - val_acc: 0.7164\n\nEpoch 00187: val_acc did not improve from 0.72480\nEpoch 188/500\n7500/7500 [==============================] - 2s 293us/step - loss: 0.1735 - acc: 0.7637 - val_loss: 0.2080 - val_acc: 0.7096\n\nEpoch 00188: val_acc did not improve from 0.72480\nEpoch 189/500\n7500/7500 [==============================] - 2s 283us/step - loss: 0.1741 - acc: 0.7628 - val_loss: 0.2067 - val_acc: 0.7140\n\nEpoch 00189: val_acc did not improve from 0.72480\nEpoch 190/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1730 - acc: 0.7643 - val_loss: 0.2043 - val_acc: 0.7152\n\nEpoch 00190: val_acc did not improve from 0.72480\nEpoch 191/500\n7500/7500 [==============================] - 2s 266us/step - loss: 0.1738 - acc: 0.7624 - val_loss: 0.2080 - val_acc: 0.7120\n\nEpoch 00191: val_acc did not improve from 0.72480\nEpoch 192/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1733 - acc: 0.7631 - val_loss: 0.2078 - val_acc: 0.7144\n\nEpoch 00192: val_acc did not improve from 0.72480\nEpoch 193/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1733 - acc: 0.7645 - val_loss: 0.2062 - val_acc: 0.7144\n\nEpoch 00193: val_acc did not improve from 0.72480\nEpoch 194/500\n7500/7500 [==============================] - 2s 262us/step - loss: 0.1744 - acc: 0.7600 - val_loss: 0.2067 - val_acc: 0.7144\n\nEpoch 00194: val_acc did not improve from 0.72480\nEpoch 195/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1735 - acc: 0.7625 - val_loss: 0.2086 - val_acc: 0.7116\n\nEpoch 00195: val_acc did not improve from 0.72480\nEpoch 196/500\n7500/7500 [==============================] - 2s 275us/step - loss: 0.1739 - acc: 0.7621 - val_loss: 0.2079 - val_acc: 0.7148\n\nEpoch 00196: val_acc did not improve from 0.72480\nEpoch 197/500\n7500/7500 [==============================] - 2s 267us/step - loss: 0.1754 - acc: 0.7564 - val_loss: 0.2091 - val_acc: 0.7128\n\nEpoch 00197: val_acc did not improve from 0.72480\nEpoch 198/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1747 - acc: 0.7608 - val_loss: 0.2112 - val_acc: 0.7136\n\nEpoch 00198: val_acc did not improve from 0.72480\nEpoch 199/500\n7500/7500 [==============================] - 2s 266us/step - loss: 
0.1742 - acc: 0.7597 - val_loss: 0.2091 - val_acc: 0.7140\n\nEpoch 00199: val_acc did not improve from 0.72480\nEpoch 200/500\n7500/7500 [==============================] - 2s 263us/step - loss: 0.1721 - acc: 0.7679 - val_loss: 0.2092 - val_acc: 0.7128\n\nEpoch 00200: val_acc did not improve from 0.72480\nEpoch 201/500\n7500/7500 [==============================] - 2s 258us/step - loss: 0.1718 - acc: 0.7665 - val_loss: 0.2115 - val_acc: 0.7152\n\nEpoch 00201: val_acc did not improve from 0.72480\nEpoch 202/500\n7500/7500 [==============================] - 2s 260us/step - loss: 0.1720 - acc: 0.7687 - val_loss: 0.2114 - val_acc: 0.7120\n\nEpoch 00202: val_acc did not improve from 0.72480\nEpoch 203/500\n7500/7500 [==============================] - 2s 265us/step - loss: 0.1728 - acc: 0.7639 - val_loss: 0.2105 - val_acc: 0.7148\n\nEpoch 00203: val_acc did not improve from 0.72480\nEpoch 204/500\n7500/7500 [==============================] - 2s 263us/step - loss: 0.1718 - acc: 0.7699 - val_loss: 0.2104 - val_acc: 0.7124\n\nEpoch 00204: val_acc did not improve from 0.72480\nEpoch 205/500\n7500/7500 [==============================] - 2s 266us/step - loss: 0.1715 - acc: 0.7644 - val_loss: 0.2114 - val_acc: 0.7096\n\nEpoch 00205: val_acc did not improve from 0.72480\nEpoch 206/500\n7500/7500 [==============================] - 2s 265us/step - loss: 0.1723 - acc: 0.7641 - val_loss: 0.2124 - val_acc: 0.7112\n\nEpoch 00206: val_acc did not improve from 0.72480\nEpoch 207/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1727 - acc: 0.7641 - val_loss: 0.2089 - val_acc: 0.7108\n\nEpoch 00207: val_acc did not improve from 0.72480\nEpoch 208/500\n7500/7500 [==============================] - 2s 261us/step - loss: 0.1731 - acc: 0.7605 - val_loss: 0.2121 - val_acc: 0.7104\n\nEpoch 00208: val_acc did not improve from 0.72480\nEpoch 209/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1732 - acc: 0.7632 - val_loss: 0.2115 - val_acc: 0.7120\n\nEpoch 00209: val_acc did not improve from 0.72480\nEpoch 210/500\n7500/7500 [==============================] - 2s 266us/step - loss: 0.1720 - acc: 0.7649 - val_loss: 0.2105 - val_acc: 0.7100\n\nEpoch 00210: val_acc did not improve from 0.72480\nEpoch 211/500\n7500/7500 [==============================] - 2s 264us/step - loss: 0.1729 - acc: 0.7616 - val_loss: 0.2106 - val_acc: 0.7116\n\nEpoch 00211: val_acc did not improve from 0.72480\nEpoch 212/500\n7500/7500 [==============================] - 2s 260us/step - loss: 0.1712 - acc: 0.7683 - val_loss: 0.2113 - val_acc: 0.7116\n\nEpoch 00212: val_acc did not improve from 0.72480\nEpoch 213/500\n7500/7500 [==============================] - 2s 260us/step - loss: 0.1722 - acc: 0.7632 - val_loss: 0.2122 - val_acc: 0.7100\n\nEpoch 00213: val_acc did not improve from 0.72480\nEpoch 214/500\n7500/7500 [==============================] - 2s 264us/step - loss: 0.1735 - acc: 0.7639 - val_loss: 0.2133 - val_acc: 0.7100\n\nEpoch 00214: val_acc did not improve from 0.72480\nEpoch 215/500\n7500/7500 [==============================] - 2s 263us/step - loss: 0.1701 - acc: 0.7695 - val_loss: 0.2136 - val_acc: 0.7100\n\nEpoch 00215: val_acc did not improve from 0.72480\nEpoch 216/500\n7500/7500 [==============================] - 2s 276us/step - loss: 0.1702 - acc: 0.7693 - val_loss: 0.2134 - val_acc: 0.7112\n\nEpoch 00216: val_acc did not improve from 0.72480\nEpoch 217/500\n7500/7500 [==============================] - 2s 294us/step - loss: 0.1723 - acc: 0.7621 - val_loss: 0.2107 - val_acc: 
0.7108\n\nEpoch 00217: val_acc did not improve from 0.72480\nEpoch 218/500\n7500/7500 [==============================] - 2s 281us/step - loss: 0.1683 - acc: 0.7721 - val_loss: 0.2131 - val_acc: 0.7116\n\nEpoch 00218: val_acc did not improve from 0.72480\nEpoch 219/500\n7500/7500 [==============================] - 2s 264us/step - loss: 0.1702 - acc: 0.7721 - val_loss: 0.2147 - val_acc: 0.7104\n\nEpoch 00219: val_acc did not improve from 0.72480\nEpoch 220/500\n7500/7500 [==============================] - 2s 267us/step - loss: 0.1719 - acc: 0.7643 - val_loss: 0.2107 - val_acc: 0.7124\n\nEpoch 00220: val_acc did not improve from 0.72480\nEpoch 221/500\n7500/7500 [==============================] - 2s 279us/step - loss: 0.1689 - acc: 0.7695 - val_loss: 0.2150 - val_acc: 0.7088\n\nEpoch 00221: val_acc did not improve from 0.72480\nEpoch 222/500\n7500/7500 [==============================] - 2s 264us/step - loss: 0.1691 - acc: 0.7700 - val_loss: 0.2114 - val_acc: 0.7104\n\nEpoch 00222: val_acc did not improve from 0.72480\nEpoch 223/500\n7500/7500 [==============================] - 2s 286us/step - loss: 0.1707 - acc: 0.7697 - val_loss: 0.2143 - val_acc: 0.7088\n\nEpoch 00223: val_acc did not improve from 0.72480\nEpoch 224/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1719 - acc: 0.7687 - val_loss: 0.2130 - val_acc: 0.7088\n\nEpoch 00224: val_acc did not improve from 0.72480\nEpoch 225/500\n7500/7500 [==============================] - 2s 263us/step - loss: 0.1692 - acc: 0.7720 - val_loss: 0.2135 - val_acc: 0.7104\n\nEpoch 00225: val_acc did not improve from 0.72480\nEpoch 226/500\n7500/7500 [==============================] - 2s 261us/step - loss: 0.1705 - acc: 0.7712 - val_loss: 0.2150 - val_acc: 0.7112\n\nEpoch 00226: val_acc did not improve from 0.72480\nEpoch 227/500\n7500/7500 [==============================] - 2s 266us/step - loss: 0.1734 - acc: 0.7601 - val_loss: 0.2151 - val_acc: 0.7092\n\nEpoch 00227: val_acc did not improve from 0.72480\nEpoch 228/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1692 - acc: 0.7703 - val_loss: 0.2148 - val_acc: 0.7104\n\nEpoch 00228: val_acc did not improve from 0.72480\nEpoch 229/500\n7500/7500 [==============================] - 2s 265us/step - loss: 0.1714 - acc: 0.7660 - val_loss: 0.2154 - val_acc: 0.7104\n\nEpoch 00229: val_acc did not improve from 0.72480\nEpoch 230/500\n7500/7500 [==============================] - 2s 263us/step - loss: 0.1692 - acc: 0.7709 - val_loss: 0.2140 - val_acc: 0.7108\n\nEpoch 00230: val_acc did not improve from 0.72480\nEpoch 231/500\n7500/7500 [==============================] - 2s 258us/step - loss: 0.1714 - acc: 0.7623 - val_loss: 0.2153 - val_acc: 0.7100\n\nEpoch 00231: val_acc did not improve from 0.72480\nEpoch 232/500\n7500/7500 [==============================] - 2s 258us/step - loss: 0.1696 - acc: 0.7709 - val_loss: 0.2135 - val_acc: 0.7112\n\nEpoch 00232: val_acc did not improve from 0.72480\nEpoch 233/500\n7500/7500 [==============================] - 2s 263us/step - loss: 0.1699 - acc: 0.7700 - val_loss: 0.2156 - val_acc: 0.7116\n\nEpoch 00233: val_acc did not improve from 0.72480\nEpoch 234/500\n7500/7500 [==============================] - 2s 259us/step - loss: 0.1718 - acc: 0.7651 - val_loss: 0.2125 - val_acc: 0.7108\n\nEpoch 00234: val_acc did not improve from 0.72480\nEpoch 235/500\n7500/7500 [==============================] - 2s 261us/step - loss: 0.1690 - acc: 0.7729 - val_loss: 0.2138 - val_acc: 0.7092\n\nEpoch 00235: val_acc did not improve from 
0.72480\nEpoch 236/500\n7500/7500 [==============================] - 2s 263us/step - loss: 0.1686 - acc: 0.7693 - val_loss: 0.2156 - val_acc: 0.7100\n\nEpoch 00236: val_acc did not improve from 0.72480\nEpoch 237/500\n7500/7500 [==============================] - 2s 262us/step - loss: 0.1700 - acc: 0.7676 - val_loss: 0.2146 - val_acc: 0.7124\n\nEpoch 00237: val_acc did not improve from 0.72480\nEpoch 238/500\n7500/7500 [==============================] - 2s 258us/step - loss: 0.1695 - acc: 0.7671 - val_loss: 0.2155 - val_acc: 0.7116\n\nEpoch 00238: val_acc did not improve from 0.72480\nEpoch 239/500\n7500/7500 [==============================] - 2s 259us/step - loss: 0.1709 - acc: 0.7668 - val_loss: 0.2150 - val_acc: 0.7120\n\nEpoch 00239: val_acc did not improve from 0.72480\nEpoch 240/500\n7500/7500 [==============================] - 2s 263us/step - loss: 0.1704 - acc: 0.7644 - val_loss: 0.2189 - val_acc: 0.7108\n\nEpoch 00240: val_acc did not improve from 0.72480\nEpoch 241/500\n7500/7500 [==============================] - 2s 265us/step - loss: 0.1671 - acc: 0.7751 - val_loss: 0.2176 - val_acc: 0.7104\n\nEpoch 00241: val_acc did not improve from 0.72480\nEpoch 242/500\n7500/7500 [==============================] - 2s 257us/step - loss: 0.1691 - acc: 0.7713 - val_loss: 0.2183 - val_acc: 0.7108\n\nEpoch 00242: val_acc did not improve from 0.72480\nEpoch 243/500\n7500/7500 [==============================] - 2s 258us/step - loss: 0.1701 - acc: 0.7687 - val_loss: 0.2158 - val_acc: 0.7108\n\nEpoch 00243: val_acc did not improve from 0.72480\nEpoch 244/500\n7500/7500 [==============================] - 2s 259us/step - loss: 0.1693 - acc: 0.7692 - val_loss: 0.2189 - val_acc: 0.7096\n\nEpoch 00244: val_acc did not improve from 0.72480\nEpoch 245/500\n7500/7500 [==============================] - 2s 281us/step - loss: 0.1694 - acc: 0.7689 - val_loss: 0.2200 - val_acc: 0.7116\n\nEpoch 00245: val_acc did not improve from 0.72480\nEpoch 246/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1695 - acc: 0.7712 - val_loss: 0.2196 - val_acc: 0.7116\n\nEpoch 00246: val_acc did not improve from 0.72480\nEpoch 247/500\n7500/7500 [==============================] - 2s 261us/step - loss: 0.1673 - acc: 0.7756 - val_loss: 0.2188 - val_acc: 0.7104\n\nEpoch 00247: val_acc did not improve from 0.72480\nEpoch 248/500\n7500/7500 [==============================] - 2s 263us/step - loss: 0.1695 - acc: 0.7664 - val_loss: 0.2212 - val_acc: 0.7132\n\nEpoch 00248: val_acc did not improve from 0.72480\nEpoch 249/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1675 - acc: 0.7735 - val_loss: 0.2184 - val_acc: 0.7104\n\nEpoch 00249: val_acc did not improve from 0.72480\nEpoch 250/500\n7500/7500 [==============================] - 2s 303us/step - loss: 0.1704 - acc: 0.7656 - val_loss: 0.2201 - val_acc: 0.7104\n\nEpoch 00250: val_acc did not improve from 0.72480\nEpoch 251/500\n7500/7500 [==============================] - 2s 309us/step - loss: 0.1691 - acc: 0.7671 - val_loss: 0.2219 - val_acc: 0.7076\n\nEpoch 00251: val_acc did not improve from 0.72480\nEpoch 252/500\n7500/7500 [==============================] - 2s 286us/step - loss: 0.1686 - acc: 0.7703 - val_loss: 0.2195 - val_acc: 0.7092\n\nEpoch 00252: val_acc did not improve from 0.72480\nEpoch 253/500\n7500/7500 [==============================] - 2s 301us/step - loss: 0.1690 - acc: 0.7688 - val_loss: 0.2217 - val_acc: 0.7104\n\nEpoch 00253: val_acc did not improve from 0.72480\nEpoch 254/500\n7500/7500 
[==============================] - 2s 271us/step - loss: 0.1685 - acc: 0.7713 - val_loss: 0.2187 - val_acc: 0.7116\n\nEpoch 00254: val_acc did not improve from 0.72480\nEpoch 255/500\n7500/7500 [==============================] - 2s 286us/step - loss: 0.1691 - acc: 0.7696 - val_loss: 0.2205 - val_acc: 0.7112\n\nEpoch 00255: val_acc did not improve from 0.72480\nEpoch 256/500\n7500/7500 [==============================] - 2s 282us/step - loss: 0.1677 - acc: 0.7681 - val_loss: 0.2229 - val_acc: 0.7100\n\nEpoch 00256: val_acc did not improve from 0.72480\nEpoch 257/500\n7500/7500 [==============================] - 2s 311us/step - loss: 0.1688 - acc: 0.7664 - val_loss: 0.2241 - val_acc: 0.7104\n\nEpoch 00257: val_acc did not improve from 0.72480\nEpoch 258/500\n7500/7500 [==============================] - 2s 300us/step - loss: 0.1675 - acc: 0.7701 - val_loss: 0.2216 - val_acc: 0.7104\n\nEpoch 00258: val_acc did not improve from 0.72480\nEpoch 259/500\n7500/7500 [==============================] - 2s 303us/step - loss: 0.1662 - acc: 0.7744 - val_loss: 0.2250 - val_acc: 0.7092\n\nEpoch 00259: val_acc did not improve from 0.72480\nEpoch 260/500\n7500/7500 [==============================] - 2s 314us/step - loss: 0.1662 - acc: 0.7720 - val_loss: 0.2209 - val_acc: 0.7088\n\nEpoch 00260: val_acc did not improve from 0.72480\nEpoch 261/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1663 - acc: 0.7763 - val_loss: 0.2264 - val_acc: 0.7100\n\nEpoch 00261: val_acc did not improve from 0.72480\nEpoch 262/500\n7500/7500 [==============================] - 2s 258us/step - loss: 0.1655 - acc: 0.7741 - val_loss: 0.2244 - val_acc: 0.7116\n\nEpoch 00262: val_acc did not improve from 0.72480\nEpoch 263/500\n7500/7500 [==============================] - 2s 260us/step - loss: 0.1660 - acc: 0.7708 - val_loss: 0.2235 - val_acc: 0.7124\n\nEpoch 00263: val_acc did not improve from 0.72480\nEpoch 264/500\n7500/7500 [==============================] - 2s 260us/step - loss: 0.1678 - acc: 0.7691 - val_loss: 0.2274 - val_acc: 0.7104\n\nEpoch 00264: val_acc did not improve from 0.72480\nEpoch 265/500\n7500/7500 [==============================] - 2s 260us/step - loss: 0.1688 - acc: 0.7687 - val_loss: 0.2266 - val_acc: 0.7096\n\nEpoch 00265: val_acc did not improve from 0.72480\nEpoch 266/500\n7500/7500 [==============================] - 2s 258us/step - loss: 0.1674 - acc: 0.7687 - val_loss: 0.2281 - val_acc: 0.7108\n\nEpoch 00266: val_acc did not improve from 0.72480\nEpoch 267/500\n7500/7500 [==============================] - 2s 257us/step - loss: 0.1682 - acc: 0.7669 - val_loss: 0.2255 - val_acc: 0.7116\n\nEpoch 00267: val_acc did not improve from 0.72480\nEpoch 268/500\n7500/7500 [==============================] - 2s 255us/step - loss: 0.1677 - acc: 0.7688 - val_loss: 0.2261 - val_acc: 0.7116\n\nEpoch 00268: val_acc did not improve from 0.72480\nEpoch 269/500\n7500/7500 [==============================] - 2s 264us/step - loss: 0.1681 - acc: 0.7645 - val_loss: 0.2271 - val_acc: 0.7088\n\nEpoch 00269: val_acc did not improve from 0.72480\nEpoch 270/500\n7500/7500 [==============================] - 2s 259us/step - loss: 0.1674 - acc: 0.7663 - val_loss: 0.2290 - val_acc: 0.7096\n\nEpoch 00270: val_acc did not improve from 0.72480\nEpoch 271/500\n7500/7500 [==============================] - 2s 267us/step - loss: 0.1667 - acc: 0.7701 - val_loss: 0.2266 - val_acc: 0.7104\n\nEpoch 00271: val_acc did not improve from 0.72480\nEpoch 272/500\n7500/7500 [==============================] - 2s 262us/step - loss: 
0.1673 - acc: 0.7709 - val_loss: 0.2279 - val_acc: 0.7096\n\nEpoch 00272: val_acc did not improve from 0.72480\nEpoch 273/500\n7500/7500 [==============================] - 2s 258us/step - loss: 0.1648 - acc: 0.7728 - val_loss: 0.2300 - val_acc: 0.7096\n\nEpoch 00273: val_acc did not improve from 0.72480\nEpoch 274/500\n7500/7500 [==============================] - 2s 281us/step - loss: 0.1659 - acc: 0.7735 - val_loss: 0.2306 - val_acc: 0.7120\n\nEpoch 00274: val_acc did not improve from 0.72480\nEpoch 275/500\n7500/7500 [==============================] - 2s 275us/step - loss: 0.1668 - acc: 0.7723 - val_loss: 0.2281 - val_acc: 0.7104\n\nEpoch 00275: val_acc did not improve from 0.72480\nEpoch 276/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1657 - acc: 0.7711 - val_loss: 0.2280 - val_acc: 0.7092\n\nEpoch 00276: val_acc did not improve from 0.72480\nEpoch 277/500\n7500/7500 [==============================] - 2s 278us/step - loss: 0.1679 - acc: 0.7681 - val_loss: 0.2300 - val_acc: 0.7112\n\nEpoch 00277: val_acc did not improve from 0.72480\nEpoch 278/500\n7500/7500 [==============================] - 2s 266us/step - loss: 0.1678 - acc: 0.7677 - val_loss: 0.2329 - val_acc: 0.7112\n\nEpoch 00278: val_acc did not improve from 0.72480\nEpoch 279/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1695 - acc: 0.7645 - val_loss: 0.2293 - val_acc: 0.7124\n\nEpoch 00279: val_acc did not improve from 0.72480\nEpoch 280/500\n7500/7500 [==============================] - 2s 266us/step - loss: 0.1672 - acc: 0.7673 - val_loss: 0.2317 - val_acc: 0.7108\n\nEpoch 00280: val_acc did not improve from 0.72480\nEpoch 281/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1667 - acc: 0.7689 - val_loss: 0.2305 - val_acc: 0.7108\n\nEpoch 00281: val_acc did not improve from 0.72480\nEpoch 282/500\n7500/7500 [==============================] - 2s 267us/step - loss: 0.1663 - acc: 0.7707 - val_loss: 0.2334 - val_acc: 0.7100\n\nEpoch 00282: val_acc did not improve from 0.72480\nEpoch 283/500\n7500/7500 [==============================] - 2s 288us/step - loss: 0.1670 - acc: 0.7684 - val_loss: 0.2296 - val_acc: 0.7128\n\nEpoch 00283: val_acc did not improve from 0.72480\nEpoch 284/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1673 - acc: 0.7653 - val_loss: 0.2317 - val_acc: 0.7120\n\nEpoch 00284: val_acc did not improve from 0.72480\nEpoch 285/500\n7500/7500 [==============================] - 2s 284us/step - loss: 0.1654 - acc: 0.7715 - val_loss: 0.2318 - val_acc: 0.7116\n\nEpoch 00285: val_acc did not improve from 0.72480\nEpoch 286/500\n7500/7500 [==============================] - 2s 276us/step - loss: 0.1660 - acc: 0.7711 - val_loss: 0.2337 - val_acc: 0.7108\n\nEpoch 00286: val_acc did not improve from 0.72480\nEpoch 287/500\n7500/7500 [==============================] - 2s 276us/step - loss: 0.1667 - acc: 0.7709 - val_loss: 0.2330 - val_acc: 0.7116\n\nEpoch 00287: val_acc did not improve from 0.72480\nEpoch 288/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1660 - acc: 0.7680 - val_loss: 0.2299 - val_acc: 0.7092\n\nEpoch 00288: val_acc did not improve from 0.72480\nEpoch 289/500\n7500/7500 [==============================] - 2s 233us/step - loss: 0.1655 - acc: 0.7751 - val_loss: 0.2339 - val_acc: 0.7124\n\nEpoch 00289: val_acc did not improve from 0.72480\nEpoch 290/500\n7500/7500 [==============================] - 2s 229us/step - loss: 0.1657 - acc: 0.7704 - val_loss: 0.2323 - val_acc: 
0.7120\n\nEpoch 00290: val_acc did not improve from 0.72480\nEpoch 291/500\n7500/7500 [==============================] - 2s 234us/step - loss: 0.1661 - acc: 0.7687 - val_loss: 0.2335 - val_acc: 0.7112\n\nEpoch 00291: val_acc did not improve from 0.72480\nEpoch 292/500\n7500/7500 [==============================] - 2s 228us/step - loss: 0.1654 - acc: 0.7707 - val_loss: 0.2324 - val_acc: 0.7100\n\nEpoch 00292: val_acc did not improve from 0.72480\nEpoch 293/500\n7500/7500 [==============================] - 2s 233us/step - loss: 0.1690 - acc: 0.7651 - val_loss: 0.2318 - val_acc: 0.7108\n\nEpoch 00293: val_acc did not improve from 0.72480\nEpoch 294/500\n7500/7500 [==============================] - 2s 231us/step - loss: 0.1643 - acc: 0.7747 - val_loss: 0.2308 - val_acc: 0.7108\n\nEpoch 00294: val_acc did not improve from 0.72480\nEpoch 295/500\n7500/7500 [==============================] - 2s 229us/step - loss: 0.1650 - acc: 0.7695 - val_loss: 0.2316 - val_acc: 0.7120\n\nEpoch 00295: val_acc did not improve from 0.72480\nEpoch 296/500\n7500/7500 [==============================] - 2s 230us/step - loss: 0.1654 - acc: 0.7697 - val_loss: 0.2354 - val_acc: 0.7128\n\nEpoch 00296: val_acc did not improve from 0.72480\nEpoch 297/500\n7500/7500 [==============================] - 2s 232us/step - loss: 0.1642 - acc: 0.7727 - val_loss: 0.2339 - val_acc: 0.7136\n\nEpoch 00297: val_acc did not improve from 0.72480\nEpoch 298/500\n7500/7500 [==============================] - 2s 232us/step - loss: 0.1651 - acc: 0.7715 - val_loss: 0.2347 - val_acc: 0.7128\n\nEpoch 00298: val_acc did not improve from 0.72480\nEpoch 299/500\n7500/7500 [==============================] - 2s 235us/step - loss: 0.1643 - acc: 0.7716 - val_loss: 0.2336 - val_acc: 0.7096\n\nEpoch 00299: val_acc did not improve from 0.72480\nEpoch 300/500\n7500/7500 [==============================] - 2s 252us/step - loss: 0.1645 - acc: 0.7707 - val_loss: 0.2333 - val_acc: 0.7108\n\nEpoch 00300: val_acc did not improve from 0.72480\nEpoch 301/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1662 - acc: 0.7720 - val_loss: 0.2333 - val_acc: 0.7096\n\nEpoch 00301: val_acc did not improve from 0.72480\nEpoch 302/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1667 - acc: 0.7677 - val_loss: 0.2344 - val_acc: 0.7104\n\nEpoch 00302: val_acc did not improve from 0.72480\nEpoch 303/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1650 - acc: 0.7723 - val_loss: 0.2370 - val_acc: 0.7096\n\nEpoch 00303: val_acc did not improve from 0.72480\nEpoch 304/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1640 - acc: 0.7713 - val_loss: 0.2382 - val_acc: 0.7108\n\nEpoch 00304: val_acc did not improve from 0.72480\nEpoch 305/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1645 - acc: 0.7699 - val_loss: 0.2380 - val_acc: 0.7108\n\nEpoch 00305: val_acc did not improve from 0.72480\nEpoch 306/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1640 - acc: 0.7752 - val_loss: 0.2354 - val_acc: 0.7096\n\nEpoch 00306: val_acc did not improve from 0.72480\nEpoch 307/500\n7500/7500 [==============================] - 2s 267us/step - loss: 0.1641 - acc: 0.7728 - val_loss: 0.2377 - val_acc: 0.7100\n\nEpoch 00307: val_acc did not improve from 0.72480\nEpoch 308/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1659 - acc: 0.7669 - val_loss: 0.2389 - val_acc: 0.7116\n\nEpoch 00308: val_acc did not improve from 
0.72480\nEpoch 309/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1633 - acc: 0.7729 - val_loss: 0.2394 - val_acc: 0.7104\n\nEpoch 00309: val_acc did not improve from 0.72480\nEpoch 310/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1643 - acc: 0.7687 - val_loss: 0.2370 - val_acc: 0.7096\n\nEpoch 00310: val_acc did not improve from 0.72480\nEpoch 311/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1635 - acc: 0.7728 - val_loss: 0.2370 - val_acc: 0.7092\n\nEpoch 00311: val_acc did not improve from 0.72480\nEpoch 312/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1648 - acc: 0.7728 - val_loss: 0.2374 - val_acc: 0.7092\n\nEpoch 00312: val_acc did not improve from 0.72480\nEpoch 313/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1650 - acc: 0.7707 - val_loss: 0.2377 - val_acc: 0.7092\n\nEpoch 00313: val_acc did not improve from 0.72480\nEpoch 314/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1641 - acc: 0.7693 - val_loss: 0.2387 - val_acc: 0.7116\n\nEpoch 00314: val_acc did not improve from 0.72480\nEpoch 315/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1669 - acc: 0.7660 - val_loss: 0.2393 - val_acc: 0.7128\n\nEpoch 00315: val_acc did not improve from 0.72480\nEpoch 316/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1640 - acc: 0.7697 - val_loss: 0.2421 - val_acc: 0.7124\n\nEpoch 00316: val_acc did not improve from 0.72480\nEpoch 317/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1652 - acc: 0.7691 - val_loss: 0.2390 - val_acc: 0.7128\n\nEpoch 00317: val_acc did not improve from 0.72480\nEpoch 318/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1648 - acc: 0.7701 - val_loss: 0.2394 - val_acc: 0.7136\n\nEpoch 00318: val_acc did not improve from 0.72480\nEpoch 319/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1637 - acc: 0.7707 - val_loss: 0.2382 - val_acc: 0.7100\n\nEpoch 00319: val_acc did not improve from 0.72480\nEpoch 320/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1631 - acc: 0.7720 - val_loss: 0.2437 - val_acc: 0.7088\n\nEpoch 00320: val_acc did not improve from 0.72480\nEpoch 321/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1637 - acc: 0.7735 - val_loss: 0.2398 - val_acc: 0.7112\n\nEpoch 00321: val_acc did not improve from 0.72480\nEpoch 322/500\n7500/7500 [==============================] - 2s 267us/step - loss: 0.1638 - acc: 0.7715 - val_loss: 0.2416 - val_acc: 0.7140\n\nEpoch 00322: val_acc did not improve from 0.72480\nEpoch 323/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1640 - acc: 0.7708 - val_loss: 0.2383 - val_acc: 0.7100\n\nEpoch 00323: val_acc did not improve from 0.72480\nEpoch 324/500\n7500/7500 [==============================] - 2s 267us/step - loss: 0.1630 - acc: 0.7735 - val_loss: 0.2384 - val_acc: 0.7112\n\nEpoch 00324: val_acc did not improve from 0.72480\nEpoch 325/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1628 - acc: 0.7768 - val_loss: 0.2407 - val_acc: 0.7092\n\nEpoch 00325: val_acc did not improve from 0.72480\nEpoch 326/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1639 - acc: 0.7716 - val_loss: 0.2425 - val_acc: 0.7112\n\nEpoch 00326: val_acc did not improve from 0.72480\nEpoch 327/500\n7500/7500 
[==============================] - 2s 270us/step - loss: 0.1650 - acc: 0.7676 - val_loss: 0.2402 - val_acc: 0.7108\n\nEpoch 00327: val_acc did not improve from 0.72480\nEpoch 328/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1636 - acc: 0.7720 - val_loss: 0.2441 - val_acc: 0.7148\n\nEpoch 00328: val_acc did not improve from 0.72480\nEpoch 329/500\n7500/7500 [==============================] - 2s 266us/step - loss: 0.1638 - acc: 0.7728 - val_loss: 0.2399 - val_acc: 0.7104\n\nEpoch 00329: val_acc did not improve from 0.72480\nEpoch 330/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1630 - acc: 0.7731 - val_loss: 0.2396 - val_acc: 0.7104\n\nEpoch 00330: val_acc did not improve from 0.72480\nEpoch 331/500\n7500/7500 [==============================] - 2s 267us/step - loss: 0.1649 - acc: 0.7699 - val_loss: 0.2422 - val_acc: 0.7112\n\nEpoch 00331: val_acc did not improve from 0.72480\nEpoch 332/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1644 - acc: 0.7697 - val_loss: 0.2421 - val_acc: 0.7116\n\nEpoch 00332: val_acc did not improve from 0.72480\nEpoch 333/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1632 - acc: 0.7715 - val_loss: 0.2446 - val_acc: 0.7128\n\nEpoch 00333: val_acc did not improve from 0.72480\nEpoch 334/500\n7500/7500 [==============================] - 2s 275us/step - loss: 0.1633 - acc: 0.7720 - val_loss: 0.2402 - val_acc: 0.7100\n\nEpoch 00334: val_acc did not improve from 0.72480\nEpoch 335/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1637 - acc: 0.7739 - val_loss: 0.2406 - val_acc: 0.7116\n\nEpoch 00335: val_acc did not improve from 0.72480\nEpoch 336/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1640 - acc: 0.7712 - val_loss: 0.2419 - val_acc: 0.7116\n\nEpoch 00336: val_acc did not improve from 0.72480\nEpoch 337/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1635 - acc: 0.7735 - val_loss: 0.2420 - val_acc: 0.7108\n\nEpoch 00337: val_acc did not improve from 0.72480\nEpoch 338/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1639 - acc: 0.7687 - val_loss: 0.2417 - val_acc: 0.7116\n\nEpoch 00338: val_acc did not improve from 0.72480\nEpoch 339/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1621 - acc: 0.7780 - val_loss: 0.2438 - val_acc: 0.7116\n\nEpoch 00339: val_acc did not improve from 0.72480\nEpoch 340/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1626 - acc: 0.7719 - val_loss: 0.2440 - val_acc: 0.7120\n\nEpoch 00340: val_acc did not improve from 0.72480\nEpoch 341/500\n7500/7500 [==============================] - 2s 267us/step - loss: 0.1626 - acc: 0.7716 - val_loss: 0.2427 - val_acc: 0.7124\n\nEpoch 00341: val_acc did not improve from 0.72480\nEpoch 342/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1619 - acc: 0.7735 - val_loss: 0.2442 - val_acc: 0.7136\n\nEpoch 00342: val_acc did not improve from 0.72480\nEpoch 343/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1632 - acc: 0.7740 - val_loss: 0.2436 - val_acc: 0.7124\n\nEpoch 00343: val_acc did not improve from 0.72480\nEpoch 344/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1649 - acc: 0.7661 - val_loss: 0.2424 - val_acc: 0.7124\n\nEpoch 00344: val_acc did not improve from 0.72480\nEpoch 345/500\n7500/7500 [==============================] - 2s 281us/step - loss: 
0.1631 - acc: 0.7736 - val_loss: 0.2434 - val_acc: 0.7112\n\nEpoch 00345: val_acc did not improve from 0.72480\nEpoch 346/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1649 - acc: 0.7705 - val_loss: 0.2476 - val_acc: 0.7104\n\nEpoch 00346: val_acc did not improve from 0.72480\nEpoch 347/500\n7500/7500 [==============================] - 2s 276us/step - loss: 0.1642 - acc: 0.7693 - val_loss: 0.2448 - val_acc: 0.7092\n\nEpoch 00347: val_acc did not improve from 0.72480\nEpoch 348/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1646 - acc: 0.7669 - val_loss: 0.2451 - val_acc: 0.7116\n\nEpoch 00348: val_acc did not improve from 0.72480\nEpoch 349/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1620 - acc: 0.7732 - val_loss: 0.2441 - val_acc: 0.7112\n\nEpoch 00349: val_acc did not improve from 0.72480\nEpoch 350/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1632 - acc: 0.7716 - val_loss: 0.2435 - val_acc: 0.7112\n\nEpoch 00350: val_acc did not improve from 0.72480\nEpoch 351/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1646 - acc: 0.7723 - val_loss: 0.2455 - val_acc: 0.7112\n\nEpoch 00351: val_acc did not improve from 0.72480\nEpoch 352/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1613 - acc: 0.7741 - val_loss: 0.2432 - val_acc: 0.7108\n\nEpoch 00352: val_acc did not improve from 0.72480\nEpoch 353/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1629 - acc: 0.7697 - val_loss: 0.2458 - val_acc: 0.7108\n\nEpoch 00353: val_acc did not improve from 0.72480\nEpoch 354/500\n7500/7500 [==============================] - 2s 265us/step - loss: 0.1615 - acc: 0.7733 - val_loss: 0.2489 - val_acc: 0.7108\n\nEpoch 00354: val_acc did not improve from 0.72480\nEpoch 355/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1618 - acc: 0.7729 - val_loss: 0.2440 - val_acc: 0.7100\n\nEpoch 00355: val_acc did not improve from 0.72480\nEpoch 356/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1627 - acc: 0.7699 - val_loss: 0.2434 - val_acc: 0.7096\n\nEpoch 00356: val_acc did not improve from 0.72480\nEpoch 357/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1622 - acc: 0.7744 - val_loss: 0.2451 - val_acc: 0.7112\n\nEpoch 00357: val_acc did not improve from 0.72480\nEpoch 358/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1633 - acc: 0.7665 - val_loss: 0.2482 - val_acc: 0.7112\n\nEpoch 00358: val_acc did not improve from 0.72480\nEpoch 359/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1616 - acc: 0.7765 - val_loss: 0.2468 - val_acc: 0.7096\n\nEpoch 00359: val_acc did not improve from 0.72480\nEpoch 360/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1624 - acc: 0.7727 - val_loss: 0.2495 - val_acc: 0.7080\n\nEpoch 00360: val_acc did not improve from 0.72480\nEpoch 361/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1640 - acc: 0.7677 - val_loss: 0.2452 - val_acc: 0.7100\n\nEpoch 00361: val_acc did not improve from 0.72480\nEpoch 362/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1626 - acc: 0.7715 - val_loss: 0.2490 - val_acc: 0.7076\n\nEpoch 00362: val_acc did not improve from 0.72480\nEpoch 363/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1651 - acc: 0.7657 - val_loss: 0.2485 - val_acc: 
0.7088\n\nEpoch 00363: val_acc did not improve from 0.72480\nEpoch 364/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1614 - acc: 0.7781 - val_loss: 0.2512 - val_acc: 0.7068\n\nEpoch 00364: val_acc did not improve from 0.72480\nEpoch 365/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1624 - acc: 0.7703 - val_loss: 0.2482 - val_acc: 0.7108\n\nEpoch 00365: val_acc did not improve from 0.72480\nEpoch 366/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1614 - acc: 0.7744 - val_loss: 0.2484 - val_acc: 0.7108\n\nEpoch 00366: val_acc did not improve from 0.72480\nEpoch 367/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1620 - acc: 0.7697 - val_loss: 0.2471 - val_acc: 0.7108\n\nEpoch 00367: val_acc did not improve from 0.72480\nEpoch 368/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1615 - acc: 0.7760 - val_loss: 0.2495 - val_acc: 0.7088\n\nEpoch 00368: val_acc did not improve from 0.72480\nEpoch 369/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1634 - acc: 0.7671 - val_loss: 0.2496 - val_acc: 0.7096\n\nEpoch 00369: val_acc did not improve from 0.72480\nEpoch 370/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1638 - acc: 0.7685 - val_loss: 0.2493 - val_acc: 0.7076\n\nEpoch 00370: val_acc did not improve from 0.72480\nEpoch 371/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1613 - acc: 0.7727 - val_loss: 0.2490 - val_acc: 0.7108\n\nEpoch 00371: val_acc did not improve from 0.72480\nEpoch 372/500\n7500/7500 [==============================] - 2s 266us/step - loss: 0.1611 - acc: 0.7732 - val_loss: 0.2507 - val_acc: 0.7112\n\nEpoch 00372: val_acc did not improve from 0.72480\nEpoch 373/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1634 - acc: 0.7685 - val_loss: 0.2493 - val_acc: 0.7104\n\nEpoch 00373: val_acc did not improve from 0.72480\nEpoch 374/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1608 - acc: 0.7733 - val_loss: 0.2472 - val_acc: 0.7120\n\nEpoch 00374: val_acc did not improve from 0.72480\nEpoch 375/500\n7500/7500 [==============================] - 2s 276us/step - loss: 0.1649 - acc: 0.7671 - val_loss: 0.2481 - val_acc: 0.7096\n\nEpoch 00375: val_acc did not improve from 0.72480\nEpoch 376/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1636 - acc: 0.7707 - val_loss: 0.2494 - val_acc: 0.7104\n\nEpoch 00376: val_acc did not improve from 0.72480\nEpoch 377/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1611 - acc: 0.7691 - val_loss: 0.2478 - val_acc: 0.7104\n\nEpoch 00377: val_acc did not improve from 0.72480\nEpoch 378/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1626 - acc: 0.7687 - val_loss: 0.2485 - val_acc: 0.7104\n\nEpoch 00378: val_acc did not improve from 0.72480\nEpoch 379/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1610 - acc: 0.7731 - val_loss: 0.2494 - val_acc: 0.7112\n\nEpoch 00379: val_acc did not improve from 0.72480\nEpoch 380/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1606 - acc: 0.7735 - val_loss: 0.2503 - val_acc: 0.7092\n\nEpoch 00380: val_acc did not improve from 0.72480\nEpoch 381/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1620 - acc: 0.7709 - val_loss: 0.2539 - val_acc: 0.7072\n\nEpoch 00381: val_acc did not improve from 
0.72480\nEpoch 382/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1614 - acc: 0.7717 - val_loss: 0.2494 - val_acc: 0.7104\n\nEpoch 00382: val_acc did not improve from 0.72480\nEpoch 383/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1598 - acc: 0.7748 - val_loss: 0.2472 - val_acc: 0.7076\n\nEpoch 00383: val_acc did not improve from 0.72480\nEpoch 384/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1606 - acc: 0.7759 - val_loss: 0.2486 - val_acc: 0.7092\n\nEpoch 00384: val_acc did not improve from 0.72480\nEpoch 385/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1623 - acc: 0.7712 - val_loss: 0.2485 - val_acc: 0.7108\n\nEpoch 00385: val_acc did not improve from 0.72480\nEpoch 386/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1620 - acc: 0.7707 - val_loss: 0.2480 - val_acc: 0.7112\n\nEpoch 00386: val_acc did not improve from 0.72480\nEpoch 387/500\n7500/7500 [==============================] - 2s 275us/step - loss: 0.1600 - acc: 0.7748 - val_loss: 0.2519 - val_acc: 0.7100\n\nEpoch 00387: val_acc did not improve from 0.72480\nEpoch 388/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1624 - acc: 0.7715 - val_loss: 0.2501 - val_acc: 0.7112\n\nEpoch 00388: val_acc did not improve from 0.72480\nEpoch 389/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1643 - acc: 0.7675 - val_loss: 0.2541 - val_acc: 0.7088\n\nEpoch 00389: val_acc did not improve from 0.72480\nEpoch 390/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1619 - acc: 0.7709 - val_loss: 0.2472 - val_acc: 0.7104\n\nEpoch 00390: val_acc did not improve from 0.72480\nEpoch 391/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1624 - acc: 0.7685 - val_loss: 0.2520 - val_acc: 0.7104\n\nEpoch 00391: val_acc did not improve from 0.72480\nEpoch 392/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1622 - acc: 0.7677 - val_loss: 0.2485 - val_acc: 0.7092\n\nEpoch 00392: val_acc did not improve from 0.72480\nEpoch 393/500\n7500/7500 [==============================] - 2s 276us/step - loss: 0.1600 - acc: 0.7745 - val_loss: 0.2507 - val_acc: 0.7092\n\nEpoch 00393: val_acc did not improve from 0.72480\nEpoch 394/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1574 - acc: 0.7797 - val_loss: 0.2486 - val_acc: 0.7104\n\nEpoch 00394: val_acc did not improve from 0.72480\nEpoch 395/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1610 - acc: 0.7673 - val_loss: 0.2502 - val_acc: 0.7104\n\nEpoch 00395: val_acc did not improve from 0.72480\nEpoch 396/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1610 - acc: 0.7707 - val_loss: 0.2522 - val_acc: 0.7128\n\nEpoch 00396: val_acc did not improve from 0.72480\nEpoch 397/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1604 - acc: 0.7736 - val_loss: 0.2551 - val_acc: 0.7120\n\nEpoch 00397: val_acc did not improve from 0.72480\nEpoch 398/500\n7500/7500 [==============================] - 2s 275us/step - loss: 0.1609 - acc: 0.7756 - val_loss: 0.2525 - val_acc: 0.7132\n\nEpoch 00398: val_acc did not improve from 0.72480\nEpoch 399/500\n7500/7500 [==============================] - 2s 287us/step - loss: 0.1602 - acc: 0.7723 - val_loss: 0.2551 - val_acc: 0.7096\n\nEpoch 00399: val_acc did not improve from 0.72480\nEpoch 400/500\n7500/7500 
[==============================] - 2s 282us/step - loss: 0.1634 - acc: 0.7661 - val_loss: 0.2564 - val_acc: 0.7100\n\nEpoch 00400: val_acc did not improve from 0.72480\nEpoch 401/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1611 - acc: 0.7696 - val_loss: 0.2540 - val_acc: 0.7112\n\nEpoch 00401: val_acc did not improve from 0.72480\nEpoch 402/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1600 - acc: 0.7727 - val_loss: 0.2528 - val_acc: 0.7128\n\nEpoch 00402: val_acc did not improve from 0.72480\nEpoch 403/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1597 - acc: 0.7728 - val_loss: 0.2572 - val_acc: 0.7084\n\nEpoch 00403: val_acc did not improve from 0.72480\nEpoch 404/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1633 - acc: 0.7693 - val_loss: 0.2540 - val_acc: 0.7112\n\nEpoch 00404: val_acc did not improve from 0.72480\nEpoch 405/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1613 - acc: 0.7736 - val_loss: 0.2533 - val_acc: 0.7104\n\nEpoch 00405: val_acc did not improve from 0.72480\nEpoch 406/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1602 - acc: 0.7727 - val_loss: 0.2555 - val_acc: 0.7116\n\nEpoch 00406: val_acc did not improve from 0.72480\nEpoch 407/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1619 - acc: 0.7703 - val_loss: 0.2528 - val_acc: 0.7108\n\nEpoch 00407: val_acc did not improve from 0.72480\nEpoch 408/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1607 - acc: 0.7723 - val_loss: 0.2549 - val_acc: 0.7116\n\nEpoch 00408: val_acc did not improve from 0.72480\nEpoch 409/500\n7500/7500 [==============================] - 2s 275us/step - loss: 0.1595 - acc: 0.7769 - val_loss: 0.2515 - val_acc: 0.7112\n\nEpoch 00409: val_acc did not improve from 0.72480\nEpoch 410/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1593 - acc: 0.7744 - val_loss: 0.2549 - val_acc: 0.7124\n\nEpoch 00410: val_acc did not improve from 0.72480\nEpoch 411/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1622 - acc: 0.7691 - val_loss: 0.2546 - val_acc: 0.7116\n\nEpoch 00411: val_acc did not improve from 0.72480\nEpoch 412/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1598 - acc: 0.7779 - val_loss: 0.2560 - val_acc: 0.7084\n\nEpoch 00412: val_acc did not improve from 0.72480\nEpoch 413/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1605 - acc: 0.7752 - val_loss: 0.2583 - val_acc: 0.7096\n\nEpoch 00413: val_acc did not improve from 0.72480\nEpoch 414/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1604 - acc: 0.7761 - val_loss: 0.2535 - val_acc: 0.7100\n\nEpoch 00414: val_acc did not improve from 0.72480\nEpoch 415/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1594 - acc: 0.7756 - val_loss: 0.2571 - val_acc: 0.7088\n\nEpoch 00415: val_acc did not improve from 0.72480\nEpoch 416/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1606 - acc: 0.7735 - val_loss: 0.2542 - val_acc: 0.7128\n\nEpoch 00416: val_acc did not improve from 0.72480\nEpoch 417/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1593 - acc: 0.7771 - val_loss: 0.2584 - val_acc: 0.7096\n\nEpoch 00417: val_acc did not improve from 0.72480\nEpoch 418/500\n7500/7500 [==============================] - 2s 278us/step - loss: 
0.1601 - acc: 0.7759 - val_loss: 0.2539 - val_acc: 0.7108\n\nEpoch 00418: val_acc did not improve from 0.72480\nEpoch 419/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1610 - acc: 0.7711 - val_loss: 0.2551 - val_acc: 0.7108\n\nEpoch 00419: val_acc did not improve from 0.72480\nEpoch 420/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1579 - acc: 0.7788 - val_loss: 0.2569 - val_acc: 0.7092\n\nEpoch 00420: val_acc did not improve from 0.72480\nEpoch 421/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1604 - acc: 0.7720 - val_loss: 0.2577 - val_acc: 0.7076\n\nEpoch 00421: val_acc did not improve from 0.72480\nEpoch 422/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1624 - acc: 0.7701 - val_loss: 0.2573 - val_acc: 0.7096\n\nEpoch 00422: val_acc did not improve from 0.72480\nEpoch 423/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1603 - acc: 0.7747 - val_loss: 0.2557 - val_acc: 0.7120\n\nEpoch 00423: val_acc did not improve from 0.72480\nEpoch 424/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1609 - acc: 0.7721 - val_loss: 0.2574 - val_acc: 0.7112\n\nEpoch 00424: val_acc did not improve from 0.72480\nEpoch 425/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1617 - acc: 0.7700 - val_loss: 0.2566 - val_acc: 0.7120\n\nEpoch 00425: val_acc did not improve from 0.72480\nEpoch 426/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1614 - acc: 0.7727 - val_loss: 0.2562 - val_acc: 0.7108\n\nEpoch 00426: val_acc did not improve from 0.72480\nEpoch 427/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1610 - acc: 0.7713 - val_loss: 0.2614 - val_acc: 0.7064\n\nEpoch 00427: val_acc did not improve from 0.72480\nEpoch 428/500\n7500/7500 [==============================] - 2s 275us/step - loss: 0.1603 - acc: 0.7707 - val_loss: 0.2578 - val_acc: 0.7104\n\nEpoch 00428: val_acc did not improve from 0.72480\nEpoch 429/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1614 - acc: 0.7697 - val_loss: 0.2574 - val_acc: 0.7084\n\nEpoch 00429: val_acc did not improve from 0.72480\nEpoch 430/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1609 - acc: 0.7733 - val_loss: 0.2563 - val_acc: 0.7108\n\nEpoch 00430: val_acc did not improve from 0.72480\nEpoch 431/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1609 - acc: 0.7720 - val_loss: 0.2584 - val_acc: 0.7092\n\nEpoch 00431: val_acc did not improve from 0.72480\nEpoch 432/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1613 - acc: 0.7709 - val_loss: 0.2584 - val_acc: 0.7104\n\nEpoch 00432: val_acc did not improve from 0.72480\nEpoch 433/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1604 - acc: 0.7695 - val_loss: 0.2590 - val_acc: 0.7084\n\nEpoch 00433: val_acc did not improve from 0.72480\nEpoch 434/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1603 - acc: 0.7715 - val_loss: 0.2626 - val_acc: 0.7064\n\nEpoch 00434: val_acc did not improve from 0.72480\nEpoch 435/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1600 - acc: 0.7700 - val_loss: 0.2615 - val_acc: 0.7108\n\nEpoch 00435: val_acc did not improve from 0.72480\nEpoch 436/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1602 - acc: 0.7761 - val_loss: 0.2567 - val_acc: 
0.7100\n\nEpoch 00436: val_acc did not improve from 0.72480\nEpoch 437/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1603 - acc: 0.7717 - val_loss: 0.2563 - val_acc: 0.7104\n\nEpoch 00437: val_acc did not improve from 0.72480\nEpoch 438/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1607 - acc: 0.7712 - val_loss: 0.2597 - val_acc: 0.7104\n\nEpoch 00438: val_acc did not improve from 0.72480\nEpoch 439/500\n7500/7500 [==============================] - 2s 275us/step - loss: 0.1607 - acc: 0.7735 - val_loss: 0.2611 - val_acc: 0.7104\n\nEpoch 00439: val_acc did not improve from 0.72480\nEpoch 440/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1597 - acc: 0.7744 - val_loss: 0.2596 - val_acc: 0.7112\n\nEpoch 00440: val_acc did not improve from 0.72480\nEpoch 441/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1580 - acc: 0.7719 - val_loss: 0.2619 - val_acc: 0.7124\n\nEpoch 00441: val_acc did not improve from 0.72480\nEpoch 442/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1627 - acc: 0.7665 - val_loss: 0.2577 - val_acc: 0.7124\n\nEpoch 00442: val_acc did not improve from 0.72480\nEpoch 443/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1606 - acc: 0.7729 - val_loss: 0.2569 - val_acc: 0.7116\n\nEpoch 00443: val_acc did not improve from 0.72480\nEpoch 444/500\n7500/7500 [==============================] - 2s 286us/step - loss: 0.1607 - acc: 0.7712 - val_loss: 0.2523 - val_acc: 0.7112\n\nEpoch 00444: val_acc did not improve from 0.72480\nEpoch 445/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1602 - acc: 0.7715 - val_loss: 0.2573 - val_acc: 0.7112\n\nEpoch 00445: val_acc did not improve from 0.72480\nEpoch 446/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1613 - acc: 0.7679 - val_loss: 0.2610 - val_acc: 0.7096\n\nEpoch 00446: val_acc did not improve from 0.72480\nEpoch 447/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1616 - acc: 0.7697 - val_loss: 0.2581 - val_acc: 0.7108\n\nEpoch 00447: val_acc did not improve from 0.72480\nEpoch 448/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1612 - acc: 0.7720 - val_loss: 0.2594 - val_acc: 0.7100\n\nEpoch 00448: val_acc did not improve from 0.72480\nEpoch 449/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1605 - acc: 0.7728 - val_loss: 0.2594 - val_acc: 0.7088\n\nEpoch 00449: val_acc did not improve from 0.72480\nEpoch 450/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1621 - acc: 0.7709 - val_loss: 0.2582 - val_acc: 0.7096\n\nEpoch 00450: val_acc did not improve from 0.72480\nEpoch 451/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1595 - acc: 0.7752 - val_loss: 0.2601 - val_acc: 0.7116\n\nEpoch 00451: val_acc did not improve from 0.72480\nEpoch 452/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1625 - acc: 0.7663 - val_loss: 0.2605 - val_acc: 0.7092\n\nEpoch 00452: val_acc did not improve from 0.72480\nEpoch 453/500\n7500/7500 [==============================] - 2s 282us/step - loss: 0.1571 - acc: 0.7775 - val_loss: 0.2628 - val_acc: 0.7116\n\nEpoch 00453: val_acc did not improve from 0.72480\nEpoch 454/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1614 - acc: 0.7697 - val_loss: 0.2616 - val_acc: 0.7088\n\nEpoch 00454: val_acc did not improve from 
0.72480\nEpoch 455/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1602 - acc: 0.7712 - val_loss: 0.2649 - val_acc: 0.7104\n\nEpoch 00455: val_acc did not improve from 0.72480\nEpoch 456/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1601 - acc: 0.7691 - val_loss: 0.2601 - val_acc: 0.7104\n\nEpoch 00456: val_acc did not improve from 0.72480\nEpoch 457/500\n7500/7500 [==============================] - 2s 275us/step - loss: 0.1609 - acc: 0.7708 - val_loss: 0.2644 - val_acc: 0.7068\n\nEpoch 00457: val_acc did not improve from 0.72480\nEpoch 458/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1597 - acc: 0.7735 - val_loss: 0.2658 - val_acc: 0.7060\n\nEpoch 00458: val_acc did not improve from 0.72480\nEpoch 459/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1595 - acc: 0.7728 - val_loss: 0.2651 - val_acc: 0.7104\n\nEpoch 00459: val_acc did not improve from 0.72480\nEpoch 460/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1608 - acc: 0.7737 - val_loss: 0.2613 - val_acc: 0.7096\n\nEpoch 00460: val_acc did not improve from 0.72480\nEpoch 461/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1587 - acc: 0.7747 - val_loss: 0.2648 - val_acc: 0.7084\n\nEpoch 00461: val_acc did not improve from 0.72480\nEpoch 462/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1596 - acc: 0.7732 - val_loss: 0.2693 - val_acc: 0.7076\n\nEpoch 00462: val_acc did not improve from 0.72480\nEpoch 463/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1600 - acc: 0.7696 - val_loss: 0.2661 - val_acc: 0.7064\n\nEpoch 00463: val_acc did not improve from 0.72480\nEpoch 464/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1600 - acc: 0.7740 - val_loss: 0.2622 - val_acc: 0.7128\n\nEpoch 00464: val_acc did not improve from 0.72480\nEpoch 465/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1588 - acc: 0.7749 - val_loss: 0.2657 - val_acc: 0.7076\n\nEpoch 00465: val_acc did not improve from 0.72480\nEpoch 466/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1610 - acc: 0.7707 - val_loss: 0.2673 - val_acc: 0.7068\n\nEpoch 00466: val_acc did not improve from 0.72480\nEpoch 467/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1594 - acc: 0.7741 - val_loss: 0.2629 - val_acc: 0.7088\n\nEpoch 00467: val_acc did not improve from 0.72480\nEpoch 468/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1607 - acc: 0.7675 - val_loss: 0.2636 - val_acc: 0.7080\n\nEpoch 00468: val_acc did not improve from 0.72480\nEpoch 469/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1583 - acc: 0.7748 - val_loss: 0.2645 - val_acc: 0.7088\n\nEpoch 00469: val_acc did not improve from 0.72480\nEpoch 470/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1597 - acc: 0.7721 - val_loss: 0.2623 - val_acc: 0.7088\n\nEpoch 00470: val_acc did not improve from 0.72480\nEpoch 471/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1581 - acc: 0.7736 - val_loss: 0.2569 - val_acc: 0.7100\n\nEpoch 00471: val_acc did not improve from 0.72480\nEpoch 472/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1581 - acc: 0.7735 - val_loss: 0.2552 - val_acc: 0.7104\n\nEpoch 00472: val_acc did not improve from 0.72480\nEpoch 473/500\n7500/7500 
[==============================] - 2s 273us/step - loss: 0.1590 - acc: 0.7729 - val_loss: 0.2572 - val_acc: 0.7100\n\nEpoch 00473: val_acc did not improve from 0.72480\nEpoch 474/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1601 - acc: 0.7700 - val_loss: 0.2568 - val_acc: 0.7112\n\nEpoch 00474: val_acc did not improve from 0.72480\nEpoch 475/500\n7500/7500 [==============================] - 2s 275us/step - loss: 0.1586 - acc: 0.7709 - val_loss: 0.2591 - val_acc: 0.7092\n\nEpoch 00475: val_acc did not improve from 0.72480\nEpoch 476/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1583 - acc: 0.7753 - val_loss: 0.2535 - val_acc: 0.7120\n\nEpoch 00476: val_acc did not improve from 0.72480\nEpoch 477/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1580 - acc: 0.7771 - val_loss: 0.2562 - val_acc: 0.7104\n\nEpoch 00477: val_acc did not improve from 0.72480\nEpoch 478/500\n7500/7500 [==============================] - 2s 267us/step - loss: 0.1569 - acc: 0.7776 - val_loss: 0.2549 - val_acc: 0.7100\n\nEpoch 00478: val_acc did not improve from 0.72480\nEpoch 479/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1564 - acc: 0.7753 - val_loss: 0.2558 - val_acc: 0.7116\n\nEpoch 00479: val_acc did not improve from 0.72480\nEpoch 480/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1580 - acc: 0.7707 - val_loss: 0.2524 - val_acc: 0.7124\n\nEpoch 00480: val_acc did not improve from 0.72480\nEpoch 481/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1569 - acc: 0.7760 - val_loss: 0.2564 - val_acc: 0.7092\n\nEpoch 00481: val_acc did not improve from 0.72480\nEpoch 482/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1564 - acc: 0.7781 - val_loss: 0.2499 - val_acc: 0.7120\n\nEpoch 00482: val_acc did not improve from 0.72480\nEpoch 483/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1579 - acc: 0.7723 - val_loss: 0.2533 - val_acc: 0.7092\n\nEpoch 00483: val_acc did not improve from 0.72480\nEpoch 484/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1563 - acc: 0.7764 - val_loss: 0.2541 - val_acc: 0.7124\n\nEpoch 00484: val_acc did not improve from 0.72480\nEpoch 485/500\n7500/7500 [==============================] - 2s 271us/step - loss: 0.1564 - acc: 0.7785 - val_loss: 0.2530 - val_acc: 0.7140\n\nEpoch 00485: val_acc did not improve from 0.72480\nEpoch 486/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1558 - acc: 0.7748 - val_loss: 0.2498 - val_acc: 0.7120\n\nEpoch 00486: val_acc did not improve from 0.72480\nEpoch 487/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1565 - acc: 0.7779 - val_loss: 0.2520 - val_acc: 0.7132\n\nEpoch 00487: val_acc did not improve from 0.72480\nEpoch 488/500\n7500/7500 [==============================] - 2s 272us/step - loss: 0.1560 - acc: 0.7765 - val_loss: 0.2504 - val_acc: 0.7124\n\nEpoch 00488: val_acc did not improve from 0.72480\nEpoch 489/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1552 - acc: 0.7769 - val_loss: 0.2523 - val_acc: 0.7120\n\nEpoch 00489: val_acc did not improve from 0.72480\nEpoch 490/500\n7500/7500 [==============================] - 2s 275us/step - loss: 0.1555 - acc: 0.7765 - val_loss: 0.2506 - val_acc: 0.7112\n\nEpoch 00490: val_acc did not improve from 0.72480\nEpoch 491/500\n7500/7500 [==============================] - 2s 269us/step - loss: 
0.1563 - acc: 0.7749 - val_loss: 0.2520 - val_acc: 0.7120\n\nEpoch 00491: val_acc did not improve from 0.72480\nEpoch 492/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1571 - acc: 0.7737 - val_loss: 0.2518 - val_acc: 0.7104\n\nEpoch 00492: val_acc did not improve from 0.72480\nEpoch 493/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1554 - acc: 0.7785 - val_loss: 0.2530 - val_acc: 0.7128\n\nEpoch 00493: val_acc did not improve from 0.72480\nEpoch 494/500\n7500/7500 [==============================] - 2s 274us/step - loss: 0.1544 - acc: 0.7799 - val_loss: 0.2570 - val_acc: 0.7104\n\nEpoch 00494: val_acc did not improve from 0.72480\nEpoch 495/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1560 - acc: 0.7768 - val_loss: 0.2531 - val_acc: 0.7116\n\nEpoch 00495: val_acc did not improve from 0.72480\nEpoch 496/500\n7500/7500 [==============================] - 2s 269us/step - loss: 0.1551 - acc: 0.7784 - val_loss: 0.2563 - val_acc: 0.7096\n\nEpoch 00496: val_acc did not improve from 0.72480\nEpoch 497/500\n7500/7500 [==============================] - 2s 268us/step - loss: 0.1568 - acc: 0.7759 - val_loss: 0.2539 - val_acc: 0.7120\n\nEpoch 00497: val_acc did not improve from 0.72480\nEpoch 498/500\n7500/7500 [==============================] - 2s 267us/step - loss: 0.1554 - acc: 0.7765 - val_loss: 0.2509 - val_acc: 0.7124\n\nEpoch 00498: val_acc did not improve from 0.72480\nEpoch 499/500\n7500/7500 [==============================] - 2s 273us/step - loss: 0.1536 - acc: 0.7797 - val_loss: 0.2508 - val_acc: 0.7120\n\nEpoch 00499: val_acc did not improve from 0.72480\nEpoch 500/500\n7500/7500 [==============================] - 2s 270us/step - loss: 0.1539 - acc: 0.7824 - val_loss: 0.2529 - val_acc: 0.7100\n\nEpoch 00500: val_acc did not improve from 0.72480\nacc: 72.48%\n" ] ], [ [ "### Build the 2-dimensional LSTM model", "_____no_output_____" ], [ "Our data has only one output value per sequence, which means there is effectively only one time step in the previous model; if we delete that redundant dimension and instead treat each position in the sequence as a time step, we can build a 2-dimensional LSTM model.", "_____no_output_____" ], [ "#### Load the data again", "_____no_output_____" ] ], [ [ "X_padded, y_scaled, abs_max_el = encode.encode_sequences_with_method(sample_path, method='One-Hot', scale_els=scale_els)\nnum_seqs, max_sequence_len = organize.get_num_and_len_of_seqs_from_file(sample_path)\ntest_size = 0.25\nX_train, X_test, y_train, y_test = train_test_split(X_padded, y_scaled, test_size=test_size)", "_____no_output_____" ] ], [ [ "#### Build up the model", "_____no_output_____" ] ], [ [ "# Define the model parameters\nbatch_size = int(len(y_scaled) * 0.01) # no bigger than 1 % of data\nepochs = 50\ndropout = 0.3\nlearning_rate = 0.01\n\n# Define the checkpointer to allow saving of models\nmodel_type = 'lstm_sequential_2d_onehot'\nsave_path = SAVE_DIR + model_type + '.hdf5'\ncheckpointer = ModelCheckpoint(monitor='val_acc',\n                               filepath=save_path,\n                               verbose=1,\n                               save_best_only=True)\n\n# Define the model\nmodel = Sequential()\n\n# Build up the layers\n# input_shape = (time steps, features): each sequence is fed as max_sequence_len\n# one-hot vectors of length 5, so no extra singleton time dimension is needed\nmodel.add(LSTM(100, input_shape=(int(max_sequence_len), 5)))\nmodel.add(Dropout(dropout))\nmodel.add(Dense(50, activation='sigmoid'))\n# model.add(Dense(25, activation='sigmoid'))\n# model.add(Dense(12, activation='sigmoid'))\n# model.add(Dense(6, activation='sigmoid'))\n# model.add(Dense(3, activation='sigmoid'))\nmodel.add(Dense(1, activation='sigmoid'))\n\nmodel.compile(loss='mse',\n              optimizer='rmsprop',\n
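              # Note: no learning rate is passed to 'rmsprop', so Keras uses the optimizer's\n              # default value; the learning_rate variable defined above is not used by this call.\n              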
metrics=['accuracy'])\nprint(model.summary())", "WARNING:tensorflow:From C:\\Users\\Lisboa\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\framework\\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.\nWARNING:tensorflow:From C:\\Users\\Lisboa\\Anaconda3\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nlstm_1 (LSTM) (None, 100) 42400 \n_________________________________________________________________\ndropout_1 (Dropout) (None, 100) 0 \n_________________________________________________________________\ndense_1 (Dense) (None, 50) 5050 \n_________________________________________________________________\ndense_2 (Dense) (None, 1) 51 \n=================================================================\nTotal params: 47,501\nTrainable params: 47,501\nNon-trainable params: 0\n_________________________________________________________________\nNone\n" ] ], [ [ "### Fit and Evaluate the model", "_____no_output_____" ] ], [ [ "# Fit\nhistory = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs,verbose=1,\n validation_data=(X_test, y_test), callbacks=[checkpointer])\n\n\n# Evaluate\nscore = max(history.history['val_acc'])\nprint(\"%s: %.2f%%\" % (model.metrics_names[1], score*100))\nplt = construct.plot_results(history.history)\nplt.show()", "WARNING:tensorflow:From C:\\Users\\Lisboa\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\ops\\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.cast instead.\nTrain on 7500 samples, validate on 2500 samples\nEpoch 1/500\n7500/7500 [==============================] - 6s 855us/step - loss: 0.2107 - acc: 0.6719 - val_loss: 0.1957 - val_acc: 0.7008\n\nEpoch 00001: val_acc improved from -inf to 0.70080, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 2/500\n7500/7500 [==============================] - 6s 755us/step - loss: 0.1912 - acc: 0.7187 - val_loss: 0.1911 - val_acc: 0.7304\n\nEpoch 00002: val_acc improved from 0.70080 to 0.73040, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 3/500\n7500/7500 [==============================] - 6s 735us/step - loss: 0.1859 - acc: 0.7241 - val_loss: 0.1872 - val_acc: 0.7116\n\nEpoch 00003: val_acc did not improve from 0.73040\nEpoch 4/500\n7500/7500 [==============================] - 5s 730us/step - loss: 0.1807 - acc: 0.7387 - val_loss: 0.1804 - val_acc: 0.7344\n\nEpoch 00004: val_acc improved from 0.73040 to 0.73440, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 5/500\n7500/7500 [==============================] - 5s 710us/step - loss: 0.1771 - acc: 0.7419 - val_loss: 0.1632 - val_acc: 0.7628\n\nEpoch 00005: val_acc improved from 0.73440 to 0.76280, saving model to 
C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 6/500\n7500/7500 [==============================] - 5s 685us/step - loss: 0.1732 - acc: 0.7492 - val_loss: 0.1672 - val_acc: 0.7528\n\nEpoch 00006: val_acc did not improve from 0.76280\nEpoch 7/500\n7500/7500 [==============================] - 5s 691us/step - loss: 0.1692 - acc: 0.7588 - val_loss: 0.1605 - val_acc: 0.7716\n\nEpoch 00007: val_acc improved from 0.76280 to 0.77160, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 8/500\n7500/7500 [==============================] - 5s 680us/step - loss: 0.1668 - acc: 0.7659 - val_loss: 0.1562 - val_acc: 0.7824\n\nEpoch 00008: val_acc improved from 0.77160 to 0.78240, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 9/500\n7500/7500 [==============================] - 5s 663us/step - loss: 0.1624 - acc: 0.7704 - val_loss: 0.1764 - val_acc: 0.7528\n\nEpoch 00009: val_acc did not improve from 0.78240\nEpoch 10/500\n7500/7500 [==============================] - 5s 660us/step - loss: 0.1589 - acc: 0.7749 - val_loss: 0.1555 - val_acc: 0.7796\n\nEpoch 00010: val_acc did not improve from 0.78240\nEpoch 11/500\n7500/7500 [==============================] - 5s 654us/step - loss: 0.1566 - acc: 0.7779 - val_loss: 0.1450 - val_acc: 0.7932\n\nEpoch 00011: val_acc improved from 0.78240 to 0.79320, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 12/500\n7500/7500 [==============================] - 5s 659us/step - loss: 0.1494 - acc: 0.7923 - val_loss: 0.1880 - val_acc: 0.7312\n\nEpoch 00012: val_acc did not improve from 0.79320\nEpoch 13/500\n7500/7500 [==============================] - 5s 650us/step - loss: 0.1491 - acc: 0.7901 - val_loss: 0.1461 - val_acc: 0.7980\n\nEpoch 00013: val_acc improved from 0.79320 to 0.79800, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 14/500\n7500/7500 [==============================] - 5s 652us/step - loss: 0.1450 - acc: 0.7987 - val_loss: 0.1365 - val_acc: 0.8124\n\nEpoch 00014: val_acc improved from 0.79800 to 0.81240, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 15/500\n7500/7500 [==============================] - 5s 661us/step - loss: 0.1455 - acc: 0.7984 - val_loss: 0.1490 - val_acc: 0.7948\n\nEpoch 00015: val_acc did not improve from 0.81240\nEpoch 16/500\n7500/7500 [==============================] - 5s 652us/step - loss: 0.1411 - acc: 0.8060 - val_loss: 0.1462 - val_acc: 0.7960\n\nEpoch 00016: val_acc did not improve from 0.81240\nEpoch 17/500\n7500/7500 [==============================] - 5s 645us/step - loss: 0.1394 - acc: 0.8064 - val_loss: 0.1446 - val_acc: 0.7908\n\nEpoch 00017: val_acc did not improve from 0.81240\nEpoch 18/500\n7500/7500 [==============================] - 5s 644us/step - loss: 0.1390 - acc: 0.8063 - val_loss: 0.1290 - val_acc: 0.8244\n\nEpoch 00018: val_acc improved from 0.81240 to 0.82440, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 19/500\n7500/7500 [==============================] - 5s 643us/step - loss: 0.1400 - acc: 0.8059 - val_loss: 0.1333 - val_acc: 
0.8128\n\nEpoch 00019: val_acc did not improve from 0.82440\nEpoch 20/500\n7500/7500 [==============================] - 5s 645us/step - loss: 0.1376 - acc: 0.8093 - val_loss: 0.1475 - val_acc: 0.7948\n\nEpoch 00020: val_acc did not improve from 0.82440\nEpoch 21/500\n7500/7500 [==============================] - 5s 642us/step - loss: 0.1347 - acc: 0.8155 - val_loss: 0.1319 - val_acc: 0.8136\n\nEpoch 00021: val_acc did not improve from 0.82440\nEpoch 22/500\n7500/7500 [==============================] - 5s 629us/step - loss: 0.1323 - acc: 0.8172 - val_loss: 0.1340 - val_acc: 0.8080\n\nEpoch 00022: val_acc did not improve from 0.82440\nEpoch 23/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.1306 - acc: 0.8225 - val_loss: 0.1524 - val_acc: 0.7848\n\nEpoch 00023: val_acc did not improve from 0.82440\nEpoch 24/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.1322 - acc: 0.8167 - val_loss: 0.1321 - val_acc: 0.8156\n\nEpoch 00024: val_acc did not improve from 0.82440\nEpoch 25/500\n7500/7500 [==============================] - 5s 632us/step - loss: 0.1312 - acc: 0.8196 - val_loss: 0.2003 - val_acc: 0.7308\n\nEpoch 00025: val_acc did not improve from 0.82440\nEpoch 26/500\n7500/7500 [==============================] - 5s 627us/step - loss: 0.1299 - acc: 0.8277 - val_loss: 0.1260 - val_acc: 0.8212\n\nEpoch 00026: val_acc did not improve from 0.82440\nEpoch 27/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.1287 - acc: 0.8293 - val_loss: 0.1286 - val_acc: 0.8188\n\nEpoch 00027: val_acc did not improve from 0.82440\nEpoch 28/500\n7500/7500 [==============================] - 5s 722us/step - loss: 0.1280 - acc: 0.8244 - val_loss: 0.1257 - val_acc: 0.8276\n\nEpoch 00028: val_acc improved from 0.82440 to 0.82760, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 29/500\n7500/7500 [==============================] - 6s 740us/step - loss: 0.1251 - acc: 0.8317 - val_loss: 0.1204 - val_acc: 0.8336\n\nEpoch 00029: val_acc improved from 0.82760 to 0.83360, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 30/500\n7500/7500 [==============================] - 5s 647us/step - loss: 0.1267 - acc: 0.8276 - val_loss: 0.1213 - val_acc: 0.8356\n\nEpoch 00030: val_acc improved from 0.83360 to 0.83560, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 31/500\n7500/7500 [==============================] - 5s 644us/step - loss: 0.1243 - acc: 0.8339 - val_loss: 0.1483 - val_acc: 0.7948\n\nEpoch 00031: val_acc did not improve from 0.83560\nEpoch 32/500\n7500/7500 [==============================] - 5s 703us/step - loss: 0.1248 - acc: 0.8328 - val_loss: 0.1208 - val_acc: 0.8328\n\nEpoch 00032: val_acc did not improve from 0.83560\nEpoch 33/500\n7500/7500 [==============================] - 5s 680us/step - loss: 0.1232 - acc: 0.8328 - val_loss: 0.1271 - val_acc: 0.8296\n\nEpoch 00033: val_acc did not improve from 0.83560\nEpoch 34/500\n7500/7500 [==============================] - 5s 647us/step - loss: 0.1227 - acc: 0.8347 - val_loss: 0.1294 - val_acc: 0.8224\n\nEpoch 00034: val_acc did not improve from 0.83560\nEpoch 35/500\n7500/7500 [==============================] - 5s 727us/step - loss: 0.1203 - acc: 0.8385 - val_loss: 0.1238 - val_acc: 0.8292\n\nEpoch 00035: val_acc did not improve from 
0.83560\nEpoch 36/500\n7500/7500 [==============================] - 5s 671us/step - loss: 0.1217 - acc: 0.8352 - val_loss: 0.1247 - val_acc: 0.8240\n\nEpoch 00036: val_acc did not improve from 0.83560\nEpoch 37/500\n7500/7500 [==============================] - 5s 710us/step - loss: 0.1201 - acc: 0.8377 - val_loss: 0.1198 - val_acc: 0.8352\n\nEpoch 00037: val_acc did not improve from 0.83560\nEpoch 38/500\n7500/7500 [==============================] - 5s 650us/step - loss: 0.1191 - acc: 0.8423 - val_loss: 0.1190 - val_acc: 0.8392\n\nEpoch 00038: val_acc improved from 0.83560 to 0.83920, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 39/500\n7500/7500 [==============================] - 5s 684us/step - loss: 0.1170 - acc: 0.8437 - val_loss: 0.1232 - val_acc: 0.8320\n\nEpoch 00039: val_acc did not improve from 0.83920\nEpoch 40/500\n7500/7500 [==============================] - 5s 671us/step - loss: 0.1166 - acc: 0.8481 - val_loss: 0.1167 - val_acc: 0.8416\n\nEpoch 00040: val_acc improved from 0.83920 to 0.84160, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 41/500\n7500/7500 [==============================] - 5s 678us/step - loss: 0.1155 - acc: 0.8457 - val_loss: 0.1204 - val_acc: 0.8340\n\nEpoch 00041: val_acc did not improve from 0.84160\nEpoch 42/500\n7500/7500 [==============================] - 5s 629us/step - loss: 0.1162 - acc: 0.8461 - val_loss: 0.1291 - val_acc: 0.8240\n\nEpoch 00042: val_acc did not improve from 0.84160\nEpoch 43/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.1144 - acc: 0.8484 - val_loss: 0.1208 - val_acc: 0.8344\n\nEpoch 00043: val_acc did not improve from 0.84160\nEpoch 44/500\n7500/7500 [==============================] - 5s 650us/step - loss: 0.1125 - acc: 0.8524 - val_loss: 0.1253 - val_acc: 0.8288\n\nEpoch 00044: val_acc did not improve from 0.84160\nEpoch 45/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.1136 - acc: 0.8492 - val_loss: 0.1170 - val_acc: 0.8400\n\nEpoch 00045: val_acc did not improve from 0.84160\nEpoch 46/500\n7500/7500 [==============================] - 5s 646us/step - loss: 0.1134 - acc: 0.8475 - val_loss: 0.1445 - val_acc: 0.7992\n\nEpoch 00046: val_acc did not improve from 0.84160\nEpoch 47/500\n7500/7500 [==============================] - 5s 653us/step - loss: 0.1100 - acc: 0.8556 - val_loss: 0.1169 - val_acc: 0.8420\n\nEpoch 00047: val_acc improved from 0.84160 to 0.84200, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 48/500\n7500/7500 [==============================] - 5s 642us/step - loss: 0.1105 - acc: 0.8520 - val_loss: 0.1244 - val_acc: 0.8284\n\nEpoch 00048: val_acc did not improve from 0.84200\nEpoch 49/500\n7500/7500 [==============================] - 5s 652us/step - loss: 0.1105 - acc: 0.8555 - val_loss: 0.1208 - val_acc: 0.8452\n\nEpoch 00049: val_acc improved from 0.84200 to 0.84520, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 50/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.1080 - acc: 0.8541 - val_loss: 0.1176 - val_acc: 0.8456\n\nEpoch 00050: val_acc improved from 0.84520 to 0.84560, saving model to 
C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 51/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.1077 - acc: 0.8592 - val_loss: 0.1267 - val_acc: 0.8288\n\nEpoch 00051: val_acc did not improve from 0.84560\nEpoch 52/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.1093 - acc: 0.8572 - val_loss: 0.1211 - val_acc: 0.8376\n\nEpoch 00052: val_acc did not improve from 0.84560\nEpoch 53/500\n7500/7500 [==============================] - 5s 692us/step - loss: 0.1069 - acc: 0.8597 - val_loss: 0.1179 - val_acc: 0.8460\n\nEpoch 00053: val_acc improved from 0.84560 to 0.84600, saving model to C:\\Users\\Lisboa\\011019\\ExpressYeaself/expressyeaself/models/lstm/saved_models/lstm_sequential_2d_onehot.hdf5\nEpoch 54/500\n7500/7500 [==============================] - 5s 677us/step - loss: 0.1059 - acc: 0.8583 - val_loss: 0.1441 - val_acc: 0.7984\n\nEpoch 00054: val_acc did not improve from 0.84600\nEpoch 55/500\n7500/7500 [==============================] - 5s 689us/step - loss: 0.1045 - acc: 0.8636 - val_loss: 0.1591 - val_acc: 0.7872\n\nEpoch 00055: val_acc did not improve from 0.84600\nEpoch 56/500\n7500/7500 [==============================] - 6s 734us/step - loss: 0.1026 - acc: 0.8647 - val_loss: 0.1213 - val_acc: 0.8372\n\nEpoch 00056: val_acc did not improve from 0.84600\nEpoch 57/500\n7500/7500 [==============================] - 6s 756us/step - loss: 0.1041 - acc: 0.8624 - val_loss: 0.1211 - val_acc: 0.8368\n\nEpoch 00057: val_acc did not improve from 0.84600\nEpoch 58/500\n7500/7500 [==============================] - 5s 650us/step - loss: 0.1030 - acc: 0.8643 - val_loss: 0.1216 - val_acc: 0.8368\n\nEpoch 00058: val_acc did not improve from 0.84600\nEpoch 59/500\n7500/7500 [==============================] - 5s 649us/step - loss: 0.1021 - acc: 0.8669 - val_loss: 0.1189 - val_acc: 0.8428\n\nEpoch 00059: val_acc did not improve from 0.84600\nEpoch 60/500\n7500/7500 [==============================] - 5s 645us/step - loss: 0.1012 - acc: 0.8687 - val_loss: 0.1474 - val_acc: 0.7988\n\nEpoch 00060: val_acc did not improve from 0.84600\nEpoch 61/500\n7500/7500 [==============================] - 5s 664us/step - loss: 0.0995 - acc: 0.8700 - val_loss: 0.1229 - val_acc: 0.8360\n\nEpoch 00061: val_acc did not improve from 0.84600\nEpoch 62/500\n7500/7500 [==============================] - 5s 694us/step - loss: 0.0993 - acc: 0.8717 - val_loss: 0.1248 - val_acc: 0.8384\n\nEpoch 00062: val_acc did not improve from 0.84600\nEpoch 63/500\n7500/7500 [==============================] - 5s 679us/step - loss: 0.0975 - acc: 0.8739 - val_loss: 0.1263 - val_acc: 0.8328\n\nEpoch 00063: val_acc did not improve from 0.84600\nEpoch 64/500\n7500/7500 [==============================] - 5s 722us/step - loss: 0.0949 - acc: 0.8765 - val_loss: 0.1365 - val_acc: 0.8124\n\nEpoch 00064: val_acc did not improve from 0.84600\nEpoch 65/500\n7500/7500 [==============================] - 5s 708us/step - loss: 0.0950 - acc: 0.8776 - val_loss: 0.1364 - val_acc: 0.8168\n\nEpoch 00065: val_acc did not improve from 0.84600\nEpoch 66/500\n7500/7500 [==============================] - 5s 695us/step - loss: 0.0961 - acc: 0.8755 - val_loss: 0.1274 - val_acc: 0.8344\n\nEpoch 00066: val_acc did not improve from 0.84600\nEpoch 67/500\n7500/7500 [==============================] - 5s 680us/step - loss: 0.0930 - acc: 0.8809 - val_loss: 0.1314 - val_acc: 0.8368\n\nEpoch 00067: val_acc did not improve from 0.84600\nEpoch 68/500\n7500/7500 
[==============================] - 5s 671us/step - loss: 0.0929 - acc: 0.8792 - val_loss: 0.1286 - val_acc: 0.8280\n\nEpoch 00068: val_acc did not improve from 0.84600\nEpoch 69/500\n7500/7500 [==============================] - 5s 670us/step - loss: 0.0906 - acc: 0.8833 - val_loss: 0.1342 - val_acc: 0.8288\n\nEpoch 00069: val_acc did not improve from 0.84600\nEpoch 70/500\n7500/7500 [==============================] - 5s 687us/step - loss: 0.0906 - acc: 0.8815 - val_loss: 0.1328 - val_acc: 0.8208\n\nEpoch 00070: val_acc did not improve from 0.84600\nEpoch 71/500\n7500/7500 [==============================] - 5s 662us/step - loss: 0.0884 - acc: 0.8863 - val_loss: 0.1300 - val_acc: 0.8316\n\nEpoch 00071: val_acc did not improve from 0.84600\nEpoch 72/500\n7500/7500 [==============================] - 5s 681us/step - loss: 0.0893 - acc: 0.8865 - val_loss: 0.1374 - val_acc: 0.8128\n\nEpoch 00072: val_acc did not improve from 0.84600\nEpoch 73/500\n7500/7500 [==============================] - 5s 669us/step - loss: 0.0865 - acc: 0.8905 - val_loss: 0.1324 - val_acc: 0.8168\n\nEpoch 00073: val_acc did not improve from 0.84600\nEpoch 74/500\n7500/7500 [==============================] - 5s 668us/step - loss: 0.0898 - acc: 0.8872 - val_loss: 0.1345 - val_acc: 0.8204\n\nEpoch 00074: val_acc did not improve from 0.84600\nEpoch 75/500\n7500/7500 [==============================] - 5s 662us/step - loss: 0.0844 - acc: 0.8956 - val_loss: 0.1428 - val_acc: 0.8180\n\nEpoch 00075: val_acc did not improve from 0.84600\nEpoch 76/500\n7500/7500 [==============================] - 5s 675us/step - loss: 0.0840 - acc: 0.8955 - val_loss: 0.1374 - val_acc: 0.8272\n\nEpoch 00076: val_acc did not improve from 0.84600\nEpoch 77/500\n7500/7500 [==============================] - 5s 684us/step - loss: 0.0818 - acc: 0.8959 - val_loss: 0.1405 - val_acc: 0.8212\n\nEpoch 00077: val_acc did not improve from 0.84600\nEpoch 78/500\n7500/7500 [==============================] - 5s 658us/step - loss: 0.0818 - acc: 0.8991 - val_loss: 0.1362 - val_acc: 0.8296\n\nEpoch 00078: val_acc did not improve from 0.84600\nEpoch 79/500\n7500/7500 [==============================] - 5s 667us/step - loss: 0.0821 - acc: 0.8969 - val_loss: 0.1396 - val_acc: 0.8240\n\nEpoch 00079: val_acc did not improve from 0.84600\nEpoch 80/500\n7500/7500 [==============================] - 5s 663us/step - loss: 0.0796 - acc: 0.9016 - val_loss: 0.1527 - val_acc: 0.8020\n\nEpoch 00080: val_acc did not improve from 0.84600\nEpoch 81/500\n7500/7500 [==============================] - 5s 673us/step - loss: 0.0820 - acc: 0.8963 - val_loss: 0.1492 - val_acc: 0.8048\n\nEpoch 00081: val_acc did not improve from 0.84600\nEpoch 82/500\n7500/7500 [==============================] - 5s 671us/step - loss: 0.0803 - acc: 0.9007 - val_loss: 0.1426 - val_acc: 0.8216\n\nEpoch 00082: val_acc did not improve from 0.84600\nEpoch 83/500\n7500/7500 [==============================] - 5s 666us/step - loss: 0.0764 - acc: 0.9063 - val_loss: 0.1358 - val_acc: 0.8252\n\nEpoch 00083: val_acc did not improve from 0.84600\nEpoch 84/500\n7500/7500 [==============================] - 5s 657us/step - loss: 0.0753 - acc: 0.9061 - val_loss: 0.1397 - val_acc: 0.8216\n\nEpoch 00084: val_acc did not improve from 0.84600\nEpoch 85/500\n7500/7500 [==============================] - 5s 665us/step - loss: 0.0737 - acc: 0.9079 - val_loss: 0.1450 - val_acc: 0.8156\n\nEpoch 00085: val_acc did not improve from 0.84600\nEpoch 86/500\n7500/7500 [==============================] - 5s 673us/step - loss: 0.0732 - acc: 0.9096 - 
val_loss: 0.1459 - val_acc: 0.8080\n\nEpoch 00086: val_acc did not improve from 0.84600\nEpoch 87/500\n7500/7500 [==============================] - 5s 664us/step - loss: 0.0730 - acc: 0.9088 - val_loss: 0.1535 - val_acc: 0.8108\n\nEpoch 00087: val_acc did not improve from 0.84600\nEpoch 88/500\n7500/7500 [==============================] - 5s 659us/step - loss: 0.0719 - acc: 0.9117 - val_loss: 0.1426 - val_acc: 0.8140\n\nEpoch 00088: val_acc did not improve from 0.84600\nEpoch 89/500\n7500/7500 [==============================] - 5s 662us/step - loss: 0.0696 - acc: 0.9159 - val_loss: 0.1416 - val_acc: 0.8208\n\nEpoch 00089: val_acc did not improve from 0.84600\nEpoch 90/500\n7500/7500 [==============================] - 5s 661us/step - loss: 0.0701 - acc: 0.9136 - val_loss: 0.1448 - val_acc: 0.8200\n\nEpoch 00090: val_acc did not improve from 0.84600\nEpoch 91/500\n7500/7500 [==============================] - 5s 651us/step - loss: 0.0702 - acc: 0.9128 - val_loss: 0.1534 - val_acc: 0.8160\n\nEpoch 00091: val_acc did not improve from 0.84600\nEpoch 92/500\n7500/7500 [==============================] - 5s 649us/step - loss: 0.0663 - acc: 0.9192 - val_loss: 0.1568 - val_acc: 0.8028\n\nEpoch 00092: val_acc did not improve from 0.84600\nEpoch 93/500\n7500/7500 [==============================] - 5s 647us/step - loss: 0.0660 - acc: 0.9175 - val_loss: 0.1468 - val_acc: 0.8176\n\nEpoch 00093: val_acc did not improve from 0.84600\nEpoch 94/500\n7500/7500 [==============================] - 5s 648us/step - loss: 0.0643 - acc: 0.9219 - val_loss: 0.1532 - val_acc: 0.8096\n\nEpoch 00094: val_acc did not improve from 0.84600\nEpoch 95/500\n7500/7500 [==============================] - 5s 657us/step - loss: 0.0652 - acc: 0.9207 - val_loss: 0.1494 - val_acc: 0.8100\n\nEpoch 00095: val_acc did not improve from 0.84600\nEpoch 96/500\n7500/7500 [==============================] - 5s 656us/step - loss: 0.0634 - acc: 0.9224 - val_loss: 0.1468 - val_acc: 0.8192\n\nEpoch 00096: val_acc did not improve from 0.84600\nEpoch 97/500\n7500/7500 [==============================] - 5s 649us/step - loss: 0.0642 - acc: 0.9215 - val_loss: 0.1455 - val_acc: 0.8192\n\nEpoch 00097: val_acc did not improve from 0.84600\nEpoch 98/500\n7500/7500 [==============================] - 5s 651us/step - loss: 0.0608 - acc: 0.9260 - val_loss: 0.1606 - val_acc: 0.7996\n\nEpoch 00098: val_acc did not improve from 0.84600\nEpoch 99/500\n7500/7500 [==============================] - 5s 649us/step - loss: 0.0608 - acc: 0.9275 - val_loss: 0.1490 - val_acc: 0.8120\n\nEpoch 00099: val_acc did not improve from 0.84600\nEpoch 100/500\n7500/7500 [==============================] - 5s 652us/step - loss: 0.0606 - acc: 0.9252 - val_loss: 0.1508 - val_acc: 0.8160\n\nEpoch 00100: val_acc did not improve from 0.84600\nEpoch 101/500\n7500/7500 [==============================] - 5s 653us/step - loss: 0.0591 - acc: 0.9281 - val_loss: 0.1485 - val_acc: 0.8136\n\nEpoch 00101: val_acc did not improve from 0.84600\nEpoch 102/500\n7500/7500 [==============================] - 5s 649us/step - loss: 0.0569 - acc: 0.9316 - val_loss: 0.1509 - val_acc: 0.8148\n\nEpoch 00102: val_acc did not improve from 0.84600\nEpoch 103/500\n7500/7500 [==============================] - 5s 651us/step - loss: 0.0566 - acc: 0.9332 - val_loss: 0.1566 - val_acc: 0.8100\n\nEpoch 00103: val_acc did not improve from 0.84600\nEpoch 104/500\n7500/7500 [==============================] - 5s 648us/step - loss: 0.0563 - acc: 0.9309 - val_loss: 0.1525 - val_acc: 0.8104\n\nEpoch 00104: val_acc did not improve 
from 0.84600\nEpoch 105/500\n7500/7500 [==============================] - 5s 652us/step - loss: 0.0558 - acc: 0.9327 - val_loss: 0.1655 - val_acc: 0.8024\n\nEpoch 00105: val_acc did not improve from 0.84600\nEpoch 106/500\n7500/7500 [==============================] - 5s 649us/step - loss: 0.0552 - acc: 0.9352 - val_loss: 0.1774 - val_acc: 0.7936\n\nEpoch 00106: val_acc did not improve from 0.84600\nEpoch 107/500\n7500/7500 [==============================] - 5s 654us/step - loss: 0.0545 - acc: 0.9345 - val_loss: 0.1568 - val_acc: 0.8100\n\nEpoch 00107: val_acc did not improve from 0.84600\nEpoch 108/500\n7500/7500 [==============================] - 5s 652us/step - loss: 0.0552 - acc: 0.9336 - val_loss: 0.1540 - val_acc: 0.8124\n\nEpoch 00108: val_acc did not improve from 0.84600\nEpoch 109/500\n7500/7500 [==============================] - 5s 669us/step - loss: 0.0519 - acc: 0.9384 - val_loss: 0.1626 - val_acc: 0.8096\n\nEpoch 00109: val_acc did not improve from 0.84600\nEpoch 110/500\n7500/7500 [==============================] - 5s 669us/step - loss: 0.0517 - acc: 0.9388 - val_loss: 0.1588 - val_acc: 0.8124\n\nEpoch 00110: val_acc did not improve from 0.84600\nEpoch 111/500\n7500/7500 [==============================] - 5s 670us/step - loss: 0.0518 - acc: 0.9385 - val_loss: 0.1669 - val_acc: 0.8028\n\nEpoch 00111: val_acc did not improve from 0.84600\nEpoch 112/500\n7500/7500 [==============================] - 5s 659us/step - loss: 0.0523 - acc: 0.9375 - val_loss: 0.1603 - val_acc: 0.8080\n\nEpoch 00112: val_acc did not improve from 0.84600\nEpoch 113/500\n7500/7500 [==============================] - 5s 645us/step - loss: 0.0530 - acc: 0.9372 - val_loss: 0.1615 - val_acc: 0.8044\n\nEpoch 00113: val_acc did not improve from 0.84600\nEpoch 114/500\n7500/7500 [==============================] - 5s 644us/step - loss: 0.0487 - acc: 0.9429 - val_loss: 0.1583 - val_acc: 0.8104\n\nEpoch 00114: val_acc did not improve from 0.84600\nEpoch 115/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0499 - acc: 0.9425 - val_loss: 0.1616 - val_acc: 0.8052\n\nEpoch 00115: val_acc did not improve from 0.84600\nEpoch 116/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0480 - acc: 0.9453 - val_loss: 0.1607 - val_acc: 0.8104\n\nEpoch 00116: val_acc did not improve from 0.84600\nEpoch 117/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.0478 - acc: 0.9444 - val_loss: 0.1741 - val_acc: 0.7844\n\nEpoch 00117: val_acc did not improve from 0.84600\nEpoch 118/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0468 - acc: 0.9451 - val_loss: 0.1607 - val_acc: 0.8064\n\nEpoch 00118: val_acc did not improve from 0.84600\nEpoch 119/500\n7500/7500 [==============================] - 5s 656us/step - loss: 0.0462 - acc: 0.9461 - val_loss: 0.1635 - val_acc: 0.8012\n\nEpoch 00119: val_acc did not improve from 0.84600\nEpoch 120/500\n7500/7500 [==============================] - 5s 632us/step - loss: 0.0450 - acc: 0.9489 - val_loss: 0.1610 - val_acc: 0.8076\n\nEpoch 00120: val_acc did not improve from 0.84600\nEpoch 121/500\n7500/7500 [==============================] - 5s 660us/step - loss: 0.0437 - acc: 0.9501 - val_loss: 0.1553 - val_acc: 0.8156\n\nEpoch 00121: val_acc did not improve from 0.84600\nEpoch 122/500\n7500/7500 [==============================] - 5s 647us/step - loss: 0.0436 - acc: 0.9495 - val_loss: 0.1667 - val_acc: 0.8024\n\nEpoch 00122: val_acc did not improve from 0.84600\nEpoch 123/500\n7500/7500 
[==============================] - 5s 709us/step - loss: 0.0426 - acc: 0.9504 - val_loss: 0.1654 - val_acc: 0.8068\n\nEpoch 00123: val_acc did not improve from 0.84600\nEpoch 124/500\n7500/7500 [==============================] - 5s 700us/step - loss: 0.0441 - acc: 0.9483 - val_loss: 0.1639 - val_acc: 0.8084\n\nEpoch 00124: val_acc did not improve from 0.84600\nEpoch 125/500\n7500/7500 [==============================] - 5s 688us/step - loss: 0.0425 - acc: 0.9515 - val_loss: 0.1652 - val_acc: 0.8084\n\nEpoch 00125: val_acc did not improve from 0.84600\nEpoch 126/500\n7500/7500 [==============================] - 5s 652us/step - loss: 0.0411 - acc: 0.9541 - val_loss: 0.1615 - val_acc: 0.8092\n\nEpoch 00126: val_acc did not improve from 0.84600\nEpoch 127/500\n7500/7500 [==============================] - 5s 649us/step - loss: 0.0423 - acc: 0.9512 - val_loss: 0.1697 - val_acc: 0.8028\n\nEpoch 00127: val_acc did not improve from 0.84600\nEpoch 128/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0399 - acc: 0.9557 - val_loss: 0.1695 - val_acc: 0.8076\n\nEpoch 00128: val_acc did not improve from 0.84600\nEpoch 129/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0395 - acc: 0.9544 - val_loss: 0.1667 - val_acc: 0.8024\n\nEpoch 00129: val_acc did not improve from 0.84600\nEpoch 130/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0402 - acc: 0.9541 - val_loss: 0.1760 - val_acc: 0.8012\n\nEpoch 00130: val_acc did not improve from 0.84600\nEpoch 131/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0384 - acc: 0.9581 - val_loss: 0.1634 - val_acc: 0.8100\n\nEpoch 00131: val_acc did not improve from 0.84600\nEpoch 132/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0389 - acc: 0.9555 - val_loss: 0.1680 - val_acc: 0.8056\n\nEpoch 00132: val_acc did not improve from 0.84600\nEpoch 133/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0424 - acc: 0.9517 - val_loss: 0.1702 - val_acc: 0.8060\n\nEpoch 00133: val_acc did not improve from 0.84600\nEpoch 134/500\n7500/7500 [==============================] - 5s 642us/step - loss: 0.0365 - acc: 0.9592 - val_loss: 0.1714 - val_acc: 0.8052\n\nEpoch 00134: val_acc did not improve from 0.84600\nEpoch 135/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0369 - acc: 0.9592 - val_loss: 0.1654 - val_acc: 0.8116\n\nEpoch 00135: val_acc did not improve from 0.84600\nEpoch 136/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0382 - acc: 0.9577 - val_loss: 0.1785 - val_acc: 0.7944\n\nEpoch 00136: val_acc did not improve from 0.84600\nEpoch 137/500\n7500/7500 [==============================] - 5s 632us/step - loss: 0.0396 - acc: 0.9556 - val_loss: 0.1672 - val_acc: 0.8064\n\nEpoch 00137: val_acc did not improve from 0.84600\nEpoch 138/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0345 - acc: 0.9624 - val_loss: 0.1655 - val_acc: 0.8136\n\nEpoch 00138: val_acc did not improve from 0.84600\nEpoch 139/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0370 - acc: 0.9568 - val_loss: 0.1703 - val_acc: 0.7988\n\nEpoch 00139: val_acc did not improve from 0.84600\nEpoch 140/500\n7500/7500 [==============================] - 5s 632us/step - loss: 0.0376 - acc: 0.9580 - val_loss: 0.1716 - val_acc: 0.8016\n\nEpoch 00140: val_acc did not improve from 0.84600\nEpoch 141/500\n7500/7500 [==============================] - 5s 636us/step - loss: 
0.0350 - acc: 0.9603 - val_loss: 0.1798 - val_acc: 0.7952\n\nEpoch 00141: val_acc did not improve from 0.84600\nEpoch 142/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0355 - acc: 0.9603 - val_loss: 0.1731 - val_acc: 0.8040\n\nEpoch 00142: val_acc did not improve from 0.84600\nEpoch 143/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0352 - acc: 0.9592 - val_loss: 0.1872 - val_acc: 0.7888\n\nEpoch 00143: val_acc did not improve from 0.84600\nEpoch 144/500\n7500/7500 [==============================] - 5s 701us/step - loss: 0.0374 - acc: 0.9569 - val_loss: 0.1749 - val_acc: 0.8000\n\nEpoch 00144: val_acc did not improve from 0.84600\nEpoch 145/500\n7500/7500 [==============================] - 5s 678us/step - loss: 0.0334 - acc: 0.9633 - val_loss: 0.1763 - val_acc: 0.8040\n\nEpoch 00145: val_acc did not improve from 0.84600\nEpoch 146/500\n7500/7500 [==============================] - 5s 666us/step - loss: 0.0341 - acc: 0.9633 - val_loss: 0.1757 - val_acc: 0.8080\n\nEpoch 00146: val_acc did not improve from 0.84600\nEpoch 147/500\n7500/7500 [==============================] - 5s 680us/step - loss: 0.0383 - acc: 0.9577 - val_loss: 0.1810 - val_acc: 0.7952\n\nEpoch 00147: val_acc did not improve from 0.84600\nEpoch 148/500\n7500/7500 [==============================] - 5s 673us/step - loss: 0.0365 - acc: 0.9588 - val_loss: 0.1694 - val_acc: 0.8052\n\nEpoch 00148: val_acc did not improve from 0.84600\nEpoch 149/500\n7500/7500 [==============================] - 5s 719us/step - loss: 0.0342 - acc: 0.9615 - val_loss: 0.1848 - val_acc: 0.7908\n\nEpoch 00149: val_acc did not improve from 0.84600\nEpoch 150/500\n7500/7500 [==============================] - 6s 780us/step - loss: 0.0345 - acc: 0.9613 - val_loss: 0.1774 - val_acc: 0.7992\n\nEpoch 00150: val_acc did not improve from 0.84600\nEpoch 151/500\n7500/7500 [==============================] - 6s 791us/step - loss: 0.0328 - acc: 0.9635 - val_loss: 0.1721 - val_acc: 0.8028\n\nEpoch 00151: val_acc did not improve from 0.84600\nEpoch 152/500\n7500/7500 [==============================] - 5s 706us/step - loss: 0.0364 - acc: 0.9596 - val_loss: 0.1732 - val_acc: 0.8040\n\nEpoch 00152: val_acc did not improve from 0.84600\nEpoch 153/500\n7500/7500 [==============================] - 5s 715us/step - loss: 0.0332 - acc: 0.9631 - val_loss: 0.1768 - val_acc: 0.7980\n\nEpoch 00153: val_acc did not improve from 0.84600\nEpoch 154/500\n7500/7500 [==============================] - 6s 830us/step - loss: 0.0323 - acc: 0.9644 - val_loss: 0.1791 - val_acc: 0.7976\n\nEpoch 00154: val_acc did not improve from 0.84600\nEpoch 155/500\n7500/7500 [==============================] - 7s 883us/step - loss: 0.0319 - acc: 0.9647 - val_loss: 0.1761 - val_acc: 0.8044\n\nEpoch 00155: val_acc did not improve from 0.84600\nEpoch 156/500\n7500/7500 [==============================] - 5s 691us/step - loss: 0.0332 - acc: 0.9640 - val_loss: 0.1730 - val_acc: 0.8048\n\nEpoch 00156: val_acc did not improve from 0.84600\nEpoch 157/500\n7500/7500 [==============================] - 5s 692us/step - loss: 0.0319 - acc: 0.9644 - val_loss: 0.1709 - val_acc: 0.8080\n\nEpoch 00157: val_acc did not improve from 0.84600\nEpoch 158/500\n7500/7500 [==============================] - 6s 808us/step - loss: 0.0309 - acc: 0.9660 - val_loss: 0.1719 - val_acc: 0.8048\n\nEpoch 00158: val_acc did not improve from 0.84600\nEpoch 159/500\n7500/7500 [==============================] - 6s 787us/step - loss: 0.0331 - acc: 0.9644 - val_loss: 0.1746 - val_acc: 
0.8040\n\nEpoch 00159: val_acc did not improve from 0.84600\nEpoch 160/500\n7500/7500 [==============================] - 6s 735us/step - loss: 0.0328 - acc: 0.9643 - val_loss: 0.1740 - val_acc: 0.8032\n\nEpoch 00160: val_acc did not improve from 0.84600\nEpoch 161/500\n7500/7500 [==============================] - 7s 872us/step - loss: 0.0296 - acc: 0.9673 - val_loss: 0.1822 - val_acc: 0.7980\n\nEpoch 00161: val_acc did not improve from 0.84600\nEpoch 162/500\n7500/7500 [==============================] - 5s 711us/step - loss: 0.0327 - acc: 0.9637 - val_loss: 0.1735 - val_acc: 0.8024\n\nEpoch 00162: val_acc did not improve from 0.84600\nEpoch 163/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.0307 - acc: 0.9664 - val_loss: 0.1791 - val_acc: 0.7968\n\nEpoch 00163: val_acc did not improve from 0.84600\nEpoch 164/500\n7500/7500 [==============================] - 5s 642us/step - loss: 0.0313 - acc: 0.9643 - val_loss: 0.1734 - val_acc: 0.8080\n\nEpoch 00164: val_acc did not improve from 0.84600\nEpoch 165/500\n7500/7500 [==============================] - 5s 642us/step - loss: 0.0314 - acc: 0.9655 - val_loss: 0.1837 - val_acc: 0.7940\n\nEpoch 00165: val_acc did not improve from 0.84600\nEpoch 166/500\n7500/7500 [==============================] - 5s 642us/step - loss: 0.0336 - acc: 0.9621 - val_loss: 0.1715 - val_acc: 0.8104\n\nEpoch 00166: val_acc did not improve from 0.84600\nEpoch 167/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0286 - acc: 0.9679 - val_loss: 0.1726 - val_acc: 0.8064\n\nEpoch 00167: val_acc did not improve from 0.84600\nEpoch 168/500\n7500/7500 [==============================] - 5s 653us/step - loss: 0.0305 - acc: 0.9667 - val_loss: 0.1762 - val_acc: 0.8020\n\nEpoch 00168: val_acc did not improve from 0.84600\nEpoch 169/500\n7500/7500 [==============================] - 5s 656us/step - loss: 0.0285 - acc: 0.9692 - val_loss: 0.1801 - val_acc: 0.8004\n\nEpoch 00169: val_acc did not improve from 0.84600\nEpoch 170/500\n7500/7500 [==============================] - 5s 648us/step - loss: 0.0296 - acc: 0.9677 - val_loss: 0.1774 - val_acc: 0.8028\n\nEpoch 00170: val_acc did not improve from 0.84600\nEpoch 171/500\n7500/7500 [==============================] - 5s 648us/step - loss: 0.0311 - acc: 0.9659 - val_loss: 0.1814 - val_acc: 0.7964\n\nEpoch 00171: val_acc did not improve from 0.84600\nEpoch 172/500\n7500/7500 [==============================] - 5s 658us/step - loss: 0.0289 - acc: 0.9689 - val_loss: 0.1724 - val_acc: 0.8040\n\nEpoch 00172: val_acc did not improve from 0.84600\nEpoch 173/500\n7500/7500 [==============================] - 5s 651us/step - loss: 0.0290 - acc: 0.9691 - val_loss: 0.1689 - val_acc: 0.8124\n\nEpoch 00173: val_acc did not improve from 0.84600\nEpoch 174/500\n7500/7500 [==============================] - 5s 644us/step - loss: 0.0301 - acc: 0.9679 - val_loss: 0.1758 - val_acc: 0.8016\n\nEpoch 00174: val_acc did not improve from 0.84600\nEpoch 175/500\n7500/7500 [==============================] - 5s 680us/step - loss: 0.0278 - acc: 0.9693 - val_loss: 0.1801 - val_acc: 0.8028\n\nEpoch 00175: val_acc did not improve from 0.84600\nEpoch 176/500\n7500/7500 [==============================] - 5s 656us/step - loss: 0.0286 - acc: 0.9699 - val_loss: 0.1800 - val_acc: 0.7980\n\nEpoch 00176: val_acc did not improve from 0.84600\nEpoch 177/500\n7500/7500 [==============================] - 5s 656us/step - loss: 0.0278 - acc: 0.9691 - val_loss: 0.1769 - val_acc: 0.8012\n\nEpoch 00177: val_acc did not improve from 
0.84600\nEpoch 178/500\n7500/7500 [==============================] - 5s 663us/step - loss: 0.0281 - acc: 0.9692 - val_loss: 0.1892 - val_acc: 0.7912\n\nEpoch 00178: val_acc did not improve from 0.84600\nEpoch 179/500\n7500/7500 [==============================] - 5s 696us/step - loss: 0.0270 - acc: 0.9700 - val_loss: 0.1791 - val_acc: 0.7996\n\nEpoch 00179: val_acc did not improve from 0.84600\nEpoch 180/500\n7500/7500 [==============================] - 5s 658us/step - loss: 0.0275 - acc: 0.9704 - val_loss: 0.1872 - val_acc: 0.7904\n\nEpoch 00180: val_acc did not improve from 0.84600\nEpoch 181/500\n7500/7500 [==============================] - 5s 666us/step - loss: 0.0282 - acc: 0.9696 - val_loss: 0.1777 - val_acc: 0.8064\n\nEpoch 00181: val_acc did not improve from 0.84600\nEpoch 182/500\n7500/7500 [==============================] - 5s 680us/step - loss: 0.0280 - acc: 0.9692 - val_loss: 0.1793 - val_acc: 0.8004\n\nEpoch 00182: val_acc did not improve from 0.84600\nEpoch 183/500\n7500/7500 [==============================] - 5s 669us/step - loss: 0.0271 - acc: 0.9699 - val_loss: 0.1815 - val_acc: 0.8012\n\nEpoch 00183: val_acc did not improve from 0.84600\nEpoch 184/500\n7500/7500 [==============================] - 5s 649us/step - loss: 0.0288 - acc: 0.9691 - val_loss: 0.1801 - val_acc: 0.8028\n\nEpoch 00184: val_acc did not improve from 0.84600\nEpoch 185/500\n7500/7500 [==============================] - 5s 651us/step - loss: 0.0280 - acc: 0.9699 - val_loss: 0.1795 - val_acc: 0.8020\n\nEpoch 00185: val_acc did not improve from 0.84600\nEpoch 186/500\n7500/7500 [==============================] - 5s 649us/step - loss: 0.0287 - acc: 0.9684 - val_loss: 0.1861 - val_acc: 0.7980\n\nEpoch 00186: val_acc did not improve from 0.84600\nEpoch 187/500\n7500/7500 [==============================] - 5s 657us/step - loss: 0.0260 - acc: 0.9716 - val_loss: 0.1761 - val_acc: 0.8112\n\nEpoch 00187: val_acc did not improve from 0.84600\nEpoch 188/500\n7500/7500 [==============================] - 5s 662us/step - loss: 0.0296 - acc: 0.9679 - val_loss: 0.1892 - val_acc: 0.7932\n\nEpoch 00188: val_acc did not improve from 0.84600\nEpoch 189/500\n7500/7500 [==============================] - 5s 648us/step - loss: 0.0299 - acc: 0.9657 - val_loss: 0.1745 - val_acc: 0.8080\n\nEpoch 00189: val_acc did not improve from 0.84600\nEpoch 190/500\n7500/7500 [==============================] - 5s 657us/step - loss: 0.0265 - acc: 0.9704 - val_loss: 0.1807 - val_acc: 0.7968\n\nEpoch 00190: val_acc did not improve from 0.84600\nEpoch 191/500\n7500/7500 [==============================] - 5s 652us/step - loss: 0.0256 - acc: 0.9716 - val_loss: 0.1792 - val_acc: 0.8052\n\nEpoch 00191: val_acc did not improve from 0.84600\nEpoch 192/500\n7500/7500 [==============================] - 5s 646us/step - loss: 0.0263 - acc: 0.9711 - val_loss: 0.1762 - val_acc: 0.8036\n\nEpoch 00192: val_acc did not improve from 0.84600\nEpoch 193/500\n7500/7500 [==============================] - 5s 650us/step - loss: 0.0274 - acc: 0.9695 - val_loss: 0.1849 - val_acc: 0.7956\n\nEpoch 00193: val_acc did not improve from 0.84600\nEpoch 194/500\n7500/7500 [==============================] - 5s 650us/step - loss: 0.0253 - acc: 0.9729 - val_loss: 0.1807 - val_acc: 0.8040\n\nEpoch 00194: val_acc did not improve from 0.84600\nEpoch 195/500\n7500/7500 [==============================] - 5s 645us/step - loss: 0.0245 - acc: 0.9735 - val_loss: 0.1757 - val_acc: 0.8064\n\nEpoch 00195: val_acc did not improve from 0.84600\nEpoch 196/500\n7500/7500 
[==============================] - 5s 714us/step - loss: 0.0287 - acc: 0.9681 - val_loss: 0.1806 - val_acc: 0.7984\n\nEpoch 00196: val_acc did not improve from 0.84600\nEpoch 197/500\n7500/7500 [==============================] - 5s 721us/step - loss: 0.0256 - acc: 0.9721 - val_loss: 0.1830 - val_acc: 0.8000\n\nEpoch 00197: val_acc did not improve from 0.84600\nEpoch 198/500\n7500/7500 [==============================] - 5s 685us/step - loss: 0.0277 - acc: 0.9699 - val_loss: 0.1777 - val_acc: 0.8028\n\nEpoch 00198: val_acc did not improve from 0.84600\nEpoch 199/500\n7500/7500 [==============================] - 5s 663us/step - loss: 0.0248 - acc: 0.9724 - val_loss: 0.1807 - val_acc: 0.7984\n\nEpoch 00199: val_acc did not improve from 0.84600\nEpoch 200/500\n7500/7500 [==============================] - 5s 660us/step - loss: 0.0250 - acc: 0.9729 - val_loss: 0.1744 - val_acc: 0.8064\n\nEpoch 00200: val_acc did not improve from 0.84600\nEpoch 201/500\n7500/7500 [==============================] - 5s 680us/step - loss: 0.0246 - acc: 0.9729 - val_loss: 0.1795 - val_acc: 0.8028\n\nEpoch 00201: val_acc did not improve from 0.84600\nEpoch 202/500\n7500/7500 [==============================] - 5s 668us/step - loss: 0.0254 - acc: 0.9723 - val_loss: 0.1823 - val_acc: 0.8000\n\nEpoch 00202: val_acc did not improve from 0.84600\nEpoch 203/500\n7500/7500 [==============================] - 5s 647us/step - loss: 0.0240 - acc: 0.9744 - val_loss: 0.1800 - val_acc: 0.8016\n\nEpoch 00203: val_acc did not improve from 0.84600\nEpoch 204/500\n7500/7500 [==============================] - 5s 652us/step - loss: 0.0284 - acc: 0.9692 - val_loss: 0.1859 - val_acc: 0.7964\n\nEpoch 00204: val_acc did not improve from 0.84600\nEpoch 205/500\n7500/7500 [==============================] - 5s 654us/step - loss: 0.0250 - acc: 0.9731 - val_loss: 0.1736 - val_acc: 0.8088\n\nEpoch 00205: val_acc did not improve from 0.84600\nEpoch 206/500\n7500/7500 [==============================] - 5s 653us/step - loss: 0.0257 - acc: 0.9716 - val_loss: 0.1794 - val_acc: 0.8040\n\nEpoch 00206: val_acc did not improve from 0.84600\nEpoch 207/500\n7500/7500 [==============================] - 5s 650us/step - loss: 0.0252 - acc: 0.9720 - val_loss: 0.1792 - val_acc: 0.8024\n\nEpoch 00207: val_acc did not improve from 0.84600\nEpoch 208/500\n7500/7500 [==============================] - 5s 663us/step - loss: 0.0240 - acc: 0.9737 - val_loss: 0.1874 - val_acc: 0.7956\n\nEpoch 00208: val_acc did not improve from 0.84600\nEpoch 209/500\n7500/7500 [==============================] - 5s 660us/step - loss: 0.0240 - acc: 0.9740 - val_loss: 0.1852 - val_acc: 0.7956\n\nEpoch 00209: val_acc did not improve from 0.84600\nEpoch 210/500\n7500/7500 [==============================] - 5s 654us/step - loss: 0.0241 - acc: 0.9740 - val_loss: 0.1806 - val_acc: 0.8012\n\nEpoch 00210: val_acc did not improve from 0.84600\nEpoch 211/500\n7500/7500 [==============================] - 5s 648us/step - loss: 0.0238 - acc: 0.9744 - val_loss: 0.1870 - val_acc: 0.7996\n\nEpoch 00211: val_acc did not improve from 0.84600\nEpoch 212/500\n7500/7500 [==============================] - 5s 659us/step - loss: 0.0255 - acc: 0.9712 - val_loss: 0.1857 - val_acc: 0.7976\n\nEpoch 00212: val_acc did not improve from 0.84600\nEpoch 213/500\n7500/7500 [==============================] - 5s 649us/step - loss: 0.0232 - acc: 0.9752 - val_loss: 0.1817 - val_acc: 0.8004\n\nEpoch 00213: val_acc did not improve from 0.84600\nEpoch 214/500\n7500/7500 [==============================] - 5s 647us/step - loss: 
0.0246 - acc: 0.9732 - val_loss: 0.1806 - val_acc: 0.8052\n\nEpoch 00214: val_acc did not improve from 0.84600\nEpoch 215/500\n7500/7500 [==============================] - 5s 673us/step - loss: 0.0241 - acc: 0.9737 - val_loss: 0.1842 - val_acc: 0.7992\n\nEpoch 00215: val_acc did not improve from 0.84600\nEpoch 216/500\n7500/7500 [==============================] - 5s 657us/step - loss: 0.0244 - acc: 0.9743 - val_loss: 0.1927 - val_acc: 0.7912\n\nEpoch 00216: val_acc did not improve from 0.84600\nEpoch 217/500\n7500/7500 [==============================] - 5s 656us/step - loss: 0.0229 - acc: 0.9763 - val_loss: 0.1863 - val_acc: 0.7956\n\nEpoch 00217: val_acc did not improve from 0.84600\nEpoch 218/500\n7500/7500 [==============================] - 5s 646us/step - loss: 0.0245 - acc: 0.9739 - val_loss: 0.1835 - val_acc: 0.7988\n\nEpoch 00218: val_acc did not improve from 0.84600\nEpoch 219/500\n7500/7500 [==============================] - 5s 650us/step - loss: 0.0250 - acc: 0.9729 - val_loss: 0.1848 - val_acc: 0.7980\n\nEpoch 00219: val_acc did not improve from 0.84600\nEpoch 220/500\n7500/7500 [==============================] - 5s 646us/step - loss: 0.0269 - acc: 0.9697 - val_loss: 0.1806 - val_acc: 0.8020\n\nEpoch 00220: val_acc did not improve from 0.84600\nEpoch 221/500\n7500/7500 [==============================] - 5s 651us/step - loss: 0.0257 - acc: 0.9721 - val_loss: 0.1787 - val_acc: 0.8048\n\nEpoch 00221: val_acc did not improve from 0.84600\nEpoch 222/500\n7500/7500 [==============================] - 5s 684us/step - loss: 0.0225 - acc: 0.9761 - val_loss: 0.1814 - val_acc: 0.8012\n\nEpoch 00222: val_acc did not improve from 0.84600\nEpoch 223/500\n7500/7500 [==============================] - 5s 721us/step - loss: 0.0245 - acc: 0.9731 - val_loss: 0.1845 - val_acc: 0.7992\n\nEpoch 00223: val_acc did not improve from 0.84600\nEpoch 224/500\n7500/7500 [==============================] - 5s 713us/step - loss: 0.0235 - acc: 0.9745 - val_loss: 0.1778 - val_acc: 0.8096\n\nEpoch 00224: val_acc did not improve from 0.84600\nEpoch 225/500\n7500/7500 [==============================] - 5s 697us/step - loss: 0.0225 - acc: 0.9751 - val_loss: 0.1828 - val_acc: 0.8004\n\nEpoch 00225: val_acc did not improve from 0.84600\nEpoch 226/500\n7500/7500 [==============================] - 5s 653us/step - loss: 0.0262 - acc: 0.9719 - val_loss: 0.1819 - val_acc: 0.8036\n\nEpoch 00226: val_acc did not improve from 0.84600\nEpoch 227/500\n7500/7500 [==============================] - 5s 663us/step - loss: 0.0233 - acc: 0.9756 - val_loss: 0.1782 - val_acc: 0.8040\n\nEpoch 00227: val_acc did not improve from 0.84600\nEpoch 228/500\n7500/7500 [==============================] - 5s 666us/step - loss: 0.0226 - acc: 0.9761 - val_loss: 0.1845 - val_acc: 0.7936\n\nEpoch 00228: val_acc did not improve from 0.84600\nEpoch 229/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0244 - acc: 0.9736 - val_loss: 0.1829 - val_acc: 0.8012\n\nEpoch 00229: val_acc did not improve from 0.84600\nEpoch 230/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0229 - acc: 0.9761 - val_loss: 0.1805 - val_acc: 0.8000\n\nEpoch 00230: val_acc did not improve from 0.84600\nEpoch 231/500\n7500/7500 [==============================] - 5s 646us/step - loss: 0.0222 - acc: 0.9768 - val_loss: 0.1786 - val_acc: 0.8084\n\nEpoch 00231: val_acc did not improve from 0.84600\nEpoch 232/500\n7500/7500 [==============================] - 5s 660us/step - loss: 0.0222 - acc: 0.9771 - val_loss: 0.1815 - val_acc: 
0.8016\n\nEpoch 00232: val_acc did not improve from 0.84600\nEpoch 233/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0243 - acc: 0.9745 - val_loss: 0.1815 - val_acc: 0.8044\n\nEpoch 00233: val_acc did not improve from 0.84600\nEpoch 234/500\n7500/7500 [==============================] - 5s 668us/step - loss: 0.0229 - acc: 0.9745 - val_loss: 0.1953 - val_acc: 0.7868\n\nEpoch 00234: val_acc did not improve from 0.84600\nEpoch 235/500\n7500/7500 [==============================] - 5s 665us/step - loss: 0.0242 - acc: 0.9748 - val_loss: 0.1833 - val_acc: 0.8020\n\nEpoch 00235: val_acc did not improve from 0.84600\nEpoch 236/500\n7500/7500 [==============================] - 5s 665us/step - loss: 0.0220 - acc: 0.9772 - val_loss: 0.1824 - val_acc: 0.8020\n\nEpoch 00236: val_acc did not improve from 0.84600\nEpoch 237/500\n7500/7500 [==============================] - 5s 645us/step - loss: 0.0231 - acc: 0.9749 - val_loss: 0.1848 - val_acc: 0.7964\n\nEpoch 00237: val_acc did not improve from 0.84600\nEpoch 238/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.0243 - acc: 0.9735 - val_loss: 0.1848 - val_acc: 0.8020\n\nEpoch 00238: val_acc did not improve from 0.84600\nEpoch 239/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0229 - acc: 0.9756 - val_loss: 0.1871 - val_acc: 0.7936\n\nEpoch 00239: val_acc did not improve from 0.84600\nEpoch 240/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.0222 - acc: 0.9765 - val_loss: 0.1838 - val_acc: 0.7984\n\nEpoch 00240: val_acc did not improve from 0.84600\nEpoch 241/500\n7500/7500 [==============================] - 5s 646us/step - loss: 0.0227 - acc: 0.9757 - val_loss: 0.1872 - val_acc: 0.7992\n\nEpoch 00241: val_acc did not improve from 0.84600\nEpoch 242/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0227 - acc: 0.9756 - val_loss: 0.1794 - val_acc: 0.8004\n\nEpoch 00242: val_acc did not improve from 0.84600\nEpoch 243/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0213 - acc: 0.9777 - val_loss: 0.1876 - val_acc: 0.7972\n\nEpoch 00243: val_acc did not improve from 0.84600\nEpoch 244/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0215 - acc: 0.9775 - val_loss: 0.1910 - val_acc: 0.7924\n\nEpoch 00244: val_acc did not improve from 0.84600\nEpoch 245/500\n7500/7500 [==============================] - 5s 643us/step - loss: 0.0215 - acc: 0.9767 - val_loss: 0.1883 - val_acc: 0.7968\n\nEpoch 00245: val_acc did not improve from 0.84600\nEpoch 246/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0215 - acc: 0.9767 - val_loss: 0.1980 - val_acc: 0.7876\n\nEpoch 00246: val_acc did not improve from 0.84600\nEpoch 247/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0201 - acc: 0.9793 - val_loss: 0.1848 - val_acc: 0.8024\n\nEpoch 00247: val_acc did not improve from 0.84600\nEpoch 248/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0230 - acc: 0.9760 - val_loss: 0.1851 - val_acc: 0.7996\n\nEpoch 00248: val_acc did not improve from 0.84600\nEpoch 249/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.0220 - acc: 0.9771 - val_loss: 0.1867 - val_acc: 0.8004\n\nEpoch 00249: val_acc did not improve from 0.84600\nEpoch 250/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0213 - acc: 0.9776 - val_loss: 0.1871 - val_acc: 0.7956\n\nEpoch 00250: val_acc did not improve from 
0.84600\nEpoch 251/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0223 - acc: 0.9760 - val_loss: 0.1798 - val_acc: 0.8060\n\nEpoch 00251: val_acc did not improve from 0.84600\nEpoch 252/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0220 - acc: 0.9769 - val_loss: 0.1847 - val_acc: 0.7968\n\nEpoch 00252: val_acc did not improve from 0.84600\nEpoch 253/500\n7500/7500 [==============================] - 5s 642us/step - loss: 0.0218 - acc: 0.9768 - val_loss: 0.1833 - val_acc: 0.7972\n\nEpoch 00253: val_acc did not improve from 0.84600\nEpoch 254/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0212 - acc: 0.9777 - val_loss: 0.1821 - val_acc: 0.8056\n\nEpoch 00254: val_acc did not improve from 0.84600\nEpoch 255/500\n7500/7500 [==============================] - 5s 645us/step - loss: 0.0213 - acc: 0.9773 - val_loss: 0.1853 - val_acc: 0.7964\n\nEpoch 00255: val_acc did not improve from 0.84600\nEpoch 256/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0235 - acc: 0.9747 - val_loss: 0.1822 - val_acc: 0.8008\n\nEpoch 00256: val_acc did not improve from 0.84600\nEpoch 257/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0195 - acc: 0.9800 - val_loss: 0.1908 - val_acc: 0.7948\n\nEpoch 00257: val_acc did not improve from 0.84600\nEpoch 258/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.0213 - acc: 0.9783 - val_loss: 0.1841 - val_acc: 0.8016\n\nEpoch 00258: val_acc did not improve from 0.84600\nEpoch 259/500\n7500/7500 [==============================] - 5s 643us/step - loss: 0.0234 - acc: 0.9747 - val_loss: 0.1837 - val_acc: 0.8028\n\nEpoch 00259: val_acc did not improve from 0.84600\nEpoch 260/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0225 - acc: 0.9760 - val_loss: 0.1893 - val_acc: 0.7980\n\nEpoch 00260: val_acc did not improve from 0.84600\nEpoch 261/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0217 - acc: 0.9771 - val_loss: 0.1844 - val_acc: 0.8000\n\nEpoch 00261: val_acc did not improve from 0.84600\nEpoch 262/500\n7500/7500 [==============================] - 5s 642us/step - loss: 0.0226 - acc: 0.9763 - val_loss: 0.1840 - val_acc: 0.8032\n\nEpoch 00262: val_acc did not improve from 0.84600\nEpoch 263/500\n7500/7500 [==============================] - 5s 644us/step - loss: 0.0224 - acc: 0.9761 - val_loss: 0.1836 - val_acc: 0.8004\n\nEpoch 00263: val_acc did not improve from 0.84600\nEpoch 264/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.0230 - acc: 0.9753 - val_loss: 0.1819 - val_acc: 0.8040\n\nEpoch 00264: val_acc did not improve from 0.84600\nEpoch 265/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0211 - acc: 0.9785 - val_loss: 0.1891 - val_acc: 0.7984\n\nEpoch 00265: val_acc did not improve from 0.84600\nEpoch 266/500\n7500/7500 [==============================] - 5s 648us/step - loss: 0.0213 - acc: 0.9783 - val_loss: 0.1865 - val_acc: 0.8012\n\nEpoch 00266: val_acc did not improve from 0.84600\nEpoch 267/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0212 - acc: 0.9772 - val_loss: 0.1886 - val_acc: 0.7984\n\nEpoch 00267: val_acc did not improve from 0.84600\nEpoch 268/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0217 - acc: 0.9771 - val_loss: 0.1813 - val_acc: 0.7984\n\nEpoch 00268: val_acc did not improve from 0.84600\nEpoch 269/500\n7500/7500 
[==============================] - 5s 638us/step - loss: 0.0207 - acc: 0.9784 - val_loss: 0.1836 - val_acc: 0.8008\n\nEpoch 00269: val_acc did not improve from 0.84600\nEpoch 270/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0221 - acc: 0.9772 - val_loss: 0.1711 - val_acc: 0.8200\n\nEpoch 00270: val_acc did not improve from 0.84600\nEpoch 271/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0203 - acc: 0.9791 - val_loss: 0.1800 - val_acc: 0.8064\n\nEpoch 00271: val_acc did not improve from 0.84600\nEpoch 272/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0196 - acc: 0.9803 - val_loss: 0.1817 - val_acc: 0.8028\n\nEpoch 00272: val_acc did not improve from 0.84600\nEpoch 273/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0203 - acc: 0.9787 - val_loss: 0.1884 - val_acc: 0.7984\n\nEpoch 00273: val_acc did not improve from 0.84600\nEpoch 274/500\n7500/7500 [==============================] - 5s 642us/step - loss: 0.0208 - acc: 0.9784 - val_loss: 0.1779 - val_acc: 0.8084\n\nEpoch 00274: val_acc did not improve from 0.84600\nEpoch 275/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0202 - acc: 0.9787 - val_loss: 0.1840 - val_acc: 0.8084\n\nEpoch 00275: val_acc did not improve from 0.84600\nEpoch 276/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0207 - acc: 0.9779 - val_loss: 0.1832 - val_acc: 0.8036\n\nEpoch 00276: val_acc did not improve from 0.84600\nEpoch 277/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0215 - acc: 0.9768 - val_loss: 0.1786 - val_acc: 0.8092\n\nEpoch 00277: val_acc did not improve from 0.84600\nEpoch 278/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0205 - acc: 0.9787 - val_loss: 0.1822 - val_acc: 0.8040\n\nEpoch 00278: val_acc did not improve from 0.84600\nEpoch 279/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0204 - acc: 0.9791 - val_loss: 0.1827 - val_acc: 0.7992\n\nEpoch 00279: val_acc did not improve from 0.84600\nEpoch 280/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0211 - acc: 0.9779 - val_loss: 0.1806 - val_acc: 0.8044\n\nEpoch 00280: val_acc did not improve from 0.84600\nEpoch 281/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0198 - acc: 0.9795 - val_loss: 0.1790 - val_acc: 0.8100\n\nEpoch 00281: val_acc did not improve from 0.84600\nEpoch 282/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0221 - acc: 0.9773 - val_loss: 0.1820 - val_acc: 0.8016\n\nEpoch 00282: val_acc did not improve from 0.84600\nEpoch 283/500\n7500/7500 [==============================] - 5s 643us/step - loss: 0.0210 - acc: 0.9779 - val_loss: 0.1792 - val_acc: 0.8096\n\nEpoch 00283: val_acc did not improve from 0.84600\nEpoch 284/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0206 - acc: 0.9777 - val_loss: 0.1793 - val_acc: 0.8072\n\nEpoch 00284: val_acc did not improve from 0.84600\nEpoch 285/500\n7500/7500 [==============================] - 5s 642us/step - loss: 0.0204 - acc: 0.9788 - val_loss: 0.1822 - val_acc: 0.8028\n\nEpoch 00285: val_acc did not improve from 0.84600\nEpoch 286/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0219 - acc: 0.9768 - val_loss: 0.1849 - val_acc: 0.8020\n\nEpoch 00286: val_acc did not improve from 0.84600\nEpoch 287/500\n7500/7500 [==============================] - 5s 634us/step - loss: 
0.0199 - acc: 0.9791 - val_loss: 0.1804 - val_acc: 0.8092\n\nEpoch 00287: val_acc did not improve from 0.84600\nEpoch 288/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0198 - acc: 0.9793 - val_loss: 0.1922 - val_acc: 0.7960\n\nEpoch 00288: val_acc did not improve from 0.84600\nEpoch 289/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0218 - acc: 0.9771 - val_loss: 0.1848 - val_acc: 0.8008\n\nEpoch 00289: val_acc did not improve from 0.84600\nEpoch 290/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.0209 - acc: 0.9783 - val_loss: 0.1808 - val_acc: 0.8044\n\nEpoch 00290: val_acc did not improve from 0.84600\nEpoch 291/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0210 - acc: 0.9773 - val_loss: 0.1805 - val_acc: 0.8076\n\nEpoch 00291: val_acc did not improve from 0.84600\nEpoch 292/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0204 - acc: 0.9787 - val_loss: 0.1846 - val_acc: 0.8000\n\nEpoch 00292: val_acc did not improve from 0.84600\nEpoch 293/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0205 - acc: 0.9783 - val_loss: 0.1906 - val_acc: 0.7936\n\nEpoch 00293: val_acc did not improve from 0.84600\nEpoch 294/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0192 - acc: 0.9804 - val_loss: 0.1862 - val_acc: 0.8016\n\nEpoch 00294: val_acc did not improve from 0.84600\nEpoch 295/500\n7500/7500 [==============================] - 5s 645us/step - loss: 0.0202 - acc: 0.9791 - val_loss: 0.1802 - val_acc: 0.8060\n\nEpoch 00295: val_acc did not improve from 0.84600\nEpoch 296/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0207 - acc: 0.9776 - val_loss: 0.1866 - val_acc: 0.8000\n\nEpoch 00296: val_acc did not improve from 0.84600\nEpoch 297/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0202 - acc: 0.9791 - val_loss: 0.1783 - val_acc: 0.8092\n\nEpoch 00297: val_acc did not improve from 0.84600\nEpoch 298/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0189 - acc: 0.9803 - val_loss: 0.1804 - val_acc: 0.8080\n\nEpoch 00298: val_acc did not improve from 0.84600\nEpoch 299/500\n7500/7500 [==============================] - 5s 646us/step - loss: 0.0205 - acc: 0.9780 - val_loss: 0.1825 - val_acc: 0.8016\n\nEpoch 00299: val_acc did not improve from 0.84600\nEpoch 300/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0214 - acc: 0.9773 - val_loss: 0.1839 - val_acc: 0.8012\n\nEpoch 00300: val_acc did not improve from 0.84600\nEpoch 301/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0200 - acc: 0.9783 - val_loss: 0.1836 - val_acc: 0.8036\n\nEpoch 00301: val_acc did not improve from 0.84600\nEpoch 302/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0202 - acc: 0.9788 - val_loss: 0.1854 - val_acc: 0.8052\n\nEpoch 00302: val_acc did not improve from 0.84600\nEpoch 303/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0195 - acc: 0.9797 - val_loss: 0.1833 - val_acc: 0.8048\n\nEpoch 00303: val_acc did not improve from 0.84600\nEpoch 304/500\n7500/7500 [==============================] - 5s 644us/step - loss: 0.0200 - acc: 0.9791 - val_loss: 0.1858 - val_acc: 0.8004\n\nEpoch 00304: val_acc did not improve from 0.84600\nEpoch 305/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0210 - acc: 0.9779 - val_loss: 0.1840 - val_acc: 
0.8044\n\nEpoch 00305: val_acc did not improve from 0.84600\nEpoch 306/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0190 - acc: 0.9803 - val_loss: 0.1796 - val_acc: 0.8080\n\nEpoch 00306: val_acc did not improve from 0.84600\nEpoch 307/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0208 - acc: 0.9776 - val_loss: 0.1860 - val_acc: 0.7996\n\nEpoch 00307: val_acc did not improve from 0.84600\nEpoch 308/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.0183 - acc: 0.9812 - val_loss: 0.1894 - val_acc: 0.7960\n\nEpoch 00308: val_acc did not improve from 0.84600\nEpoch 309/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0199 - acc: 0.9787 - val_loss: 0.1833 - val_acc: 0.8052\n\nEpoch 00309: val_acc did not improve from 0.84600\nEpoch 310/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0197 - acc: 0.9791 - val_loss: 0.1868 - val_acc: 0.7984\n\nEpoch 00310: val_acc did not improve from 0.84600\nEpoch 311/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0189 - acc: 0.9805 - val_loss: 0.1883 - val_acc: 0.8000\n\nEpoch 00311: val_acc did not improve from 0.84600\nEpoch 312/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0204 - acc: 0.9783 - val_loss: 0.1832 - val_acc: 0.8036\n\nEpoch 00312: val_acc did not improve from 0.84600\nEpoch 313/500\n7500/7500 [==============================] - 5s 643us/step - loss: 0.0191 - acc: 0.9801 - val_loss: 0.1843 - val_acc: 0.8004\n\nEpoch 00313: val_acc did not improve from 0.84600\nEpoch 314/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0189 - acc: 0.9805 - val_loss: 0.1780 - val_acc: 0.8088\n\nEpoch 00314: val_acc did not improve from 0.84600\nEpoch 315/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0185 - acc: 0.9808 - val_loss: 0.1871 - val_acc: 0.8004\n\nEpoch 00315: val_acc did not improve from 0.84600\nEpoch 316/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0202 - acc: 0.9785 - val_loss: 0.1897 - val_acc: 0.7948\n\nEpoch 00316: val_acc did not improve from 0.84600\nEpoch 317/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0197 - acc: 0.9795 - val_loss: 0.1841 - val_acc: 0.8016\n\nEpoch 00317: val_acc did not improve from 0.84600\nEpoch 318/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0193 - acc: 0.9799 - val_loss: 0.1841 - val_acc: 0.8040\n\nEpoch 00318: val_acc did not improve from 0.84600\nEpoch 319/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0215 - acc: 0.9775 - val_loss: 0.1829 - val_acc: 0.8060\n\nEpoch 00319: val_acc did not improve from 0.84600\nEpoch 320/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.0211 - acc: 0.9781 - val_loss: 0.1874 - val_acc: 0.8016\n\nEpoch 00320: val_acc did not improve from 0.84600\nEpoch 321/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0207 - acc: 0.9784 - val_loss: 0.1862 - val_acc: 0.7984\n\nEpoch 00321: val_acc did not improve from 0.84600\nEpoch 322/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0198 - acc: 0.9789 - val_loss: 0.1906 - val_acc: 0.8000\n\nEpoch 00322: val_acc did not improve from 0.84600\nEpoch 323/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0190 - acc: 0.9796 - val_loss: 0.1890 - val_acc: 0.7992\n\nEpoch 00323: val_acc did not improve from 
0.84600\nEpoch 324/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0183 - acc: 0.9808 - val_loss: 0.1905 - val_acc: 0.7996\n\nEpoch 00324: val_acc did not improve from 0.84600\nEpoch 325/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0188 - acc: 0.9807 - val_loss: 0.1797 - val_acc: 0.8068\n\nEpoch 00325: val_acc did not improve from 0.84600\nEpoch 326/500\n7500/7500 [==============================] - 5s 642us/step - loss: 0.0190 - acc: 0.9803 - val_loss: 0.1876 - val_acc: 0.7980\n\nEpoch 00326: val_acc did not improve from 0.84600\nEpoch 327/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0192 - acc: 0.9796 - val_loss: 0.1882 - val_acc: 0.7968\n\nEpoch 00327: val_acc did not improve from 0.84600\nEpoch 328/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0193 - acc: 0.9801 - val_loss: 0.1841 - val_acc: 0.8024\n\nEpoch 00328: val_acc did not improve from 0.84600\nEpoch 329/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0182 - acc: 0.9811 - val_loss: 0.1814 - val_acc: 0.8040\n\nEpoch 00329: val_acc did not improve from 0.84600\nEpoch 330/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0185 - acc: 0.9808 - val_loss: 0.1899 - val_acc: 0.7988\n\nEpoch 00330: val_acc did not improve from 0.84600\nEpoch 331/500\n7500/7500 [==============================] - 5s 642us/step - loss: 0.0202 - acc: 0.9797 - val_loss: 0.1869 - val_acc: 0.8004\n\nEpoch 00331: val_acc did not improve from 0.84600\nEpoch 332/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0185 - acc: 0.9809 - val_loss: 0.1874 - val_acc: 0.7984\n\nEpoch 00332: val_acc did not improve from 0.84600\nEpoch 333/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0176 - acc: 0.9816 - val_loss: 0.1782 - val_acc: 0.8092\n\nEpoch 00333: val_acc did not improve from 0.84600\nEpoch 334/500\n7500/7500 [==============================] - 5s 642us/step - loss: 0.0195 - acc: 0.9793 - val_loss: 0.1795 - val_acc: 0.8100\n\nEpoch 00334: val_acc did not improve from 0.84600\nEpoch 335/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.0180 - acc: 0.9808 - val_loss: 0.1855 - val_acc: 0.8032\n\nEpoch 00335: val_acc did not improve from 0.84600\nEpoch 336/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0200 - acc: 0.9789 - val_loss: 0.1875 - val_acc: 0.8008\n\nEpoch 00336: val_acc did not improve from 0.84600\nEpoch 337/500\n7500/7500 [==============================] - 5s 643us/step - loss: 0.0197 - acc: 0.9789 - val_loss: 0.1881 - val_acc: 0.7960\n\nEpoch 00337: val_acc did not improve from 0.84600\nEpoch 338/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0181 - acc: 0.9811 - val_loss: 0.1873 - val_acc: 0.7976\n\nEpoch 00338: val_acc did not improve from 0.84600\nEpoch 339/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0180 - acc: 0.9809 - val_loss: 0.1907 - val_acc: 0.7952\n\nEpoch 00339: val_acc did not improve from 0.84600\nEpoch 340/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0196 - acc: 0.9791 - val_loss: 0.1836 - val_acc: 0.8044\n\nEpoch 00340: val_acc did not improve from 0.84600\nEpoch 341/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0186 - acc: 0.9807 - val_loss: 0.1777 - val_acc: 0.8116\n\nEpoch 00341: val_acc did not improve from 0.84600\nEpoch 342/500\n7500/7500 
[==============================] - 5s 638us/step - loss: 0.0189 - acc: 0.9801 - val_loss: 0.1892 - val_acc: 0.7936\n\nEpoch 00342: val_acc did not improve from 0.84600\nEpoch 343/500\n7500/7500 [==============================] - 5s 644us/step - loss: 0.0185 - acc: 0.9808 - val_loss: 0.1858 - val_acc: 0.8008\n\nEpoch 00343: val_acc did not improve from 0.84600\nEpoch 344/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0191 - acc: 0.9799 - val_loss: 0.1825 - val_acc: 0.8060\n\nEpoch 00344: val_acc did not improve from 0.84600\nEpoch 345/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0189 - acc: 0.9800 - val_loss: 0.1863 - val_acc: 0.8000\n\nEpoch 00345: val_acc did not improve from 0.84600\nEpoch 346/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0183 - acc: 0.9811 - val_loss: 0.1845 - val_acc: 0.8040\n\nEpoch 00346: val_acc did not improve from 0.84600\nEpoch 347/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.0185 - acc: 0.9800 - val_loss: 0.1767 - val_acc: 0.8080\n\nEpoch 00347: val_acc did not improve from 0.84600\nEpoch 348/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0176 - acc: 0.9817 - val_loss: 0.1776 - val_acc: 0.8100\n\nEpoch 00348: val_acc did not improve from 0.84600\nEpoch 349/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0188 - acc: 0.9804 - val_loss: 0.1796 - val_acc: 0.8088\n\nEpoch 00349: val_acc did not improve from 0.84600\nEpoch 350/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0179 - acc: 0.9813 - val_loss: 0.1819 - val_acc: 0.8060\n\nEpoch 00350: val_acc did not improve from 0.84600\nEpoch 351/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0174 - acc: 0.9821 - val_loss: 0.1833 - val_acc: 0.8048\n\nEpoch 00351: val_acc did not improve from 0.84600\nEpoch 352/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0167 - acc: 0.9831 - val_loss: 0.1824 - val_acc: 0.8024\n\nEpoch 00352: val_acc did not improve from 0.84600\nEpoch 353/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0183 - acc: 0.9805 - val_loss: 0.1802 - val_acc: 0.8088\n\nEpoch 00353: val_acc did not improve from 0.84600\nEpoch 354/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0172 - acc: 0.9824 - val_loss: 0.1810 - val_acc: 0.8064\n\nEpoch 00354: val_acc did not improve from 0.84600\nEpoch 355/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.0187 - acc: 0.9803 - val_loss: 0.1797 - val_acc: 0.8076\n\nEpoch 00355: val_acc did not improve from 0.84600\nEpoch 356/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0180 - acc: 0.9811 - val_loss: 0.1849 - val_acc: 0.8028\n\nEpoch 00356: val_acc did not improve from 0.84600\nEpoch 357/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0190 - acc: 0.9804 - val_loss: 0.1891 - val_acc: 0.8004\n\nEpoch 00357: val_acc did not improve from 0.84600\nEpoch 358/500\n7500/7500 [==============================] - 5s 644us/step - loss: 0.0195 - acc: 0.9792 - val_loss: 0.1853 - val_acc: 0.8032\n\nEpoch 00358: val_acc did not improve from 0.84600\nEpoch 359/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0173 - acc: 0.9819 - val_loss: 0.1864 - val_acc: 0.8016\n\nEpoch 00359: val_acc did not improve from 0.84600\nEpoch 360/500\n7500/7500 [==============================] - 5s 653us/step - loss: 
0.0180 - acc: 0.9820 - val_loss: 0.1813 - val_acc: 0.8068\n\nEpoch 00360: val_acc did not improve from 0.84600\nEpoch 361/500\n7500/7500 [==============================] - 5s 682us/step - loss: 0.0176 - acc: 0.9821 - val_loss: 0.1809 - val_acc: 0.8048\n\nEpoch 00361: val_acc did not improve from 0.84600\nEpoch 362/500\n7500/7500 [==============================] - 5s 698us/step - loss: 0.0187 - acc: 0.9803 - val_loss: 0.1894 - val_acc: 0.7984\n\nEpoch 00362: val_acc did not improve from 0.84600\nEpoch 363/500\n7500/7500 [==============================] - 5s 693us/step - loss: 0.0178 - acc: 0.9819 - val_loss: 0.1963 - val_acc: 0.7904\n\nEpoch 00363: val_acc did not improve from 0.84600\nEpoch 364/500\n7500/7500 [==============================] - 5s 644us/step - loss: 0.0167 - acc: 0.9827 - val_loss: 0.1868 - val_acc: 0.8000\n\nEpoch 00364: val_acc did not improve from 0.84600\nEpoch 365/500\n7500/7500 [==============================] - 5s 646us/step - loss: 0.0185 - acc: 0.9805 - val_loss: 0.1828 - val_acc: 0.8056\n\nEpoch 00365: val_acc did not improve from 0.84600\nEpoch 366/500\n7500/7500 [==============================] - 5s 643us/step - loss: 0.0174 - acc: 0.9820 - val_loss: 0.1905 - val_acc: 0.7916\n\nEpoch 00366: val_acc did not improve from 0.84600\nEpoch 367/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.0190 - acc: 0.9805 - val_loss: 0.1823 - val_acc: 0.8076\n\nEpoch 00367: val_acc did not improve from 0.84600\nEpoch 368/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0182 - acc: 0.9812 - val_loss: 0.1856 - val_acc: 0.8040\n\nEpoch 00368: val_acc did not improve from 0.84600\nEpoch 369/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0192 - acc: 0.9803 - val_loss: 0.1892 - val_acc: 0.7964\n\nEpoch 00369: val_acc did not improve from 0.84600\nEpoch 370/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0182 - acc: 0.9811 - val_loss: 0.1829 - val_acc: 0.8028\n\nEpoch 00370: val_acc did not improve from 0.84600\nEpoch 371/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0176 - acc: 0.9821 - val_loss: 0.1935 - val_acc: 0.7940\n\nEpoch 00371: val_acc did not improve from 0.84600\nEpoch 372/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0177 - acc: 0.9819 - val_loss: 0.1812 - val_acc: 0.8060\n\nEpoch 00372: val_acc did not improve from 0.84600\nEpoch 373/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0176 - acc: 0.9821 - val_loss: 0.1796 - val_acc: 0.8092\n\nEpoch 00373: val_acc did not improve from 0.84600\nEpoch 374/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0185 - acc: 0.9801 - val_loss: 0.1896 - val_acc: 0.7972\n\nEpoch 00374: val_acc did not improve from 0.84600\nEpoch 375/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0186 - acc: 0.9805 - val_loss: 0.1850 - val_acc: 0.8032\n\nEpoch 00375: val_acc did not improve from 0.84600\nEpoch 376/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0168 - acc: 0.9828 - val_loss: 0.1793 - val_acc: 0.8060\n\nEpoch 00376: val_acc did not improve from 0.84600\nEpoch 377/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0177 - acc: 0.9816 - val_loss: 0.1836 - val_acc: 0.8016\n\nEpoch 00377: val_acc did not improve from 0.84600\nEpoch 378/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0178 - acc: 0.9812 - val_loss: 0.1855 - val_acc: 
0.8024\n\nEpoch 00378: val_acc did not improve from 0.84600\nEpoch 379/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0173 - acc: 0.9820 - val_loss: 0.1878 - val_acc: 0.7972\n\nEpoch 00379: val_acc did not improve from 0.84600\nEpoch 380/500\n7500/7500 [==============================] - 5s 630us/step - loss: 0.0160 - acc: 0.9833 - val_loss: 0.1824 - val_acc: 0.8060\n\nEpoch 00380: val_acc did not improve from 0.84600\nEpoch 381/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0171 - acc: 0.9823 - val_loss: 0.1873 - val_acc: 0.8020\n\nEpoch 00381: val_acc did not improve from 0.84600\nEpoch 382/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0174 - acc: 0.9819 - val_loss: 0.1784 - val_acc: 0.8124\n\nEpoch 00382: val_acc did not improve from 0.84600\nEpoch 383/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0167 - acc: 0.9821 - val_loss: 0.1825 - val_acc: 0.8080\n\nEpoch 00383: val_acc did not improve from 0.84600\nEpoch 384/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0175 - acc: 0.9821 - val_loss: 0.1875 - val_acc: 0.8004\n\nEpoch 00384: val_acc did not improve from 0.84600\nEpoch 385/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0184 - acc: 0.9804 - val_loss: 0.1847 - val_acc: 0.8036\n\nEpoch 00385: val_acc did not improve from 0.84600\nEpoch 386/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0171 - acc: 0.9824 - val_loss: 0.1858 - val_acc: 0.8020\n\nEpoch 00386: val_acc did not improve from 0.84600\nEpoch 387/500\n7500/7500 [==============================] - 5s 631us/step - loss: 0.0170 - acc: 0.9819 - val_loss: 0.1772 - val_acc: 0.8108\n\nEpoch 00387: val_acc did not improve from 0.84600\nEpoch 388/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0188 - acc: 0.9804 - val_loss: 0.1814 - val_acc: 0.8076\n\nEpoch 00388: val_acc did not improve from 0.84600\nEpoch 389/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0172 - acc: 0.9821 - val_loss: 0.1850 - val_acc: 0.8016\n\nEpoch 00389: val_acc did not improve from 0.84600\nEpoch 390/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0176 - acc: 0.9812 - val_loss: 0.1817 - val_acc: 0.8068\n\nEpoch 00390: val_acc did not improve from 0.84600\nEpoch 391/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0171 - acc: 0.9823 - val_loss: 0.1837 - val_acc: 0.8028\n\nEpoch 00391: val_acc did not improve from 0.84600\nEpoch 392/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0163 - acc: 0.9833 - val_loss: 0.1839 - val_acc: 0.8032\n\nEpoch 00392: val_acc did not improve from 0.84600\nEpoch 393/500\n7500/7500 [==============================] - 5s 631us/step - loss: 0.0175 - acc: 0.9816 - val_loss: 0.1843 - val_acc: 0.8044\n\nEpoch 00393: val_acc did not improve from 0.84600\nEpoch 394/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0172 - acc: 0.9821 - val_loss: 0.1850 - val_acc: 0.8048\n\nEpoch 00394: val_acc did not improve from 0.84600\nEpoch 395/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0170 - acc: 0.9820 - val_loss: 0.1838 - val_acc: 0.8000\n\nEpoch 00395: val_acc did not improve from 0.84600\nEpoch 396/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0175 - acc: 0.9817 - val_loss: 0.1830 - val_acc: 0.8076\n\nEpoch 00396: val_acc did not improve from 
0.84600\nEpoch 397/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0159 - acc: 0.9839 - val_loss: 0.1865 - val_acc: 0.8020\n\nEpoch 00397: val_acc did not improve from 0.84600\nEpoch 398/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0176 - acc: 0.9819 - val_loss: 0.1822 - val_acc: 0.8064\n\nEpoch 00398: val_acc did not improve from 0.84600\nEpoch 399/500\n7500/7500 [==============================] - 5s 632us/step - loss: 0.0157 - acc: 0.9836 - val_loss: 0.1855 - val_acc: 0.8016\n\nEpoch 00399: val_acc did not improve from 0.84600\nEpoch 400/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0163 - acc: 0.9832 - val_loss: 0.1815 - val_acc: 0.8068\n\nEpoch 00400: val_acc did not improve from 0.84600\nEpoch 401/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0183 - acc: 0.9808 - val_loss: 0.1804 - val_acc: 0.8072\n\nEpoch 00401: val_acc did not improve from 0.84600\nEpoch 402/500\n7500/7500 [==============================] - 5s 632us/step - loss: 0.0165 - acc: 0.9831 - val_loss: 0.1947 - val_acc: 0.7932\n\nEpoch 00402: val_acc did not improve from 0.84600\nEpoch 403/500\n7500/7500 [==============================] - 5s 631us/step - loss: 0.0173 - acc: 0.9820 - val_loss: 0.1845 - val_acc: 0.8048\n\nEpoch 00403: val_acc did not improve from 0.84600\nEpoch 404/500\n7500/7500 [==============================] - 5s 631us/step - loss: 0.0172 - acc: 0.9820 - val_loss: 0.1794 - val_acc: 0.8108\n\nEpoch 00404: val_acc did not improve from 0.84600\nEpoch 405/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0171 - acc: 0.9823 - val_loss: 0.1818 - val_acc: 0.8052\n\nEpoch 00405: val_acc did not improve from 0.84600\nEpoch 406/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0181 - acc: 0.9808 - val_loss: 0.1904 - val_acc: 0.7972\n\nEpoch 00406: val_acc did not improve from 0.84600\nEpoch 407/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0148 - acc: 0.9853 - val_loss: 0.1842 - val_acc: 0.8000\n\nEpoch 00407: val_acc did not improve from 0.84600\nEpoch 408/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0169 - acc: 0.9825 - val_loss: 0.1837 - val_acc: 0.8036\n\nEpoch 00408: val_acc did not improve from 0.84600\nEpoch 409/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0164 - acc: 0.9832 - val_loss: 0.1893 - val_acc: 0.7984\n\nEpoch 00409: val_acc did not improve from 0.84600\nEpoch 410/500\n7500/7500 [==============================] - 5s 632us/step - loss: 0.0182 - acc: 0.9808 - val_loss: 0.1777 - val_acc: 0.8124\n\nEpoch 00410: val_acc did not improve from 0.84600\nEpoch 411/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0183 - acc: 0.9811 - val_loss: 0.1832 - val_acc: 0.8048\n\nEpoch 00411: val_acc did not improve from 0.84600\nEpoch 412/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0166 - acc: 0.9833 - val_loss: 0.1847 - val_acc: 0.8032\n\nEpoch 00412: val_acc did not improve from 0.84600\nEpoch 413/500\n7500/7500 [==============================] - 5s 632us/step - loss: 0.0168 - acc: 0.9824 - val_loss: 0.1864 - val_acc: 0.8008\n\nEpoch 00413: val_acc did not improve from 0.84600\nEpoch 414/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0176 - acc: 0.9815 - val_loss: 0.1954 - val_acc: 0.7880\n\nEpoch 00414: val_acc did not improve from 0.84600\nEpoch 415/500\n7500/7500 
[==============================] - 5s 634us/step - loss: 0.0171 - acc: 0.9821 - val_loss: 0.1945 - val_acc: 0.7952\n\nEpoch 00415: val_acc did not improve from 0.84600\nEpoch 416/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0175 - acc: 0.9820 - val_loss: 0.1899 - val_acc: 0.7984\n\nEpoch 00416: val_acc did not improve from 0.84600\nEpoch 417/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0162 - acc: 0.9831 - val_loss: 0.1823 - val_acc: 0.8064\n\nEpoch 00417: val_acc did not improve from 0.84600\nEpoch 418/500\n7500/7500 [==============================] - 5s 673us/step - loss: 0.0168 - acc: 0.9819 - val_loss: 0.1892 - val_acc: 0.7988\n\nEpoch 00418: val_acc did not improve from 0.84600\nEpoch 419/500\n7500/7500 [==============================] - 5s 654us/step - loss: 0.0162 - acc: 0.9832 - val_loss: 0.1879 - val_acc: 0.8028\n\nEpoch 00419: val_acc did not improve from 0.84600\nEpoch 420/500\n7500/7500 [==============================] - 5s 650us/step - loss: 0.0161 - acc: 0.9833 - val_loss: 0.1841 - val_acc: 0.8028\n\nEpoch 00420: val_acc did not improve from 0.84600\nEpoch 421/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0175 - acc: 0.9813 - val_loss: 0.1891 - val_acc: 0.7964\n\nEpoch 00421: val_acc did not improve from 0.84600\nEpoch 422/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0167 - acc: 0.9825 - val_loss: 0.1795 - val_acc: 0.8064\n\nEpoch 00422: val_acc did not improve from 0.84600\nEpoch 423/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0158 - acc: 0.9833 - val_loss: 0.1854 - val_acc: 0.8016\n\nEpoch 00423: val_acc did not improve from 0.84600\nEpoch 424/500\n7500/7500 [==============================] - 5s 644us/step - loss: 0.0156 - acc: 0.9839 - val_loss: 0.1833 - val_acc: 0.8064\n\nEpoch 00424: val_acc did not improve from 0.84600\nEpoch 425/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0159 - acc: 0.9835 - val_loss: 0.1884 - val_acc: 0.7960\n\nEpoch 00425: val_acc did not improve from 0.84600\nEpoch 426/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0163 - acc: 0.9829 - val_loss: 0.1849 - val_acc: 0.8008\n\nEpoch 00426: val_acc did not improve from 0.84600\nEpoch 427/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0158 - acc: 0.9839 - val_loss: 0.1873 - val_acc: 0.8008\n\nEpoch 00427: val_acc did not improve from 0.84600\nEpoch 428/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0171 - acc: 0.9820 - val_loss: 0.1796 - val_acc: 0.8116\n\nEpoch 00428: val_acc did not improve from 0.84600\nEpoch 429/500\n7500/7500 [==============================] - 5s 644us/step - loss: 0.0168 - acc: 0.9820 - val_loss: 0.1874 - val_acc: 0.8020\n\nEpoch 00429: val_acc did not improve from 0.84600\nEpoch 430/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0173 - acc: 0.9820 - val_loss: 0.1861 - val_acc: 0.7996\n\nEpoch 00430: val_acc did not improve from 0.84600\nEpoch 431/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0160 - acc: 0.9835 - val_loss: 0.1883 - val_acc: 0.7996\n\nEpoch 00431: val_acc did not improve from 0.84600\nEpoch 432/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.0161 - acc: 0.9833 - val_loss: 0.1916 - val_acc: 0.7960\n\nEpoch 00432: val_acc did not improve from 0.84600\nEpoch 433/500\n7500/7500 [==============================] - 5s 640us/step - loss: 
0.0166 - acc: 0.9821 - val_loss: 0.1878 - val_acc: 0.7956\n\nEpoch 00433: val_acc did not improve from 0.84600\nEpoch 434/500\n7500/7500 [==============================] - 5s 631us/step - loss: 0.0156 - acc: 0.9836 - val_loss: 0.1861 - val_acc: 0.7996\n\nEpoch 00434: val_acc did not improve from 0.84600\nEpoch 435/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0156 - acc: 0.9835 - val_loss: 0.1864 - val_acc: 0.8032\n\nEpoch 00435: val_acc did not improve from 0.84600\nEpoch 436/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0158 - acc: 0.9832 - val_loss: 0.1852 - val_acc: 0.8044\n\nEpoch 00436: val_acc did not improve from 0.84600\nEpoch 437/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0150 - acc: 0.9845 - val_loss: 0.1949 - val_acc: 0.7928\n\nEpoch 00437: val_acc did not improve from 0.84600\nEpoch 438/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0152 - acc: 0.9845 - val_loss: 0.1939 - val_acc: 0.7956\n\nEpoch 00438: val_acc did not improve from 0.84600\nEpoch 439/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0149 - acc: 0.9851 - val_loss: 0.1868 - val_acc: 0.8032\n\nEpoch 00439: val_acc did not improve from 0.84600\nEpoch 440/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0159 - acc: 0.9836 - val_loss: 0.1917 - val_acc: 0.7948\n\nEpoch 00440: val_acc did not improve from 0.84600\nEpoch 441/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0150 - acc: 0.9847 - val_loss: 0.1905 - val_acc: 0.7972\n\nEpoch 00441: val_acc did not improve from 0.84600\nEpoch 442/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0163 - acc: 0.9827 - val_loss: 0.1894 - val_acc: 0.7956\n\nEpoch 00442: val_acc did not improve from 0.84600\nEpoch 443/500\n7500/7500 [==============================] - 5s 631us/step - loss: 0.0152 - acc: 0.9845 - val_loss: 0.1906 - val_acc: 0.7960\n\nEpoch 00443: val_acc did not improve from 0.84600\nEpoch 444/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0156 - acc: 0.9839 - val_loss: 0.1898 - val_acc: 0.7980\n\nEpoch 00444: val_acc did not improve from 0.84600\nEpoch 445/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0146 - acc: 0.9849 - val_loss: 0.1866 - val_acc: 0.8012\n\nEpoch 00445: val_acc did not improve from 0.84600\nEpoch 446/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0158 - acc: 0.9837 - val_loss: 0.1841 - val_acc: 0.8036\n\nEpoch 00446: val_acc did not improve from 0.84600\nEpoch 447/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0150 - acc: 0.9843 - val_loss: 0.1844 - val_acc: 0.8020\n\nEpoch 00447: val_acc did not improve from 0.84600\nEpoch 448/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0153 - acc: 0.9844 - val_loss: 0.1901 - val_acc: 0.7992\n\nEpoch 00448: val_acc did not improve from 0.84600\nEpoch 449/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0170 - acc: 0.9820 - val_loss: 0.1875 - val_acc: 0.8016\n\nEpoch 00449: val_acc did not improve from 0.84600\nEpoch 450/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0155 - acc: 0.9835 - val_loss: 0.1931 - val_acc: 0.7940\n\nEpoch 00450: val_acc did not improve from 0.84600\nEpoch 451/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0171 - acc: 0.9820 - val_loss: 0.1829 - val_acc: 
0.8056\n\nEpoch 00451: val_acc did not improve from 0.84600\nEpoch 452/500\n7500/7500 [==============================] - 5s 632us/step - loss: 0.0154 - acc: 0.9835 - val_loss: 0.1877 - val_acc: 0.7992\n\nEpoch 00452: val_acc did not improve from 0.84600\nEpoch 453/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0157 - acc: 0.9839 - val_loss: 0.1910 - val_acc: 0.7956\n\nEpoch 00453: val_acc did not improve from 0.84600\nEpoch 454/500\n7500/7500 [==============================] - 5s 630us/step - loss: 0.0161 - acc: 0.9832 - val_loss: 0.1934 - val_acc: 0.7936\n\nEpoch 00454: val_acc did not improve from 0.84600\nEpoch 455/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0151 - acc: 0.9843 - val_loss: 0.1867 - val_acc: 0.8036\n\nEpoch 00455: val_acc did not improve from 0.84600\nEpoch 456/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0160 - acc: 0.9837 - val_loss: 0.1876 - val_acc: 0.8028\n\nEpoch 00456: val_acc did not improve from 0.84600\nEpoch 457/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0161 - acc: 0.9829 - val_loss: 0.1930 - val_acc: 0.7940\n\nEpoch 00457: val_acc did not improve from 0.84600\nEpoch 458/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0163 - acc: 0.9835 - val_loss: 0.1870 - val_acc: 0.7988\n\nEpoch 00458: val_acc did not improve from 0.84600\nEpoch 459/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0152 - acc: 0.9845 - val_loss: 0.1873 - val_acc: 0.8008\n\nEpoch 00459: val_acc did not improve from 0.84600\nEpoch 460/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0146 - acc: 0.9849 - val_loss: 0.1938 - val_acc: 0.7952\n\nEpoch 00460: val_acc did not improve from 0.84600\nEpoch 461/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0156 - acc: 0.9835 - val_loss: 0.1809 - val_acc: 0.8092\n\nEpoch 00461: val_acc did not improve from 0.84600\nEpoch 462/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0170 - acc: 0.9817 - val_loss: 0.1875 - val_acc: 0.8024\n\nEpoch 00462: val_acc did not improve from 0.84600\nEpoch 463/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0155 - acc: 0.9836 - val_loss: 0.1867 - val_acc: 0.8032\n\nEpoch 00463: val_acc did not improve from 0.84600\nEpoch 464/500\n7500/7500 [==============================] - 5s 641us/step - loss: 0.0152 - acc: 0.9843 - val_loss: 0.1955 - val_acc: 0.7932\n\nEpoch 00464: val_acc did not improve from 0.84600\nEpoch 465/500\n7500/7500 [==============================] - 5s 644us/step - loss: 0.0152 - acc: 0.9841 - val_loss: 0.1894 - val_acc: 0.8004\n\nEpoch 00465: val_acc did not improve from 0.84600\nEpoch 466/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0153 - acc: 0.9840 - val_loss: 0.1880 - val_acc: 0.8024\n\nEpoch 00466: val_acc did not improve from 0.84600\nEpoch 467/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0154 - acc: 0.9840 - val_loss: 0.1896 - val_acc: 0.8000\n\nEpoch 00467: val_acc did not improve from 0.84600\nEpoch 468/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0152 - acc: 0.9845 - val_loss: 0.1877 - val_acc: 0.8012\n\nEpoch 00468: val_acc did not improve from 0.84600\nEpoch 469/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0149 - acc: 0.9845 - val_loss: 0.1914 - val_acc: 0.7944\n\nEpoch 00469: val_acc did not improve from 
0.84600\nEpoch 470/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0144 - acc: 0.9851 - val_loss: 0.1860 - val_acc: 0.8024\n\nEpoch 00470: val_acc did not improve from 0.84600\nEpoch 471/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0153 - acc: 0.9837 - val_loss: 0.1819 - val_acc: 0.8084\n\nEpoch 00471: val_acc did not improve from 0.84600\nEpoch 472/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0161 - acc: 0.9828 - val_loss: 0.1953 - val_acc: 0.7908\n\nEpoch 00472: val_acc did not improve from 0.84600\nEpoch 473/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0143 - acc: 0.9848 - val_loss: 0.1809 - val_acc: 0.8076\n\nEpoch 00473: val_acc did not improve from 0.84600\nEpoch 474/500\n7500/7500 [==============================] - 5s 630us/step - loss: 0.0155 - acc: 0.9837 - val_loss: 0.1953 - val_acc: 0.7900\n\nEpoch 00474: val_acc did not improve from 0.84600\nEpoch 475/500\n7500/7500 [==============================] - 5s 644us/step - loss: 0.0154 - acc: 0.9833 - val_loss: 0.1857 - val_acc: 0.7976\n\nEpoch 00475: val_acc did not improve from 0.84600\nEpoch 476/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0156 - acc: 0.9832 - val_loss: 0.1836 - val_acc: 0.8056\n\nEpoch 00476: val_acc did not improve from 0.84600\nEpoch 477/500\n7500/7500 [==============================] - 5s 635us/step - loss: 0.0146 - acc: 0.9848 - val_loss: 0.1815 - val_acc: 0.8096\n\nEpoch 00477: val_acc did not improve from 0.84600\nEpoch 478/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0149 - acc: 0.9848 - val_loss: 0.1846 - val_acc: 0.8028\n\nEpoch 00478: val_acc did not improve from 0.84600\nEpoch 479/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0148 - acc: 0.9841 - val_loss: 0.1866 - val_acc: 0.8024\n\nEpoch 00479: val_acc did not improve from 0.84600\nEpoch 480/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0148 - acc: 0.9845 - val_loss: 0.1878 - val_acc: 0.8000\n\nEpoch 00480: val_acc did not improve from 0.84600\nEpoch 481/500\n7500/7500 [==============================] - 5s 639us/step - loss: 0.0138 - acc: 0.9856 - val_loss: 0.1859 - val_acc: 0.8052\n\nEpoch 00481: val_acc did not improve from 0.84600\nEpoch 482/500\n7500/7500 [==============================] - 5s 640us/step - loss: 0.0143 - acc: 0.9849 - val_loss: 0.1797 - val_acc: 0.8080\n\nEpoch 00482: val_acc did not improve from 0.84600\nEpoch 483/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0160 - acc: 0.9829 - val_loss: 0.1861 - val_acc: 0.8052\n\nEpoch 00483: val_acc did not improve from 0.84600\nEpoch 484/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0144 - acc: 0.9844 - val_loss: 0.1836 - val_acc: 0.8052\n\nEpoch 00484: val_acc did not improve from 0.84600\nEpoch 485/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0138 - acc: 0.9859 - val_loss: 0.1843 - val_acc: 0.8048\n\nEpoch 00485: val_acc did not improve from 0.84600\nEpoch 486/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0137 - acc: 0.9860 - val_loss: 0.1924 - val_acc: 0.7964\n\nEpoch 00486: val_acc did not improve from 0.84600\nEpoch 487/500\n7500/7500 [==============================] - 5s 632us/step - loss: 0.0150 - acc: 0.9843 - val_loss: 0.1864 - val_acc: 0.7992\n\nEpoch 00487: val_acc did not improve from 0.84600\nEpoch 488/500\n7500/7500 
[==============================] - 5s 631us/step - loss: 0.0150 - acc: 0.9841 - val_loss: 0.1907 - val_acc: 0.8004\n\nEpoch 00488: val_acc did not improve from 0.84600\nEpoch 489/500\n7500/7500 [==============================] - 5s 634us/step - loss: 0.0146 - acc: 0.9845 - val_loss: 0.1877 - val_acc: 0.8020\n\nEpoch 00489: val_acc did not improve from 0.84600\nEpoch 490/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0150 - acc: 0.9844 - val_loss: 0.1876 - val_acc: 0.8032\n\nEpoch 00490: val_acc did not improve from 0.84600\nEpoch 491/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0155 - acc: 0.9839 - val_loss: 0.1893 - val_acc: 0.7984\n\nEpoch 00491: val_acc did not improve from 0.84600\nEpoch 492/500\n7500/7500 [==============================] - 5s 637us/step - loss: 0.0149 - acc: 0.9847 - val_loss: 0.1752 - val_acc: 0.8148\n\nEpoch 00492: val_acc did not improve from 0.84600\nEpoch 493/500\n7500/7500 [==============================] - 5s 638us/step - loss: 0.0154 - acc: 0.9835 - val_loss: 0.1774 - val_acc: 0.8128\n\nEpoch 00493: val_acc did not improve from 0.84600\nEpoch 494/500\n7500/7500 [==============================] - 5s 633us/step - loss: 0.0145 - acc: 0.9851 - val_loss: 0.1856 - val_acc: 0.8020\n\nEpoch 00494: val_acc did not improve from 0.84600\nEpoch 495/500\n7500/7500 [==============================] - 5s 632us/step - loss: 0.0148 - acc: 0.9848 - val_loss: 0.1789 - val_acc: 0.8096\n\nEpoch 00495: val_acc did not improve from 0.84600\nEpoch 496/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0134 - acc: 0.9860 - val_loss: 0.1879 - val_acc: 0.8024\n\nEpoch 00496: val_acc did not improve from 0.84600\nEpoch 497/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0146 - acc: 0.9848 - val_loss: 0.1767 - val_acc: 0.8144\n\nEpoch 00497: val_acc did not improve from 0.84600\nEpoch 498/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0136 - acc: 0.9859 - val_loss: 0.1873 - val_acc: 0.8036\n\nEpoch 00498: val_acc did not improve from 0.84600\nEpoch 499/500\n7500/7500 [==============================] - 5s 636us/step - loss: 0.0140 - acc: 0.9859 - val_loss: 0.1845 - val_acc: 0.8076\n\nEpoch 00499: val_acc did not improve from 0.84600\nEpoch 500/500\n7500/7500 [==============================] - 5s 632us/step - loss: 0.0136 - acc: 0.9861 - val_loss: 0.1843 - val_acc: 0.8044\n\nEpoch 00500: val_acc did not improve from 0.84600\nacc: 84.60%\n" ] ], [ [ "## Checking predictions on a small sample of native data", "_____no_output_____" ] ], [ [ "input_seqs = ROOT_DIR + 'expressyeaself/models/lstm/native_sample.txt'\nmodel_to_use = 'lstm_sequential_2d'", "_____no_output_____" ], [ "lstm_result = construct.get_predictions_for_input_file(input_seqs, model_to_use, sort_df=True, write_to_file=False)", "_____no_output_____" ], [ "lstm_result.to_csv('lstm_result')", "_____no_output_____" ], [ "lstm_result", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
d0228891397648a1c2fc6ba056b6a5cb9e3c47a9
19,847
ipynb
Jupyter Notebook
Woodgreen_Data_Science_&_Python_Nov_2021_Week_3.ipynb
tjido/woodgreen
24e6a999e096c3c520aec5d10e8628401c3a848a
[ "MIT" ]
null
null
null
Woodgreen_Data_Science_&_Python_Nov_2021_Week_3.ipynb
tjido/woodgreen
24e6a999e096c3c520aec5d10e8628401c3a848a
[ "MIT" ]
null
null
null
Woodgreen_Data_Science_&_Python_Nov_2021_Week_3.ipynb
tjido/woodgreen
24e6a999e096c3c520aec5d10e8628401c3a848a
[ "MIT" ]
null
null
null
27.262363
263
0.479367
[ [ [ "<a href=\"https://colab.research.google.com/github/tjido/woodgreen/blob/master/Woodgreen_Data_Science_%26_Python_Nov_2021_Week_3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "<h1>Welcome to the Woodgreen Data Science & Python Program by Fireside Analytics</h1>\n\n\n<h4>Data science is the process of ethically acquiring, engineering, analyzing, visualizaing and ultimately, creating value with data.\n\n<p>In this tutorial, participants will be introduced to the Python programming language in this Python cloud environment called Google Colab.</p> </h4>\n<p>For more information about this tutorial or other tutorials by Fireside Analytics, contact: info@firesideanalytics.com</p>\n\n<h3><strong>Table of contents</h3>\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n <ol>\n <li>How does a computer work?</li>\n <Li>What is \"data\"?</li>\n <li>An introduction to Python</li>\n </ol>\n</div>\n<br>\n<hr>", "_____no_output_____" ], [ "**Let's get started! Firstly, this page you are reading is not regular website, it is an interactive computer programming environment called a Colab notebook that lets you write and execute code in Python.**", "_____no_output_____" ], [ "# 1. How does a computer work?", "_____no_output_____" ], [ "## A computer is a device that takes INPUTS, does some PROCESSES and results in OUTPUTS\n\nEXAMPLES OF INPUTS\n1. Keyboard\n2. Mouse\n3. Touch screen\n\nPROCESSES\n1. CPU - Central Processing Unit\n2. Data storage\n3. Converts inputs from words and numbers to 1s and 0s\n4. Computes 1s and 0s\n5. Produces outputs and information\n\nOUTPUTS\n1. Screen - words, numbers, pictures or sounds\n2. Printer\n3. Speaker\n", "_____no_output_____" ], [ "# 2. What is \"data\"?", "_____no_output_____" ], [ "## A computer is a device that takes INPUTS, does some PROCESSES and results in OUTPUTS\n1. Computers use many on and off switches to work\n2. The 'on' switch is represented by a '1' and the 'off' switch is \n3. A BIT is a one or a zero, and a BYTE is a combination of 8 ones and zeros e.g., 1100 0010\n4. Combinations of Ones and Zeros in a computer, represent whole words and numbers, symbols and even pictures in the real world\n5. Information stored in ones and zeros, in bits and bytes, is data!\n\n* The letter a = 0110 0001\n* The letter b = 0110 0010\n* The letter A = 0100 0001\n* The letter B = 0100 0010\n* The symbol @ = 1000 0000\n\n## This conversion is done with the ASCII Code, American Standard Code Information Interchange", "_____no_output_____" ], [ "*Computer programming is the process of giving a computer instructions in human readable language so a computer will know what to do in computer language.*", "_____no_output_____" ], [ "# 3. An introduction to Python", "_____no_output_____" ], [ "### Let's get to know Python. The following code is an example of a Python Progam. Run the code by clicking on the 'play' button and you will see the result of your program beneath the code.", "_____no_output_____" ] ], [ [ "## Your first computer progam can be to say hello!\nprint (\"Hello, World\")", "_____no_output_____" ], [ "# We will need to learn some syntax! Syntax are the words used in a Python program\n# the '#' sign tells Python to ignore a line. 
We use it for notes that we want humans to read\n# print() is a function built into the core of Python\n# For more sophisticed operations we'll load libraries which come with additional functions that we can use\n# Famous ones are numpy, pandas, matplotlib, seaborn, and scikitlearn\n\n# Now, let's write some programs!\n\n# Edit the line below to add your first name between the \"\"\n## Here we assign the letters between \"\" to an object called \"my_name\" - it is now stored and you can call it later\n## Like saving a number in your phone versus just typing it in and calling it", "_____no_output_____" ], [ "my_name = \"\"\n", "_____no_output_____" ], [ "# Let's see what we've created\nmy_name", "_____no_output_____" ], [ "greeting = \"Hello, world, my name is \"\n", "_____no_output_____" ], [ "# Let's look at it\ngreeting", "_____no_output_____" ], [ "# The = sign is what we call an 'assignment operator' and it assigns things", "_____no_output_____" ], [ "# See how we use the '+' sign", "_____no_output_____" ], [ " print(greeting + my_name)", "_____no_output_____" ], [ "# Asking for input, using simple function and printing it\n\ndef say_hello():\n username = input(\"What is your name?\\n\")\n print(\"Hello \" + username)\n", "_____no_output_____" ], [ "# Lets call the function\nsay_hello()", "_____no_output_____" ], [ "# Creating an 'If else' conditional block inside the function. Here we are validating the response entered.\n# If the person simply hits \"Enter\" without entering any value in the field, \n# then the if statement prints \"You can't introduce yourself if you don't add your name!\"\n# the == operator is used to test if something is equal to something else\n\ndef say_hello():\n username = input(\"What is your name?\\n\")\n if username == \"\":\n print(\"You can't introduce yourself if you don't add your name!\")\n else:\n print(\"Hello \" + username)\n", "_____no_output_____" ], [ "# While calling the function, try leaving the field blank\nsay_hello()", "_____no_output_____" ], [ "# Dealing with a blank\ndef say_hello(name):\n if name == \"\":\n print(\"You can't introduce yourself if you don't add your name!\")\n else:\n print(greeting + name)", "_____no_output_____" ], [ "# Click the \"play\" button to execute this code.\nsay_hello(my_name)", "_____no_output_____" ], [ "# In programming there are often many ways to do things, for example\nprint(\"Hello world, my name is \" + my_name + \".\")", "_____no_output_____" ] ], [ [ "# **We can do simple calculations in Python**", "_____no_output_____" ] ], [ [ "5 + 5", "_____no_output_____" ], [ "# Some actions already programmed in:\n\nx = 5\nprint(x + 7)", "_____no_output_____" ], [ "# What happens when we say \"X=5\"\n\n# x 'points' at the number 5\nx = 5\nprint(\"Initial x is:\", x)\n\n# y now 'points' at 'x' which 'points' at 5, so then y points at 5\ny = x\nprint(\"Initial y is:\", y)\n\nx = 6\n# What happens when we now change what x is?\n\nprint(\"Current x is:\", x)\nprint(\"Current y is:\", y)", "_____no_output_____" ] ], [ [ "------------------------------------------------------------------------", "_____no_output_____" ], [ "**We can do complex calculations in Python** - Remember we said Netflix users stream 404,444 hours of movies every minute? Let's calculate how many days that is!", "_____no_output_____" ] ], [ [ "## In Python we create objects \n## Converting from 404444 hours to days, we divide by___________?\n\ndays_watching_netflix = 404444/24", "_____no_output_____" ] ], [ [ "How can we do a survey in Python? 
We type 'input' to let Python know to wait for a user response. Once you type in the name, Python will remember it!\nPress 'enter' after your input. ", "_____no_output_____" ] ], [ [ "response_1 = input(\"Response 1: What is your name?\")", "_____no_output_____" ], [ "## We can now look at the response\nresponse_1", "_____no_output_____" ], [ "response_2 = input(\"Response 2: What is your name?\")\nresponse_3 = input(\"Response 3: What is your name?\")\nresponse_4 = input(\"Response 4: What is your name?\")\nresponse_5 = input(\"Response 5: What is your name?\")", "_____no_output_____" ] ], [ [ "Let's look at response_5", "_____no_output_____" ] ], [ [ "print(response_1, \n response_2, \n response_3, \n response_4,\n response_5)", "_____no_output_____" ] ], [ [ "We can also add the names one at a time by typing them.", "_____no_output_____" ] ], [ [ "## Let's create an object for the 5 names from question 1\nsurvey_names = [response_1, response_2, response_3, response_4, response_5]", "_____no_output_____" ], [ "## Let's look at the object we've just created!\nsurvey_names", "_____no_output_____" ], [ "print(survey_names)", "_____no_output_____" ] ], [ [ "# Let's make a simple bar chart in Python", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n\nx = ['A', 'B', 'C', 'D', 'E']\ny = [22, 9, 40, 27, 55]\n\nplt.bar(x, y, color = 'red')\n\nplt.title('Simple Bar Chart')\nplt.xlabel('Width Names')\nplt.ylabel('Height Values')\n\nplt.show()", "_____no_output_____" ], [ "# Replot the same chart and change the color of the bars", "_____no_output_____" ] ], [ [ "## Here's a sample chart with some survey responses. ", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nfrom pandas import Series, DataFrame\nimport matplotlib.pyplot as plt\n\ndata = [3,2]\nlabels = ['yes', 'no']\nplt.xticks(range(len(data)), labels)\nplt.xlabel('Responses')\nplt.ylabel('Number of People')\nplt.title('Shingai - Woodgreen Data Science & Python Program: Survey Results for Questions 2: \"Do you know how a computer works?\"')\nplt.bar(range(len(data)), data, color = 'blue') \nplt.show()\n", "_____no_output_____" ] ], [ [ "# Practice what you have learned\n", "_____no_output_____" ], [ "* Enter the results for Question 2 of your survey data and produce a chart\n* Add your name to your chart heading\n* Change the labels and headings of your charts", "_____no_output_____" ], [ "#Conclusion\n\n1. Computer programming is a set of instructions we give a computer. \n2. Computers must process the instructions in 'binary', in ones and zeros.\n3. Anything 'digital' is data.\n\n\n\n\n\n", "_____no_output_____" ], [ "# Contact Information", "_____no_output_____" ], [ "Congratulations, you have completed a tutorial in the Python Programming language!\n\n\n\nFireside Analytics Inc. | \nInstructor: Shingai Manjengwa (Twitter: @tjido) |\nWoodgreen Community Services Summer Camp |\nContact: info@firesideanalytics.com or [www.firesideanalytics.com](www.firesideanalytics.com)\n\nNever stop learning!\n\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
d0228e0ab13d5f1668dc3354744ca667c417e75c
409,821
ipynb
Jupyter Notebook
Modulo 4 - Analisi per regioni/regioni/Sardegna/SARDEGNA - SARIMA mensile.ipynb
SofiaBlack/Towards-a-software-to-measure-the-impact-of-the-COVID-19-outbreak-on-Italian-deaths
c418eba90dc07f58633e7e4cd2719c46f0a6b202
[ "Unlicense" ]
3
2021-04-02T21:54:52.000Z
2021-04-13T14:24:29.000Z
Modulo 4 - Analisi per regioni/regioni/Sardegna/SARDEGNA - SARIMA mensile.ipynb
SofiaBlack/COVID-19_deaths_analysis
c418eba90dc07f58633e7e4cd2719c46f0a6b202
[ "Unlicense" ]
null
null
null
Modulo 4 - Analisi per regioni/regioni/Sardegna/SARDEGNA - SARIMA mensile.ipynb
SofiaBlack/COVID-19_deaths_analysis
c418eba90dc07f58633e7e4cd2719c46f0a6b202
[ "Unlicense" ]
null
null
null
401.391773
70,264
0.933827
[ [ [ "<h1>CREAZIONE MODELLO SARIMA REGIONE SARDEGNA", "_____no_output_____" ] ], [ [ "import pandas as pd\ndf = pd.read_csv('../../csv/regioni/sardegna.csv')\ndf.head()", "_____no_output_____" ], [ "df['DATA'] = pd.to_datetime(df['DATA'])", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 69 entries, 0 to 68\nData columns (total 2 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 DATA 69 non-null datetime64[ns]\n 1 TOTALE 69 non-null int64 \ndtypes: datetime64[ns](1), int64(1)\nmemory usage: 1.2 KB\n" ], [ "df=df.set_index('DATA')\ndf.head()", "_____no_output_____" ] ], [ [ "<h3>Creazione serie storica dei decessi totali della regione Sardegna", "_____no_output_____" ] ], [ [ "ts = df.TOTALE\nts.head()", "_____no_output_____" ], [ "from datetime import datetime\nfrom datetime import timedelta\nstart_date = datetime(2015,1,1)\nend_date = datetime(2020,9,30)\nlim_ts = ts[start_date:end_date]\n\n#visulizzo il grafico\nimport matplotlib.pyplot as plt\nplt.figure(figsize=(12,6))\nplt.title('Decessi mensili regione Sardegna dal 2015 a settembre 2020', size=20)\nplt.plot(lim_ts)\nfor year in range(start_date.year,end_date.year+1):\n plt.axvline(pd.to_datetime(str(year)+'-01-01'), color='k', linestyle='--', alpha=0.5)", "_____no_output_____" ] ], [ [ "<h3>Decomposizione", "_____no_output_____" ] ], [ [ "from statsmodels.tsa.seasonal import seasonal_decompose\n\ndecomposition = seasonal_decompose(ts, period=12, two_sided=True, extrapolate_trend=1, model='multiplicative')\n\nts_trend = decomposition.trend #andamento della curva\nts_seasonal = decomposition.seasonal #stagionalità \nts_residual = decomposition.resid #parti rimanenti\nplt.subplot(411)\nplt.plot(ts,label='original')\nplt.legend(loc='best')\nplt.subplot(412)\nplt.plot(ts_trend,label='trend')\nplt.legend(loc='best')\nplt.subplot(413)\nplt.plot(ts_seasonal,label='seasonality')\nplt.legend(loc='best')\nplt.subplot(414)\nplt.plot(ts_residual,label='residual')\nplt.legend(loc='best')\nplt.tight_layout()", "_____no_output_____" ] ], [ [ "<h3>Test di stazionarietà", "_____no_output_____" ] ], [ [ "from statsmodels.tsa.stattools import adfuller\ndef test_stationarity(timeseries):\n \n dftest = adfuller(timeseries, autolag='AIC')\n dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])\n for key,value in dftest[4].items():\n dfoutput['Critical Value (%s)'%key] = value\n \n critical_value = dftest[4]['5%']\n test_statistic = dftest[0]\n alpha = 1e-3\n pvalue = dftest[1]\n if pvalue < alpha and test_statistic < critical_value: # null hypothesis: x is non stationary\n print(\"X is stationary\")\n return True\n else:\n print(\"X is not stationary\")\n return False\n ", "_____no_output_____" ], [ "test_stationarity(ts)", "X is not stationary\n" ] ], [ [ "<h3>Suddivisione in Train e Test", "_____no_output_____" ], [ "<b>Train</b>: da gennaio 2015 a ottobre 2019; <br />\n<b>Test</b>: da ottobre 2019 a dicembre 2019.", "_____no_output_____" ] ], [ [ "from datetime import datetime\ntrain_end = datetime(2019,10,31)\ntest_end = datetime (2019,12,31)\ncovid_end = datetime(2020,9,30)\n", "_____no_output_____" ], [ "from dateutil.relativedelta import *\ntsb = ts[:test_end]\ndecomposition = seasonal_decompose(tsb, period=12, two_sided=True, extrapolate_trend=1, model='multiplicative')\n\ntsb_trend = decomposition.trend #andamento della curva\ntsb_seasonal = decomposition.seasonal #stagionalità \ntsb_residual = decomposition.resid 
#parti rimanenti\n\n\ntsb_diff = pd.Series(tsb_trend)\nd = 0\nwhile test_stationarity(tsb_diff) is False:\n tsb_diff = tsb_diff.diff().dropna()\n d = d + 1\nprint(d)\n\n#TEST: dal 01-01-2015 al 31-10-2019\ntrain = tsb[:train_end]\n\n#TRAIN: dal 01-11-2019 al 31-12-2019\ntest = tsb[train_end + relativedelta(months=+1): test_end]", "X is not stationary\nX is not stationary\nX is not stationary\nX is stationary\n3\n" ] ], [ [ "<h3>Grafici di Autocorrelazione e Autocorrelazione Parziale", "_____no_output_____" ] ], [ [ "from statsmodels.graphics.tsaplots import plot_acf, plot_pacf\nplot_acf(ts, lags =12)\nplot_pacf(ts, lags =12)\nplt.show()", "_____no_output_____" ] ], [ [ "<h2>Creazione del modello SARIMA sul Train", "_____no_output_____" ] ], [ [ "from statsmodels.tsa.statespace.sarimax import SARIMAX\n\nmodel = SARIMAX(train, order=(6,1,8))\nmodel_fit = model.fit()\nprint(model_fit.summary())", "c:\\users\\monta\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\statsmodels\\tsa\\base\\tsa_model.py:524: ValueWarning: No frequency information was provided, so inferred frequency M will be used.\n warnings.warn('No frequency information was'\nc:\\users\\monta\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\statsmodels\\tsa\\base\\tsa_model.py:524: ValueWarning: No frequency information was provided, so inferred frequency M will be used.\n warnings.warn('No frequency information was'\nc:\\users\\monta\\appdata\\local\\programs\\python\\python38\\lib\\site-packages\\statsmodels\\tsa\\statespace\\sarimax.py:977: UserWarning: Non-invertible starting MA parameters found. Using zeros as starting parameters.\n warn('Non-invertible starting MA parameters found.'\n" ] ], [ [ "<h4>Verifica della stazionarietà dei residui del modello ottenuto", "_____no_output_____" ] ], [ [ "residuals = model_fit.resid\ntest_stationarity(residuals)", "X is stationary\n" ], [ "plt.figure(figsize=(12,6))\nplt.title('Confronto valori previsti dal modello con valori reali del Train', size=20)\nplt.plot (train.iloc[1:], color='red', label='train values')\nplt.plot (model_fit.fittedvalues.iloc[1:], color = 'blue', label='model values')\n\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "conf = model_fit.conf_int()\n\nplt.figure(figsize=(12,6))\nplt.title('Intervalli di confidenza del modello', size=20)\nplt.plot(conf)\nplt.xticks(rotation=45)\nplt.show()", "_____no_output_____" ] ], [ [ "<h3>Predizione del modello sul Test", "_____no_output_____" ] ], [ [ "#inizio e fine predizione\npred_start = test.index[0]\npred_end = test.index[-1]\n\n#pred_start= len(train)\n#pred_end = len(tsb)\n#predizione del modello sul test\npredictions_test= model_fit.predict(start=pred_start, end=pred_end)\n\nplt.plot(test, color='red', label='actual')\nplt.plot(predictions_test, label='prediction' )\nplt.xticks(rotation=45)\nplt.legend()\nplt.show()\n\nprint(predictions_test)", "_____no_output_____" ], [ "# Accuracy metrics\nimport numpy as np\ndef forecast_accuracy(forecast, actual):\n mape = np.mean(np.abs(forecast - actual)/np.abs(actual)) # MAPE: errore percentuale medio assoluto\n me = np.mean(forecast - actual) # ME: errore medio\n mae = np.mean(np.abs(forecast - actual)) # MAE: errore assoluto medio\n mpe = np.mean((forecast - actual)/actual) # MPE: errore percentuale medio\n rmse = np.mean((forecast - actual)**2)**.5 # RMSE\n corr = np.corrcoef(forecast, actual)[0,1] # corr: correlazione tra effettivo e previsione\n mins = np.amin(np.hstack([forecast[:,None], \n actual[:,None]]), axis=1)\n maxs = 
np.amax(np.hstack([forecast[:,None], \n actual[:,None]]), axis=1)\n minmax = 1 - np.mean(mins/maxs) # minmax: errore min-max\n return({'mape':mape, 'me':me, 'mae': mae, \n 'mpe': mpe, 'rmse':rmse, \n 'corr':corr, 'minmax':minmax})\n\nforecast_accuracy(predictions_test, test)", "_____no_output_____" ], [ "import numpy as np\nfrom statsmodels.tools.eval_measures import rmse\nnrmse = rmse(predictions_test, test)/(np.max(test)-np.min(test))\nprint('NRMSE: %f'% nrmse)", "NRMSE: 0.028047\n" ] ], [ [ "<h2>Predizione del modello compreso l'anno 2020", "_____no_output_____" ] ], [ [ "#inizio e fine predizione\nstart_prediction = ts.index[0]\nend_prediction = ts.index[-1]\n\npredictions_tot = model_fit.predict(start=start_prediction, end=end_prediction)\n\nplt.figure(figsize=(12,6))\nplt.title('Previsione modello su dati osservati - dal 2015 al 30 settembre 2020', size=20)\nplt.plot(ts, color='blue', label='actual')\nplt.plot(predictions_tot.iloc[1:], color='red', label='predict')\nplt.xticks(rotation=45)\nplt.legend(prop={'size': 12})\nplt.show()", "_____no_output_____" ], [ "diff_predictions_tot = (ts - predictions_tot)\nplt.figure(figsize=(12,6))\nplt.title('Differenza tra i valori osservati e i valori stimati del modello', size=20)\nplt.plot(diff_predictions_tot)\nplt.show()", "_____no_output_____" ], [ "diff_predictions_tot['24-02-2020':].sum()", "_____no_output_____" ], [ "predictions_tot.to_csv('../../csv/pred/predictions_SARIMA_sardegna.csv')", "_____no_output_____" ] ], [ [ "<h2>Intervalli di confidenza della previsione totale", "_____no_output_____" ] ], [ [ "forecast = model_fit.get_prediction(start=start_prediction, end=end_prediction)\nin_c = forecast.conf_int()\nprint(forecast.predicted_mean)\nprint(in_c)\nprint(forecast.predicted_mean - in_c['lower TOTALE'])", "2015-01-31 0.000000\n2015-02-28 1670.968978\n2015-03-31 1666.038647\n2015-04-30 1595.285394\n2015-05-31 1431.489871\n ... \n2020-05-31 1385.942263\n2020-06-30 1376.573224\n2020-07-31 1217.516475\n2020-08-31 1291.448154\n2020-09-30 1230.251797\nFreq: M, Name: predicted_mean, Length: 69, dtype: float64\n lower TOTALE upper TOTALE\n2015-01-31 -1989.324384 1989.324384\n2015-02-28 1330.657701 2011.280254\n2015-03-31 1332.701862 1999.375432\n2015-04-30 1263.504264 1927.066524\n2015-05-31 1100.103872 1762.875871\n... ... ...\n2020-05-31 1114.556803 1657.327722\n2020-06-30 1105.166766 1647.979683\n2020-07-31 945.721045 1489.311905\n2020-08-31 1019.579189 1563.317119\n2020-09-30 958.663709 1501.839884\n\n[69 rows x 2 columns]\n2015-01-31 1989.324384\n2015-02-28 340.311276\n2015-03-31 333.336785\n2015-04-30 331.781130\n2015-05-31 331.385999\n ... \n2020-05-31 271.385459\n2020-06-30 271.406458\n2020-07-31 271.795430\n2020-08-31 271.868965\n2020-09-30 271.588087\nFreq: M, Length: 69, dtype: float64\n" ], [ "plt.plot(in_c)\nplt.show()", "_____no_output_____" ], [ "upper = in_c['upper TOTALE']\nlower = in_c['lower TOTALE']", "_____no_output_____" ], [ "lower.to_csv('../../csv/lower/predictions_SARIMA_sardegna_lower.csv')\nupper.to_csv('../../csv/upper/predictions_SARIMA_sardegna_upper.csv')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
d022a4d0df6edb6965f07dcfa466eb7849bb928d
52,652
ipynb
Jupyter Notebook
#01. Data Tables & Basic Concepts of Programming/Untitled.ipynb
gabisintope/machine-learning-program
f693371937e19a5d2d6b9e26bfc8063f6724c970
[ "MIT" ]
null
null
null
#01. Data Tables & Basic Concepts of Programming/Untitled.ipynb
gabisintope/machine-learning-program
f693371937e19a5d2d6b9e26bfc8063f6724c970
[ "MIT" ]
null
null
null
#01. Data Tables & Basic Concepts of Programming/Untitled.ipynb
gabisintope/machine-learning-program
f693371937e19a5d2d6b9e26bfc8063f6724c970
[ "MIT" ]
null
null
null
36.81958
102
0.349218
[ [ [ "# Preparation", "_____no_output_____" ] ], [ [ "import pandas as pd", "_____no_output_____" ], [ "df_mortality = pd.read_excel(io='MortalityDataWHR2021C2.xlsx')", "_____no_output_____" ], [ "df_happiness = pd.read_excel(io='DataForFigure2.1WHR2021C2.xls')", "_____no_output_____" ], [ "df_regions = df_happiness[['Country name', 'Regional indicator']]", "_____no_output_____" ], [ "df = df_regions.merge(df_mortality)", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "# Islands", "_____no_output_____" ], [ "## Number of Islands ", "_____no_output_____" ] ], [ [ "df.Island.sum()", "_____no_output_____" ] ], [ [ "## Which region had more Islands?", "_____no_output_____" ] ], [ [ "df.groupby('Regional indicator').Island.sum()", "_____no_output_____" ] ], [ [ "## Show all Columns for these Islands", "_____no_output_____" ] ], [ [ "mask_region = df['Regional indicator'] == 'Western Europe'", "_____no_output_____" ], [ "mask_island = df['Island'] == 1", "_____no_output_____" ], [ "df_europe_islands = df[mask_region & mask_island]", "_____no_output_____" ], [ "df_europe_islands", "_____no_output_____" ] ], [ [ "## Mean Age of across All Islands?", "_____no_output_____" ] ], [ [ "df_europe_islands['Median age'].mean()", "_____no_output_____" ] ], [ [ "# Female Heads of State", "_____no_output_____" ], [ "## Number of Countries with Female Heads of State ", "_____no_output_____" ] ], [ [ "df['Female head of government'].sum()", "_____no_output_____" ] ], [ [ "## Which region had more Female Heads of State?", "_____no_output_____" ] ], [ [ "df.groupby('Regional indicator')['Female head of government'].sum().sort_values(ascending=False)", "_____no_output_____" ] ], [ [ "## Show all Columns for these Countries", "_____no_output_____" ] ], [ [ "mask_region = df['Regional indicator'] == 'Western Europe'", "_____no_output_____" ], [ "mask_female = df['Female head of government'] == 1", "_____no_output_____" ], [ "df_europe_femaleheads = df[mask_region & mask_female]", "_____no_output_____" ], [ "df_europe_femaleheads", "_____no_output_____" ] ], [ [ "## Mean Age of across All Countries?", "_____no_output_____" ] ], [ [ "df_europe_femaleheads['Median age'].mean()", "_____no_output_____" ] ], [ [ "# Pivot Tables", "_____no_output_____" ] ], [ [ "df_panel = pd.read_excel(io='DataPanelWHR2021C2.xls')", "_____no_output_____" ], [ "df = df_panel.merge(df_regions)", "_____no_output_____" ], [ "df.pivot_table(index='Regional indicator', columns='year', values='Log GDP per capita')", "_____no_output_____" ] ], [ [ "## Which Region had a higher ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
d022aa6f23c8161910edc62d1e2c3d014738626b
6,066
ipynb
Jupyter Notebook
kernel/Mermaid/_sc.ipynb
nufeng1999/Myjupyter-kernel
7862ce8afae139d39ad2896f3e36a19b5df9923e
[ "MIT" ]
null
null
null
kernel/Mermaid/_sc.ipynb
nufeng1999/Myjupyter-kernel
7862ce8afae139d39ad2896f3e36a19b5df9923e
[ "MIT" ]
null
null
null
kernel/Mermaid/_sc.ipynb
nufeng1999/Myjupyter-kernel
7862ce8afae139d39ad2896f3e36a19b5df9923e
[ "MIT" ]
null
null
null
36.542169
137
0.523244
[ [ [ "##%overwritefile\n##%file:src/compile_out_file.py\n##%noruncode\n def getCompout_filename(self,cflags,outfileflag,defoutfile):\n outfile=''\n binary_filename=defoutfile\n index=0\n for s in cflags:\n if s.startswith(outfileflag):\n if(len(s)>len(outfileflag)):\n outfile=s[len(outfileflag):]\n del cflags[index]\n else:\n outfile=cflags[cflags.index(outfileflag)+1]\n if outfile.startswith('-'):\n outfile=binary_filename\n del cflags[cflags.index(outfileflag)+1]\n del cflags[cflags.index(outfileflag)]\n binary_filename=outfile\n index+=1\n return binary_filename\n", "[MyPythonKernel135905] Info:file h:\\Jupyter\\Myjupyter-kernel\\kernel\\Mermaid\\src/compile_out_file.py created successfully\n" ], [ "##%overwritefile\n##%file:src/compile_with_sc.py\n##%noruncode\n def compile_with_sc(self, source_filename, binary_filename, cflags=None, ldflags=None,env=None,magics=None):\n outfile=binary_filename\n orig_cflags=cflags\n orig_ldflags=ldflags\n ccmd=[]\n clargs=[]\n crargs=[]\n outfileflag=[]\n oft=''\n if len(self.kernel_info['compiler']['outfileflag'])>0:\n oft=self.kernel_info['compiler']['outfileflag']\n outfileflag=[oft]\n binary_filename=self.getCompout_filename(cflags,oft,outfile)\n args=[]\n if magics!=None and len(self.mymagics.addkey2dict(magics,'ccompiler'))>0:\n ## use code line ccompiler lable\n args = magics['ccompiler'] + orig_cflags +[source_filename] + orig_ldflags\n else:\n ## use kernel default compiler -> kernel_info['compiler']['cmd']\n if len(self.kernel_info['compiler']['cmd'])>0:\n ccmd+=[self.kernel_info['compiler']['cmd']]\n if len(self.kernel_info['compiler']['clargs'])>0:\n clargs+=self.kernel_info['compiler']['clargs']\n if len(self.kernel_info['compiler']['crargs'])>0:\n crargs+=self.kernel_info['compiler']['crargs']\n\n args = ccmd+cflags+[source_filename] +clargs+outfileflag+ [binary_filename]+crargs+ ldflags\n # self._log(''.join((' '+ str(s) for s in args))+\"\\n\")\n return self.mymagics.create_jupyter_subprocess(args,env=env,magics=magics),binary_filename,args\n", "[MyPythonKernel135905] Info:file h:\\Jupyter\\Myjupyter-kernel\\kernel\\Mermaid\\src/compile_with_sc.py created successfully\n" ], [ "##%overwritefile\n##%file:src/c_exec_sc_.py\n##%noruncode\n def _exec_sc_(self,source_filename,magics):\n self.mymagics._logln('Generating executable file')\n with self.mymagics.new_temp_file(suffix=self.kernel_info['execsuffix']) as binary_file:\n magics['status']='compiling'\n p,outfile,tsccmd = self.compile_with_sc(\n source_filename, \n binary_file.name,\n self.mymagics.get_magicsSvalue(magics,'cflags'),\n self.mymagics.get_magicsSvalue(magics,'ldflags'),\n self.mymagics.get_magicsbykey(magics,'env'),\n magics=magics)\n returncode=p.wait_end(magics)\n p.write_contents()\n magics['status']=''\n binary_file.name=os.path.join(os.path.abspath(''),outfile)\n if returncode != 0: \n ## Compilation failed\n self.mymagics._logln(' '.join((str(s) for s in tsccmd))+\"\\n\",3)\n self.mymagics._logln(\"compiler exited with code {}, the executable will not be executed\".format(returncode),3)\n ## delete source files before exit\n ## os.remove(source_filename)\n os.remove(binary_file.name)\n return p.returncode,binary_file.name\n", "[MyPythonKernel135905] Info:file h:\\Jupyter\\Myjupyter-kernel\\kernel\\Mermaid\\src/c_exec_sc_.py created successfully\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
d022b1d5a676fb9e8eb1553bd52e7492f26f4d34
4,550
ipynb
Jupyter Notebook
03_Grouping/Occupation/Exercise.ipynb
mtzupan/pandas_exercises
3527cda51234e126ba5600ab9596e4bd4cca5d63
[ "BSD-3-Clause" ]
null
null
null
03_Grouping/Occupation/Exercise.ipynb
mtzupan/pandas_exercises
3527cda51234e126ba5600ab9596e4bd4cca5d63
[ "BSD-3-Clause" ]
null
null
null
03_Grouping/Occupation/Exercise.ipynb
mtzupan/pandas_exercises
3527cda51234e126ba5600ab9596e4bd4cca5d63
[ "BSD-3-Clause" ]
null
null
null
21.980676
129
0.541978
[ [ [ "# Occupation", "_____no_output_____" ], [ "### Introduction:\n\nSpecial thanks to: https://github.com/justmarkham for sharing the dataset and materials.\n\n### Step 1. Import the necessary libraries", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np", "_____no_output_____" ] ], [ [ "### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user). ", "_____no_output_____" ], [ "### Step 3. Assign it to a variable called users.", "_____no_output_____" ] ], [ [ "url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/u.user'\n\nusers = pd.read_csv(url, sep='\\|')\nusers", "_____no_output_____" ] ], [ [ "### Step 4. Discover what is the mean age per occupation", "_____no_output_____" ] ], [ [ "users.groupby(['occupation'])['age'].mean()", "_____no_output_____" ] ], [ [ "### Step 5. Discover the Male ratio per occupation and sort it from the most to the least", "_____no_output_____" ] ], [ [ "if 'is_male' not in users:\n users['is_male'] = users['gender'].apply(lambda x: x == 'M')\nusers", "_____no_output_____" ], [ "male_employees = users.loc[users['gender'] == 'M'].groupby(['occupation']).size().astype('float')\n# print(\"male employees:\", male_employees)\nfemale_employees = users.loc[users['gender'] == 'F'].groupby(['occupation']).size().astype('float')\n# print(type(female_employees[0]))\n# print(\"female employees:\", female_employees)\n\nm_f_ratio_occupations = male_employees.divide(female_employees, fill_value=0)\nm_f_ratio_occupations.sort_values(ascending=False)", "_____no_output_____" ] ], [ [ "### Step 6. For each occupation, calculate the minimum and maximum ages", "_____no_output_____" ] ], [ [ "users.groupby(['occupation'])['age'].min()", "_____no_output_____" ], [ "users.groupby(['occupation'])['age'].max()", "_____no_output_____" ] ], [ [ "### Step 7. For each combination of occupation and gender, calculate the mean age", "_____no_output_____" ] ], [ [ "users.loc[users['gender'] == 'M'].groupby(['occupation'])['age'].mean()", "_____no_output_____" ], [ "users.loc[users['gender']=='F'].groupby(['occupation'])['age'].mean()", "_____no_output_____" ] ], [ [ "### Step 8. For each occupation present the percentage of women and men", "_____no_output_____" ] ], [ [ "percent_male = np.abs((male_employees - female_employees))/male_employees\npercent_male", "_____no_output_____" ], [ "percent_female = 1 - percent_male\npercent_female", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
d022b988aa144e4da0b63bf101c582d993f61e0a
32,020
ipynb
Jupyter Notebook
Assignment 6/sentiment_svm/sentiment-svm.ipynb
ksopan/Edx_Machine_Learning_DSE220x
88841bbe7e400f05eeee25e52f2780082ec99f74
[ "MIT" ]
null
null
null
Assignment 6/sentiment_svm/sentiment-svm.ipynb
ksopan/Edx_Machine_Learning_DSE220x
88841bbe7e400f05eeee25e52f2780082ec99f74
[ "MIT" ]
null
null
null
Assignment 6/sentiment_svm/sentiment-svm.ipynb
ksopan/Edx_Machine_Learning_DSE220x
88841bbe7e400f05eeee25e52f2780082ec99f74
[ "MIT" ]
null
null
null
80.05
18,036
0.798345
[ [ [ "# Sentiment analysis with support vector machines\n\nIn this notebook, we will revisit a learning task that we encountered earlier in the course: predicting the *sentiment* (positive or negative) of a single sentence taken from a review of a movie, restaurant, or product. The data set consists of 3000 labeled sentences, which we divide into a training set of size 2500 and a test set of size 500. Previously we found a logistic regression classifier. Today we will use a support vector machine.\n\nBefore starting on this notebook, make sure the folder `sentiment_labelled_sentences` (containing the data file `full_set.txt`) is in the same directory. Recall that the data can be downloaded from https://archive.ics.uci.edu/ml/datasets/Sentiment+Labelled+Sentences. ", "_____no_output_____" ], [ "## 1. Loading and preprocessing the data\n \nHere we follow exactly the same steps as we did earlier.", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport string\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nmatplotlib.rc('xtick', labelsize=14) \nmatplotlib.rc('ytick', labelsize=14)", "_____no_output_____" ], [ "from sklearn.feature_extraction.text import CountVectorizer\n\n## Read in the data set.\nwith open(\"sentiment_labelled_sentences/full_set.txt\") as f:\n content = f.readlines()\n \n## Remove leading and trailing white space\ncontent = [x.strip() for x in content]\n\n## Separate the sentences from the labels\nsentences = [x.split(\"\\t\")[0] for x in content]\nlabels = [x.split(\"\\t\")[1] for x in content]\n\n## Transform the labels from '0 v.s. 1' to '-1 v.s. 1'\ny = np.array(labels, dtype='int8')\ny = 2*y - 1\n\n## full_remove takes a string x and a list of characters removal_list \n## returns x with all the characters in removal_list replaced by ' '\ndef full_remove(x, removal_list):\n for w in removal_list:\n x = x.replace(w, ' ')\n return x\n\n## Remove digits\ndigits = [str(x) for x in range(10)]\ndigit_less = [full_remove(x, digits) for x in sentences]\n\n## Remove punctuation\npunc_less = [full_remove(x, list(string.punctuation)) for x in digit_less]\n\n## Make everything lower-case\nsents_lower = [x.lower() for x in punc_less]\n\n## Define our stop words\nstop_set = set(['the', 'a', 'an', 'i', 'he', 'she', 'they', 'to', 'of', 'it', 'from'])\n\n## Remove stop words\nsents_split = [x.split() for x in sents_lower]\nsents_processed = [\" \".join(list(filter(lambda a: a not in stop_set, x))) for x in sents_split]\n\n## Transform to bag of words representation.\nvectorizer = CountVectorizer(analyzer = \"word\", tokenizer = None, preprocessor = None, stop_words = None, max_features = 4500)\ndata_features = vectorizer.fit_transform(sents_processed)\n\n## Append '1' to the end of each vector.\ndata_mat = data_features.toarray()\n\n## Split the data into testing and training sets\nnp.random.seed(0)\ntest_inds = np.append(np.random.choice((np.where(y==-1))[0], 250, replace=False), np.random.choice((np.where(y==1))[0], 250, replace=False))\ntrain_inds = list(set(range(len(labels))) - set(test_inds))\n\ntrain_data = data_mat[train_inds,]\ntrain_labels = y[train_inds]\n\ntest_data = data_mat[test_inds,]\ntest_labels = y[test_inds]\n\nprint(\"train data: \", train_data.shape)\nprint(\"test data: \", test_data.shape)", "train data: (2500, 4500)\ntest data: (500, 4500)\n" ] ], [ [ "## 2. 
Fitting a support vector machine to the data\n\nIn support vector machines, we are given a set of examples $(x_1, y_1), \\ldots, (x_n, y_n)$ and we want to find a weight vector $w \\in \\mathbb{R}^d$ that solves the following optimization problem:\n\n$$ \\min_{w \\in \\mathbb{R}^d} \\| w \\|^2 + C \\sum_{i=1}^n \\xi_i $$\n$$ \\text{subject to } y_i \\langle w, x_i \\rangle \\geq 1 - \\xi_i \\text{ for all } i=1,\\ldots, n$$\n\n`scikit-learn` provides an SVM solver that we will use. The following routine takes as input the constant `C` (from the above optimization problem) and returns the training and test error of the resulting SVM model. It is invoked as follows:\n\n* `training_error, test_error = fit_classifier(C)`\n\nThe default value for parameter `C` is 1.0.", "_____no_output_____" ] ], [ [ "from sklearn import svm\ndef fit_classifier(C_value=1.0):\n clf = svm.LinearSVC(C=C_value, loss='hinge')\n clf.fit(train_data,train_labels)\n ## Get predictions on training data\n train_preds = clf.predict(train_data)\n train_error = float(np.sum((train_preds > 0.0) != (train_labels > 0.0)))/len(train_labels)\n ## Get predictions on test data\n test_preds = clf.predict(test_data)\n test_error = float(np.sum((test_preds > 0.0) != (test_labels > 0.0)))/len(test_labels)\n ##\n return train_error, test_error", "_____no_output_____" ], [ "cvals = [0.01,0.1,1.0,10.0,100.0,1000.0,10000.0]\nfor c in cvals:\n train_error, test_error = fit_classifier(c)\n print (\"Error rate for C = %0.2f: train %0.3f test %0.3f\" % (c, train_error, test_error))", "Error rate for C = 0.01: train 0.215 test 0.250\nError rate for C = 0.10: train 0.074 test 0.174\nError rate for C = 1.00: train 0.011 test 0.152\nError rate for C = 10.00: train 0.002 test 0.188\nError rate for C = 100.00: train 0.002 test 0.198\nError rate for C = 1000.00: train 0.003 test 0.212\nError rate for C = 10000.00: train 0.001 test 0.208\n" ] ], [ [ "## 3. Evaluating C by k-fold cross-validation\n\nAs we can see, the choice of `C` has a very significant effect on the performance of the SVM classifier. We were able to assess this because we have a separate test set. In general, however, this is a luxury we won't possess. How can we choose `C` based only on the training set?\n\nA reasonable way to estimate the error associated with a specific value of `C` is by **`k-fold cross validation`**:\n* Partition the training set `S` into `k` equal-sized sized subsets `S_1, S_2, ..., S_k`.\n* For `i=1,2,...,k`, train a classifier with parameter `C` on `S - S_i` (all the training data except `S_i`) and test it on `S_i` to get error estimate `e_i`.\n* Average the errors: `(e_1 + ... + e_k)/k`\n\nThe following procedure, **cross_validation_error**, does exactly this. It takes as input:\n* the training set `x,y`\n* the value of `C` to be evaluated\n* the integer `k`\n\nand it returns the estimated error of the classifier for that particular setting of `C`. 
<font color=\"magenta\">Look over the code carefully to understand exactly what it is doing.</font>", "_____no_output_____" ] ], [ [ "def cross_validation_error(x,y,C_value,k):\n n = len(y)\n ## Randomly shuffle indices\n indices = np.random.permutation(n)\n \n ## Initialize error\n err = 0.0\n \n ## Iterate over partitions\n for i in range(k):\n ## Partition indices\n test_indices = indices[int(i*(n/k)):int((i+1)*(n/k) - 1)]\n train_indices = np.setdiff1d(indices, test_indices)\n \n ## Train classifier with parameter c\n clf = svm.LinearSVC(C=C_value, loss='hinge')\n clf.fit(x[train_indices], y[train_indices])\n \n ## Get predictions on test partition\n preds = clf.predict(x[test_indices])\n \n ## Compute error\n err += float(np.sum((preds > 0.0) != (y[test_indices] > 0.0)))/len(test_indices)\n \n return err/k", "_____no_output_____" ] ], [ [ "## 4. Picking a value of C", "_____no_output_____" ], [ "The procedure **cross_validation_error** (above) evaluates a single candidate value of `C`. We need to use it repeatedly to identify a good `C`. \n\n<font color=\"magenta\">**For you to do:**</font> Write a function to choose `C`. It will be invoked as follows:\n\n* `c, err = choose_parameter(x,y,k)`\n\nwhere\n* `x,y` is the training data\n* `k` is the number of folds of cross-validation\n* `c` is chosen value of the parameter `C`\n* `err` is the cross-validation error estimate at `c`\n\n<font color=\"magenta\">Note:</font> This is a tricky business because a priori, even the order of magnitude of `C` is unknown. Should it be 0.0001 or 10000? You might want to think about trying multiple values that are arranged in a geometric progression (such as powers of ten). *In addition to returning a specific value of `C`, your function should **plot** the cross-validation errors for all the values of `C` it tried out (possibly using a log-scale for the `C`-axis).*", "_____no_output_____" ] ], [ [ "def choose_parameter(x,y,k):\n C = [0.0001,0.001,0.01,0.1,1,10,100,1000,10000]\n err=[]\n for c in C:\n err.append(cross_validation_error(x,y,c,k))\n err_min,cc=min(list(zip(err,C))) #C value for minimum error\n plt.plot(np.log(C),err)\n plt.xlabel(\"Log(C)\")\n plt.ylabel(\"Corresponding error\")\n return cc,err_min", "_____no_output_____" ] ], [ [ "Now let's try out your routine!", "_____no_output_____" ] ], [ [ "c, err = choose_parameter(train_data, train_labels, 10)\nprint(\"Choice of C: \", c)\nprint(\"Cross-validation error estimate: \", err)\n## Train it and test it\nclf = svm.LinearSVC(C=c, loss='hinge')\nclf.fit(train_data, train_labels)\npreds = clf.predict(test_data)\nerror = float(np.sum((preds > 0.0) != (test_labels > 0.0)))/len(test_labels)\nprint(\"Test error: \", error)", "Choice of C: 1\nCross-validation error estimate: 0.18554216867469878\nTest error: 0.152\n" ] ], [ [ "<font color=\"magenta\">**For you to ponder:**</font> How does the plot of cross-validation errors for different `C` look? Is there clearly a trough in which the returned value of `C` falls? Does the plot provide some reassurance that the choice is reasonable?", "_____no_output_____" ], [ "U-shaped. Yes. Yes. ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
d022c798fc4624392513cda9d6daa1ca7f820243
68,240
ipynb
Jupyter Notebook
probability/probability-course/notebooks/[Clase9]Distribucion_normal.ipynb
Elkinmt19/data-science-dojo
9e3d7ca8774474e1ad74138c7215ca3acdabf07c
[ "MIT" ]
1
2022-01-14T03:16:23.000Z
2022-01-14T03:16:23.000Z
probability/probability-course/notebooks/[Clase9]Distribucion_normal.ipynb
Elkinmt19/data-engineer-dojo
15857ba5b72681e15c4b170f5a2505513e6d43ec
[ "MIT" ]
null
null
null
probability/probability-course/notebooks/[Clase9]Distribucion_normal.ipynb
Elkinmt19/data-engineer-dojo
15857ba5b72681e15c4b170f5a2505513e6d43ec
[ "MIT" ]
null
null
null
209.969231
16,034
0.903019
[ [ [ "import pandas as pd\nimport numpy as np \nimport matplotlib.pyplot as plt\nfrom scipy.stats import norm ", "_____no_output_____" ] ], [ [ "## Distribución normal teórica\n\n\n$$P(X) = \\frac{1}{\\sigma \\sqrt{2 \\pi}} \\exp{\\left[-\\frac{1}{2}\\left(\\frac{X-\\mu}{\\sigma} \\right)^2 \\right]}$$\n\n* $\\mu$: media de la distribución\n* $\\sigma$: desviación estándar de la distribución", "_____no_output_____" ] ], [ [ "# definimos nuestra distribución gaussiana\ndef gaussian(x, mu, sigma):\n return 1/(sigma*np.sqrt(2*np.pi))*np.exp(-0.5*pow((x-mu)/sigma,2))", "_____no_output_____" ], [ "x = np.arange(-4,4,0.1)\ny = gaussian(x, 0.0, 1.0)\n\nplt.plot(x, y)", "_____no_output_____" ], [ "# usando scipy\ndist = norm(0, 1)\nx = np.arange(-4,4,0.1)\ny = [dist.pdf(value) for value in x]\nplt.plot(x, y)", "_____no_output_____" ], [ "# calculando la distribución acumulada\ndist = norm(0, 1)\nx = np.arange(-4,4,0.1)\ny = [dist.cdf(value) for value in x]\nplt.plot(x, y)", "_____no_output_____" ] ], [ [ "## Distribución normal (gausiana) a partir de los datos\n\n* *El archivo excel* lo puedes descargar en esta página: https://seattlecentral.edu/qelp/sets/057/057.html", "_____no_output_____" ] ], [ [ "df = pd.read_excel('s057.xls')\narr = df['Normally Distributed Housefly Wing Lengths'].values[4:]\nvalues, dist = np.unique(arr, return_counts=True)\nprint(values)\nplt.bar(values, dist)", "[37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55]\n" ], [ "# estimación de la distribución de probabilidad\nmu = arr.mean()\n\n#distribución teórica\nsigma = arr.std()\ndist = norm(mu, sigma)\nx = np.arange(30,60,0.1)\ny = [dist.pdf(value) for value in x]\nplt.plot(x, y)\n\n# datos\nvalues, dist = np.unique(arr, return_counts=True)\nplt.bar(values, dist/len(arr))", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
d022ec3565eccfd85c5d926e06eab6f9dfe6ab6e
27,237
ipynb
Jupyter Notebook
Equipped_AI_Test.ipynb
VAD3R-95/Hackathons_and_Interviews
54b8f770e3af7012eea44f0c905d30cdc2a8fcb2
[ "MIT" ]
null
null
null
Equipped_AI_Test.ipynb
VAD3R-95/Hackathons_and_Interviews
54b8f770e3af7012eea44f0c905d30cdc2a8fcb2
[ "MIT" ]
null
null
null
Equipped_AI_Test.ipynb
VAD3R-95/Hackathons_and_Interviews
54b8f770e3af7012eea44f0c905d30cdc2a8fcb2
[ "MIT" ]
null
null
null
28.640379
173
0.371737
[ [ [ "import pandas as pd\nimport warnings\nwarnings.filterwarnings('ignore')\nimport numpy as np\nfrom datetime import timedelta\nfrom functools import reduce", "_____no_output_____" ], [ "df_1 = pd.read_excel('Table1.xlsx')\ndf_2 = pd.read_excel('Table2.xlsx')", "_____no_output_____" ], [ "df_2.head()", "_____no_output_____" ], [ "df_1.head()", "_____no_output_____" ] ], [ [ "## ANS -1 ", "_____no_output_____" ] ], [ [ "df_1['diff_in_days'] = df_1['Cut Off Date'] - df_1['Borrower DOB (MM/DD/YYYY)'] \ndf_1['diff_in_years'] = df_1[\"diff_in_days\"] / timedelta(days=365) \navg_borrower_age = df_1.groupby('Product Group')['diff_in_years'].mean()\navg_borrower_age", "_____no_output_____" ], [ "df_1['orig_year'] = df_1['Origination Date'].dt.year\norigination_year = df_1.groupby('Product Group').agg({'orig_year':min})\norigination_year", "_____no_output_____" ], [ "total_accounts = df_1.groupby('Product Group').size().reset_index()\ntotal_accounts.rename(columns={0:'Total Accounts'},inplace = True)\ntotal_accounts", "_____no_output_____" ], [ "df_3 = pd.merge(df_1,df_2,on='LoanID',how='inner')\ntotal_balances = df_3.groupby('Product Group').agg({'Origination Balance':sum,'Outstanding Balance':sum})\ntotal_balances", "_____no_output_____" ], [ "innsured_loans = df_1.groupby('Product Group')['Insurance'].apply(lambda x: (x=='Y').sum()).reset_index(name='Insured Loans')\ninnsured_loans", "_____no_output_____" ], [ "max_maturity_date = df_1.groupby('Product Group').agg({'Loan MaturityDate':max})\ndf_4 = pd.merge(max_maturity_date,df_1,on=['Product Group','Loan MaturityDate'],how='inner')\nloan_id_maturity = df_4.drop_duplicates(subset = ['Product Group', 'Loan MaturityDate'], keep = 'first').reset_index(drop = True)\nloanID_max_maturity = loan_id_maturity[['Product Group','LoanID']]\nloanID_max_maturity", "_____no_output_____" ] ], [ [ "## ANS -2 ", "_____no_output_____" ] ], [ [ "df_test = [origination_year,innsured_loans,loanID_max_maturity,total_balances,total_accounts]\ndf_ans_2 = reduce(lambda left,right: pd.merge(left,right,on=['Product Group'],how='inner'), df_test)\ndf_ans_2", "_____no_output_____" ] ], [ [ "## ANS -3 ", "_____no_output_____" ] ], [ [ "max_originating_balance = df_1.groupby('Product Group').agg({'Origination Balance':max})\ndf_merged = pd.merge(max_originating_balance,df_1,on=['Product Group','Origination Balance'],how='inner')\nloan_id_originating_balance = df_merged.drop_duplicates(subset = ['Product Group', 'Origination Balance'], keep = 'first').reset_index(drop = True)\nloanID_max_originating_balance = loan_id_originating_balance[['Product Group','LoanID']]\nloanID_max_originating_balance", "_____no_output_____" ] ], [ [ "## ANS -4", "_____no_output_____" ] ], [ [ "df_ques3 = pd.merge(df_1,df_2,on='LoanID',how='inner')\ndf_ans_3 = df_ques3.groupby(['Product Group']).apply(lambda x: x['Outstanding Balance'].sum()/x['Origination Balance'].sum()).reset_index(name='Balance Ammortized')\ndf_ans_3", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d022f1ea09394a71240b040ba27b4bd897f45876
232,515
ipynb
Jupyter Notebook
experiments/tl_1v2/cores-oracle.run1.framed/trials/14/trial.ipynb
stevester94/csc500-notebooks
4c1b04c537fe233a75bed82913d9d84985a89177
[ "MIT" ]
null
null
null
experiments/tl_1v2/cores-oracle.run1.framed/trials/14/trial.ipynb
stevester94/csc500-notebooks
4c1b04c537fe233a75bed82913d9d84985a89177
[ "MIT" ]
null
null
null
experiments/tl_1v2/cores-oracle.run1.framed/trials/14/trial.ipynb
stevester94/csc500-notebooks
4c1b04c537fe233a75bed82913d9d84985a89177
[ "MIT" ]
null
null
null
98.816405
71,996
0.76254
[ [ [ "# Transfer Learning Template", "_____no_output_____" ] ], [ [ "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n\n \nimport os, json, sys, time, random\nimport numpy as np\nimport torch\nfrom torch.optim import Adam\nfrom easydict import EasyDict\nimport matplotlib.pyplot as plt\n\nfrom steves_models.steves_ptn import Steves_Prototypical_Network\n\nfrom steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper\nfrom steves_utils.iterable_aggregator import Iterable_Aggregator\nfrom steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig\nfrom steves_utils.torch_sequential_builder import build_sequential\nfrom steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader\nfrom steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)\nfrom steves_utils.PTN.utils import independent_accuracy_assesment\n\nfrom torch.utils.data import DataLoader\n\nfrom steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory\n\nfrom steves_utils.ptn_do_report import (\n get_loss_curve,\n get_results_table,\n get_parameters_table,\n get_domain_accuracies,\n)\n\nfrom steves_utils.transforms import get_chained_transform", "_____no_output_____" ] ], [ [ "# Allowed Parameters\nThese are allowed parameters, not defaults\nEach of these values need to be present in the injected parameters (the notebook will raise an exception if they are not present)\n\nPapermill uses the cell tag \"parameters\" to inject the real parameters below this cell.\nEnable tags to see what I mean", "_____no_output_____" ] ], [ [ "required_parameters = {\n \"experiment_name\",\n \"lr\",\n \"device\",\n \"seed\",\n \"dataset_seed\",\n \"n_shot\",\n \"n_query\",\n \"n_way\",\n \"train_k_factor\",\n \"val_k_factor\",\n \"test_k_factor\",\n \"n_epoch\",\n \"patience\",\n \"criteria_for_best\",\n \"x_net\",\n \"datasets\",\n \"torch_default_dtype\",\n \"NUM_LOGS_PER_EPOCH\",\n \"BEST_MODEL_PATH\",\n \"x_shape\",\n}", "_____no_output_____" ], [ "from steves_utils.CORES.utils import (\n ALL_NODES,\n ALL_NODES_MINIMUM_1000_EXAMPLES,\n ALL_DAYS\n)\n\nfrom steves_utils.ORACLE.utils_v2 import (\n ALL_DISTANCES_FEET_NARROWED,\n ALL_RUNS,\n ALL_SERIAL_NUMBERS,\n)\n\nstandalone_parameters = {}\nstandalone_parameters[\"experiment_name\"] = \"STANDALONE PTN\"\nstandalone_parameters[\"lr\"] = 0.001\nstandalone_parameters[\"device\"] = \"cuda\"\n\nstandalone_parameters[\"seed\"] = 1337\nstandalone_parameters[\"dataset_seed\"] = 1337\n\nstandalone_parameters[\"n_way\"] = 8\nstandalone_parameters[\"n_shot\"] = 3\nstandalone_parameters[\"n_query\"] = 2\nstandalone_parameters[\"train_k_factor\"] = 1\nstandalone_parameters[\"val_k_factor\"] = 2\nstandalone_parameters[\"test_k_factor\"] = 2\n\n\nstandalone_parameters[\"n_epoch\"] = 50\n\nstandalone_parameters[\"patience\"] = 10\nstandalone_parameters[\"criteria_for_best\"] = \"source_loss\"\n\nstandalone_parameters[\"datasets\"] = [\n {\n \"labels\": ALL_SERIAL_NUMBERS,\n \"domains\": ALL_DISTANCES_FEET_NARROWED,\n \"num_examples_per_domain_per_label\": 100,\n \"pickle_path\": os.path.join(get_datasets_base_path(), \"oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl\"),\n \"source_or_target_dataset\": \"source\",\n \"x_transforms\": [\"unit_mag\", \"minus_two\"],\n \"episode_transforms\": [],\n \"domain_prefix\": \"ORACLE_\"\n },\n {\n \"labels\": ALL_NODES,\n \"domains\": ALL_DAYS,\n \"num_examples_per_domain_per_label\": 100,\n \"pickle_path\": os.path.join(get_datasets_base_path(), 
\"cores.stratified_ds.2022A.pkl\"),\n \"source_or_target_dataset\": \"target\",\n \"x_transforms\": [\"unit_power\", \"times_zero\"],\n \"episode_transforms\": [],\n \"domain_prefix\": \"CORES_\"\n } \n]\n\nstandalone_parameters[\"torch_default_dtype\"] = \"torch.float32\" \n\n\n\nstandalone_parameters[\"x_net\"] = [\n {\"class\": \"nnReshape\", \"kargs\": {\"shape\":[-1, 1, 2, 256]}},\n {\"class\": \"Conv2d\", \"kargs\": { \"in_channels\":1, \"out_channels\":256, \"kernel_size\":(1,7), \"bias\":False, \"padding\":(0,3), },},\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\":256}},\n\n {\"class\": \"Conv2d\", \"kargs\": { \"in_channels\":256, \"out_channels\":80, \"kernel_size\":(2,7), \"bias\":True, \"padding\":(0,3), },},\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\":80}},\n {\"class\": \"Flatten\", \"kargs\": {}},\n\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 80*256, \"out_features\": 256}}, # 80 units per IQ pair\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm1d\", \"kargs\": {\"num_features\":256}},\n\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 256, \"out_features\": 256}},\n]\n\n# Parameters relevant to results\n# These parameters will basically never need to change\nstandalone_parameters[\"NUM_LOGS_PER_EPOCH\"] = 10\nstandalone_parameters[\"BEST_MODEL_PATH\"] = \"./best_model.pth\"\n\n\n\n\n", "_____no_output_____" ], [ "# Parameters\nparameters = {\n \"experiment_name\": \"tl_1v2:cores-oracle.run1.framed\",\n \"device\": \"cuda\",\n \"lr\": 0.0001,\n \"n_shot\": 3,\n \"n_query\": 2,\n \"train_k_factor\": 3,\n \"val_k_factor\": 2,\n \"test_k_factor\": 2,\n \"torch_default_dtype\": \"torch.float32\",\n \"n_epoch\": 50,\n \"patience\": 3,\n \"criteria_for_best\": \"target_accuracy\",\n \"x_net\": [\n {\"class\": \"nnReshape\", \"kargs\": {\"shape\": [-1, 1, 2, 256]}},\n {\n \"class\": \"Conv2d\",\n \"kargs\": {\n \"in_channels\": 1,\n \"out_channels\": 256,\n \"kernel_size\": [1, 7],\n \"bias\": False,\n \"padding\": [0, 3],\n },\n },\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\": 256}},\n {\n \"class\": \"Conv2d\",\n \"kargs\": {\n \"in_channels\": 256,\n \"out_channels\": 80,\n \"kernel_size\": [2, 7],\n \"bias\": True,\n \"padding\": [0, 3],\n },\n },\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm2d\", \"kargs\": {\"num_features\": 80}},\n {\"class\": \"Flatten\", \"kargs\": {}},\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 20480, \"out_features\": 256}},\n {\"class\": \"ReLU\", \"kargs\": {\"inplace\": True}},\n {\"class\": \"BatchNorm1d\", \"kargs\": {\"num_features\": 256}},\n {\"class\": \"Linear\", \"kargs\": {\"in_features\": 256, \"out_features\": 256}},\n ],\n \"NUM_LOGS_PER_EPOCH\": 10,\n \"BEST_MODEL_PATH\": \"./best_model.pth\",\n \"n_way\": 16,\n \"datasets\": [\n {\n \"labels\": [\n \"1-10.\",\n \"1-11.\",\n \"1-15.\",\n \"1-16.\",\n \"1-17.\",\n \"1-18.\",\n \"1-19.\",\n \"10-4.\",\n \"10-7.\",\n \"11-1.\",\n \"11-14.\",\n \"11-17.\",\n \"11-20.\",\n \"11-7.\",\n \"13-20.\",\n \"13-8.\",\n \"14-10.\",\n \"14-11.\",\n \"14-14.\",\n \"14-7.\",\n \"15-1.\",\n \"15-20.\",\n \"16-1.\",\n \"16-16.\",\n \"17-10.\",\n \"17-11.\",\n \"17-2.\",\n \"19-1.\",\n \"19-16.\",\n \"19-19.\",\n \"19-20.\",\n \"19-3.\",\n \"2-10.\",\n \"2-11.\",\n \"2-17.\",\n \"2-18.\",\n 
\"2-20.\",\n \"2-3.\",\n \"2-4.\",\n \"2-5.\",\n \"2-6.\",\n \"2-7.\",\n \"2-8.\",\n \"3-13.\",\n \"3-18.\",\n \"3-3.\",\n \"4-1.\",\n \"4-10.\",\n \"4-11.\",\n \"4-19.\",\n \"5-5.\",\n \"6-15.\",\n \"7-10.\",\n \"7-14.\",\n \"8-18.\",\n \"8-20.\",\n \"8-3.\",\n \"8-8.\",\n ],\n \"domains\": [1, 2, 3, 4, 5],\n \"num_examples_per_domain_per_label\": -1,\n \"pickle_path\": \"/root/csc500-main/datasets/cores.stratified_ds.2022A.pkl\",\n \"source_or_target_dataset\": \"source\",\n \"x_transforms\": [],\n \"episode_transforms\": [],\n \"domain_prefix\": \"CORES_\",\n },\n {\n \"labels\": [\n \"3123D52\",\n \"3123D65\",\n \"3123D79\",\n \"3123D80\",\n \"3123D54\",\n \"3123D70\",\n \"3123D7B\",\n \"3123D89\",\n \"3123D58\",\n \"3123D76\",\n \"3123D7D\",\n \"3123EFE\",\n \"3123D64\",\n \"3123D78\",\n \"3123D7E\",\n \"3124E4A\",\n ],\n \"domains\": [32, 38, 8, 44, 14, 50, 20, 26],\n \"num_examples_per_domain_per_label\": 2000,\n \"pickle_path\": \"/root/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl\",\n \"source_or_target_dataset\": \"target\",\n \"x_transforms\": [],\n \"episode_transforms\": [],\n \"domain_prefix\": \"ORACLE.run1_\",\n },\n ],\n \"dataset_seed\": 154325,\n \"seed\": 154325,\n}\n", "_____no_output_____" ], [ "# Set this to True if you want to run this template directly\nSTANDALONE = False\nif STANDALONE:\n print(\"parameters not injected, running with standalone_parameters\")\n parameters = standalone_parameters\n\nif not 'parameters' in locals() and not 'parameters' in globals():\n raise Exception(\"Parameter injection failed\")\n\n#Use an easy dict for all the parameters\np = EasyDict(parameters)\n\nif \"x_shape\" not in p:\n p.x_shape = [2,256] # Default to this if we dont supply x_shape\n\n\nsupplied_keys = set(p.keys())\n\nif supplied_keys != required_parameters:\n print(\"Parameters are incorrect\")\n if len(supplied_keys - required_parameters)>0: print(\"Shouldn't have:\", str(supplied_keys - required_parameters))\n if len(required_parameters - supplied_keys)>0: print(\"Need to have:\", str(required_parameters - supplied_keys))\n raise RuntimeError(\"Parameters are incorrect\")", "_____no_output_____" ], [ "###################################\n# Set the RNGs and make it all deterministic\n###################################\nnp.random.seed(p.seed)\nrandom.seed(p.seed)\ntorch.manual_seed(p.seed)\n\ntorch.use_deterministic_algorithms(True) ", "_____no_output_____" ], [ "###########################################\n# The stratified datasets honor this\n###########################################\ntorch.set_default_dtype(eval(p.torch_default_dtype))", "_____no_output_____" ], [ "###################################\n# Build the network(s)\n# Note: It's critical to do this AFTER setting the RNG\n###################################\nx_net = build_sequential(p.x_net)", "_____no_output_____" ], [ "start_time_secs = time.time()", "_____no_output_____" ], [ "p.domains_source = []\np.domains_target = []\n\n\ntrain_original_source = []\nval_original_source = []\ntest_original_source = []\n\ntrain_original_target = []\nval_original_target = []\ntest_original_target = []", "_____no_output_____" ], [ "# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), \"unit_power\") # unit_power, unit_mag\n# global_x_transform_func = lambda x: normalize(x, \"unit_power\") # unit_power, unit_mag", "_____no_output_____" ], [ "def add_dataset(\n labels,\n domains,\n pickle_path,\n x_transforms,\n episode_transforms,\n domain_prefix,\n 
num_examples_per_domain_per_label,\n source_or_target_dataset:str,\n iterator_seed=p.seed,\n dataset_seed=p.dataset_seed,\n n_shot=p.n_shot,\n n_way=p.n_way,\n n_query=p.n_query,\n train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),\n):\n \n if x_transforms == []: x_transform = None\n else: x_transform = get_chained_transform(x_transforms)\n \n if episode_transforms == []: episode_transform = None\n else: raise Exception(\"episode_transforms not implemented\")\n \n episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])\n\n\n eaf = Episodic_Accessor_Factory(\n labels=labels,\n domains=domains,\n num_examples_per_domain_per_label=num_examples_per_domain_per_label,\n iterator_seed=iterator_seed,\n dataset_seed=dataset_seed,\n n_shot=n_shot,\n n_way=n_way,\n n_query=n_query,\n train_val_test_k_factors=train_val_test_k_factors,\n pickle_path=pickle_path,\n x_transform_func=x_transform,\n )\n\n train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()\n train = Lazy_Iterable_Wrapper(train, episode_transform)\n val = Lazy_Iterable_Wrapper(val, episode_transform)\n test = Lazy_Iterable_Wrapper(test, episode_transform)\n\n if source_or_target_dataset==\"source\":\n train_original_source.append(train)\n val_original_source.append(val)\n test_original_source.append(test)\n\n p.domains_source.extend(\n [domain_prefix + str(u) for u in domains]\n )\n elif source_or_target_dataset==\"target\":\n train_original_target.append(train)\n val_original_target.append(val)\n test_original_target.append(test)\n p.domains_target.extend(\n [domain_prefix + str(u) for u in domains]\n )\n else:\n raise Exception(f\"invalid source_or_target_dataset: {source_or_target_dataset}\")\n ", "_____no_output_____" ], [ "for ds in p.datasets:\n add_dataset(**ds)", "_____no_output_____" ], [ "# from steves_utils.CORES.utils import (\n# ALL_NODES,\n# ALL_NODES_MINIMUM_1000_EXAMPLES,\n# ALL_DAYS\n# )\n\n# add_dataset(\n# labels=ALL_NODES,\n# domains = ALL_DAYS,\n# num_examples_per_domain_per_label=100,\n# pickle_path=os.path.join(get_datasets_base_path(), \"cores.stratified_ds.2022A.pkl\"),\n# source_or_target_dataset=\"target\",\n# x_transform_func=global_x_transform_func,\n# domain_modifier=lambda u: f\"cores_{u}\"\n# )", "_____no_output_____" ], [ "# from steves_utils.ORACLE.utils_v2 import (\n# ALL_DISTANCES_FEET,\n# ALL_RUNS,\n# ALL_SERIAL_NUMBERS,\n# )\n\n\n# add_dataset(\n# labels=ALL_SERIAL_NUMBERS,\n# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),\n# num_examples_per_domain_per_label=100,\n# pickle_path=os.path.join(get_datasets_base_path(), \"oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl\"),\n# source_or_target_dataset=\"source\",\n# x_transform_func=global_x_transform_func,\n# domain_modifier=lambda u: f\"oracle1_{u}\"\n# )\n", "_____no_output_____" ], [ "# from steves_utils.ORACLE.utils_v2 import (\n# ALL_DISTANCES_FEET,\n# ALL_RUNS,\n# ALL_SERIAL_NUMBERS,\n# )\n\n\n# add_dataset(\n# labels=ALL_SERIAL_NUMBERS,\n# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),\n# num_examples_per_domain_per_label=100,\n# pickle_path=os.path.join(get_datasets_base_path(), \"oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl\"),\n# source_or_target_dataset=\"source\",\n# x_transform_func=global_x_transform_func,\n# domain_modifier=lambda u: f\"oracle2_{u}\"\n# )", "_____no_output_____" ], [ "# add_dataset(\n# labels=list(range(19)),\n# domains = [0,1,2],\n# num_examples_per_domain_per_label=100,\n# pickle_path=os.path.join(get_datasets_base_path(), 
\"metehan.stratified_ds.2022A.pkl\"),\n# source_or_target_dataset=\"target\",\n# x_transform_func=global_x_transform_func,\n# domain_modifier=lambda u: f\"met_{u}\"\n# )", "_____no_output_____" ], [ "# # from steves_utils.wisig.utils import (\n# # ALL_NODES_MINIMUM_100_EXAMPLES,\n# # ALL_NODES_MINIMUM_500_EXAMPLES,\n# # ALL_NODES_MINIMUM_1000_EXAMPLES,\n# # ALL_DAYS\n# # )\n\n# import steves_utils.wisig.utils as wisig\n\n\n# add_dataset(\n# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,\n# domains = wisig.ALL_DAYS,\n# num_examples_per_domain_per_label=100,\n# pickle_path=os.path.join(get_datasets_base_path(), \"wisig.node3-19.stratified_ds.2022A.pkl\"),\n# source_or_target_dataset=\"target\",\n# x_transform_func=global_x_transform_func,\n# domain_modifier=lambda u: f\"wisig_{u}\"\n# )", "_____no_output_____" ], [ "###################################\n# Build the dataset\n###################################\ntrain_original_source = Iterable_Aggregator(train_original_source, p.seed)\nval_original_source = Iterable_Aggregator(val_original_source, p.seed)\ntest_original_source = Iterable_Aggregator(test_original_source, p.seed)\n\n\ntrain_original_target = Iterable_Aggregator(train_original_target, p.seed)\nval_original_target = Iterable_Aggregator(val_original_target, p.seed)\ntest_original_target = Iterable_Aggregator(test_original_target, p.seed)\n\n# For CNN We only use X and Y. And we only train on the source.\n# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader\n\ntransform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only\n\ntrain_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)\nval_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)\ntest_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)\n\ntrain_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)\nval_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)\ntest_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)\n\ndatasets = EasyDict({\n \"source\": {\n \"original\": {\"train\":train_original_source, \"val\":val_original_source, \"test\":test_original_source},\n \"processed\": {\"train\":train_processed_source, \"val\":val_processed_source, \"test\":test_processed_source}\n },\n \"target\": {\n \"original\": {\"train\":train_original_target, \"val\":val_original_target, \"test\":test_original_target},\n \"processed\": {\"train\":train_processed_target, \"val\":val_processed_target, \"test\":test_processed_target}\n },\n})", "_____no_output_____" ], [ "from steves_utils.transforms import get_average_magnitude, get_average_power\n\nprint(set([u for u,_ in val_original_source]))\nprint(set([u for u,_ in val_original_target]))\n\ns_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))\nprint(s_x)\n\n# for ds in [\n# train_processed_source,\n# val_processed_source,\n# test_processed_source,\n# train_processed_target,\n# val_processed_target,\n# test_processed_target\n# ]:\n# for s_x, s_y, q_x, q_y, _ in ds:\n# for X in (s_x, q_x):\n# for x in X:\n# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)\n# assert np.isclose(get_average_power(x.numpy()), 1.0)\n ", "{'CORES_3', 'CORES_4', 'CORES_1', 'CORES_5', 'CORES_2'}\n" ], [ "###################################\n# Build the model\n###################################\n# easfsl only wants a tuple for the 
shape\nmodel = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))\noptimizer = Adam(params=model.parameters(), lr=p.lr)", "(2, 256)\n" ], [ "###################################\n# train\n###################################\njig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)\n\njig.train(\n train_iterable=datasets.source.processed.train,\n source_val_iterable=datasets.source.processed.val,\n target_val_iterable=datasets.target.processed.val,\n num_epochs=p.n_epoch,\n num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,\n patience=p.patience,\n optimizer=optimizer,\n criteria_for_best=p.criteria_for_best,\n)", "epoch: 1, [batch: 1 / 6297], examples_per_second: 35.2027, train_label_loss: 2.0536, \n" ], [ "total_experiment_time_secs = time.time() - start_time_secs", "_____no_output_____" ], [ "###################################\n# Evaluate the model\n###################################\nsource_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)\ntarget_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)\n\nsource_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)\ntarget_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)\n\nhistory = jig.get_history()\n\ntotal_epochs_trained = len(history[\"epoch_indices\"])\n\nval_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))\n\nconfusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)\nper_domain_accuracy = per_domain_accuracy_from_confusion(confusion)\n\n# Add a key to per_domain_accuracy for if it was a source domain\nfor domain, accuracy in per_domain_accuracy.items():\n per_domain_accuracy[domain] = {\n \"accuracy\": accuracy,\n \"source?\": domain in p.domains_source\n }\n\n# Do an independent accuracy assesment JUST TO BE SURE!\n# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)\n# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)\n# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)\n# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)\n\n# assert(_source_test_label_accuracy == source_test_label_accuracy)\n# assert(_target_test_label_accuracy == target_test_label_accuracy)\n# assert(_source_val_label_accuracy == source_val_label_accuracy)\n# assert(_target_val_label_accuracy == target_val_label_accuracy)\n\nexperiment = {\n \"experiment_name\": p.experiment_name,\n \"parameters\": dict(p),\n \"results\": {\n \"source_test_label_accuracy\": source_test_label_accuracy,\n \"source_test_label_loss\": source_test_label_loss,\n \"target_test_label_accuracy\": target_test_label_accuracy,\n \"target_test_label_loss\": target_test_label_loss,\n \"source_val_label_accuracy\": source_val_label_accuracy,\n \"source_val_label_loss\": source_val_label_loss,\n \"target_val_label_accuracy\": target_val_label_accuracy,\n \"target_val_label_loss\": target_val_label_loss,\n \"total_epochs_trained\": total_epochs_trained,\n \"total_experiment_time_secs\": total_experiment_time_secs,\n \"confusion\": confusion,\n \"per_domain_accuracy\": per_domain_accuracy,\n },\n \"history\": history,\n \"dataset_metrics\": get_dataset_metrics(datasets, \"ptn\"),\n}", "_____no_output_____" ], [ "ax = 
get_loss_curve(experiment)\nplt.show()", "_____no_output_____" ], [ "get_results_table(experiment)", "_____no_output_____" ], [ "get_domain_accuracies(experiment)", "_____no_output_____" ], [ "print(\"Source Test Label Accuracy:\", experiment[\"results\"][\"source_test_label_accuracy\"], \"Target Test Label Accuracy:\", experiment[\"results\"][\"target_test_label_accuracy\"])\nprint(\"Source Val Label Accuracy:\", experiment[\"results\"][\"source_val_label_accuracy\"], \"Target Val Label Accuracy:\", experiment[\"results\"][\"target_val_label_accuracy\"])", "Source Test Label Accuracy: 0.9984252519596865 Target Test Label Accuracy: 0.5067708333333333\nSource Val Label Accuracy: 0.9984982837528604 Target Val Label Accuracy: 0.5095703125\n" ], [ "json.dumps(experiment)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d0230d4e31ce4e6f4bd2c86515598c4727ce2b29
9,939
ipynb
Jupyter Notebook
Spark/HeartDataset-MLlib.ipynb
elifcansuyildiz/MachineLearningNotebooks
a27b924948b82172be3d90d7edaf8fb60c6e22ca
[ "MIT" ]
1
2020-07-28T12:13:15.000Z
2020-07-28T12:13:15.000Z
Spark/HeartDataset-MLlib.ipynb
elifcansuyildiz/MachineLearningNotebooks
a27b924948b82172be3d90d7edaf8fb60c6e22ca
[ "MIT" ]
null
null
null
Spark/HeartDataset-MLlib.ipynb
elifcansuyildiz/MachineLearningNotebooks
a27b924948b82172be3d90d7edaf8fb60c6e22ca
[ "MIT" ]
null
null
null
32.480392
141
0.499346
[ [ [ "# Logistic Regression on 'HEART DISEASE' Dataset \nElif Cansu YILDIZ", "_____no_output_____" ] ], [ [ "from pyspark.sql import SparkSession\nfrom pyspark.sql.types import *\nfrom pyspark.sql.functions import col, countDistinct\nfrom pyspark.ml.feature import OneHotEncoderEstimator, StringIndexer, VectorAssembler, MinMaxScaler, IndexToString\nfrom pyspark.ml import Pipeline\nfrom pyspark.ml.classification import LogisticRegression\nfrom pyspark.ml.evaluation import BinaryClassificationEvaluator, MulticlassClassificationEvaluator", "_____no_output_____" ], [ "spark = SparkSession\\\n .builder\\\n .appName(\"MachineLearningExample\")\\\n .getOrCreate()", "_____no_output_____" ] ], [ [ "The dataset used is 'Heart Disease' dataset from Kaggle. You can get from this [link](https://www.kaggle.com/ronitf/heart-disease-uci).", "_____no_output_____" ] ], [ [ "df = spark.read.csv('datasets/heart.csv', header = True, inferSchema = True) #Kaggle Dataset\ndf.printSchema()\ndf.show(5)", "root\n |-- age: integer (nullable = true)\n |-- sex: integer (nullable = true)\n |-- cp: integer (nullable = true)\n |-- trestbps: integer (nullable = true)\n |-- chol: integer (nullable = true)\n |-- fbs: integer (nullable = true)\n |-- restecg: integer (nullable = true)\n |-- thalach: integer (nullable = true)\n |-- exang: integer (nullable = true)\n |-- oldpeak: double (nullable = true)\n |-- slope: integer (nullable = true)\n |-- ca: integer (nullable = true)\n |-- thal: integer (nullable = true)\n |-- target: integer (nullable = true)\n\n+---+---+---+--------+----+---+-------+-------+-----+-------+-----+---+----+------+\n|age|sex| cp|trestbps|chol|fbs|restecg|thalach|exang|oldpeak|slope| ca|thal|target|\n+---+---+---+--------+----+---+-------+-------+-----+-------+-----+---+----+------+\n| 63| 1| 3| 145| 233| 1| 0| 150| 0| 2.3| 0| 0| 1| 1|\n| 37| 1| 2| 130| 250| 0| 1| 187| 0| 3.5| 0| 0| 2| 1|\n| 41| 0| 1| 130| 204| 0| 0| 172| 0| 1.4| 2| 0| 2| 1|\n| 56| 1| 1| 120| 236| 0| 1| 178| 0| 0.8| 2| 0| 2| 1|\n| 57| 0| 0| 120| 354| 0| 1| 163| 1| 0.6| 2| 0| 2| 1|\n+---+---+---+--------+----+---+-------+-------+-----+-------+-----+---+----+------+\nonly showing top 5 rows\n\n" ] ], [ [ "__HOW MANY DISTINCT VALUE DO COLUMNS HAVE?__", "_____no_output_____" ] ], [ [ "df.agg(*(countDistinct(col(c)).alias(c) for c in df.columns)).show()", "+---+---+---+--------+----+---+-------+-------+-----+-------+-----+---+----+------+\n|age|sex| cp|trestbps|chol|fbs|restecg|thalach|exang|oldpeak|slope| ca|thal|target|\n+---+---+---+--------+----+---+-------+-------+-----+-------+-----+---+----+------+\n| 41| 2| 4| 49| 152| 2| 3| 91| 2| 40| 3| 5| 4| 2|\n+---+---+---+--------+----+---+-------+-------+-----+-------+-----+---+----+------+\n\n" ] ], [ [ "__SET the Label Column and Input Columns__", "_____no_output_____" ] ], [ [ "labelColumn = \"thal\"\ninput_columns = [t[0] for t in df.dtypes if t[0]!=labelColumn]", "_____no_output_____" ], [ "# Split the data into training and test sets (30% held out for testing)\n(trainingData, testData) = df.randomSplit([0.7, 0.3])\nprint(\"total data count: \", df.count())\nprint(\"train data count: \", trainingData.count())\nprint(\"test data count: \", testData.count())", "total data count: 303\ntrain data count: 218\ntest data count: 85\n" ] ], [ [ "__TRAINING__", "_____no_output_____" ] ], [ [ "assembler = VectorAssembler(inputCols = input_columns, outputCol='features')\n\nlr = LogisticRegression(featuresCol='features', labelCol=labelColumn,\n maxIter=10, regParam=0.3, elasticNetParam=0.8)\n\nstages = 
[assembler, lr]\npartialPipeline = Pipeline().setStages(stages)\nmodel = partialPipeline.fit(trainingData)", "_____no_output_____" ] ], [ [ "__MAKE PREDICTIONS__", "_____no_output_____" ] ], [ [ "predictions = model.transform(testData)\n\npredictionss = predictions.select(\"probability\", \"rawPrediction\", \"prediction\", \n col(labelColumn).alias(\"label\"))\npredictionss[[\"probability\", \"prediction\", \"label\"]].show(5, truncate=False)", "+--------------------------------------------------------------------------------+----------+-----+\n|probability |prediction|label|\n+--------------------------------------------------------------------------------+----------+-----+\n|[0.011082788245690223,0.05729867172540959,0.5740584251416755,0.3575601148872248]|2.0 |2 |\n|[0.011082788245690223,0.05729867172540959,0.5740584251416755,0.3575601148872248]|2.0 |3 |\n|[0.011082788245690223,0.05729867172540959,0.5740584251416755,0.3575601148872248]|2.0 |2 |\n|[0.011082788245690223,0.05729867172540959,0.5740584251416755,0.3575601148872248]|2.0 |2 |\n|[0.012875234771605678,0.06656572644096996,0.5051698495258184,0.4153891892616059]|2.0 |3 |\n+--------------------------------------------------------------------------------+----------+-----+\nonly showing top 5 rows\n\n" ] ], [ [ "__EVALUATION for Binary Classification__", "_____no_output_____" ] ], [ [ "evaluator = BinaryClassificationEvaluator(labelCol=\"label\", rawPredictionCol=\"prediction\", metricName=\"areaUnderROC\")\nareaUnderROC = evaluator.evaluate(predictionss)\nprint(\"Area under ROC = %g\" % areaUnderROC)\n\nevaluator = BinaryClassificationEvaluator(labelCol=\"label\", rawPredictionCol=\"prediction\", metricName=\"areaUnderPR\")\nareaUnderPR = evaluator.evaluate(predictionss)\nprint(\"areaUnderPR = %g\" % areaUnderPR)", "_____no_output_____" ] ], [ [ "__EVALUATION for Multiclass Classification__", "_____no_output_____" ] ], [ [ "evaluator = MulticlassClassificationEvaluator(labelCol=\"label\", predictionCol=\"prediction\", metricName=\"accuracy\")\naccuracy = evaluator.evaluate(predictionss)\nprint(\"accuracy = %g\" % accuracy)\n\nevaluator = MulticlassClassificationEvaluator(labelCol=\"label\", predictionCol=\"prediction\", metricName=\"f1\")\nf1 = evaluator.evaluate(predictionss)\nprint(\"f1 = %g\" % f1)\n\nevaluator = MulticlassClassificationEvaluator(labelCol=\"label\", predictionCol=\"prediction\", metricName=\"weightedPrecision\")\nweightedPrecision = evaluator.evaluate(predictionss)\nprint(\"weightedPrecision = %g\" % weightedPrecision)\n\nevaluator = MulticlassClassificationEvaluator(labelCol=\"label\", predictionCol=\"prediction\", metricName=\"weightedRecall\")\nweightedRecall = evaluator.evaluate(predictionss)\nprint(\"weightedRecall = %g\" % weightedRecall)", "accuracy = 0.564706\nf1 = 0.407607\nweightedPrecision = 0.318893\nweightedRecall = 0.564706\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d023473c65b21a45ca4175db20e6e3fbc18c0d90
181,151
ipynb
Jupyter Notebook
sklearn-guide/chapter03/ml-3.ipynb
a630140621/machine-learning-course
7fba4dd46fe458c8754c2fe7d64627ee98a89c42
[ "MIT" ]
null
null
null
sklearn-guide/chapter03/ml-3.ipynb
a630140621/machine-learning-course
7fba4dd46fe458c8754c2fe7d64627ee98a89c42
[ "MIT" ]
null
null
null
sklearn-guide/chapter03/ml-3.ipynb
a630140621/machine-learning-course
7fba4dd46fe458c8754c2fe7d64627ee98a89c42
[ "MIT" ]
null
null
null
102.172025
70,136
0.828591
[ [ [ "# 一个完整的机器学习项目", "_____no_output_____" ] ], [ [ "import os\nimport tarfile\nimport urllib\nimport pandas as pd\nimport numpy as np\nfrom CategoricalEncoder import CategoricalEncoder", "_____no_output_____" ] ], [ [ "# 下载数据集", "_____no_output_____" ] ], [ [ "DOWNLOAD_ROOT = \"https://raw.githubusercontent.com/ageron/handson-ml/master/\"\nHOUSING_PATH = \"../datasets/housing\"\nHOUSING_URL = DOWNLOAD_ROOT + HOUSING_PATH + \"/housing.tgz\"\n\ndef fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):\n if os.path.isfile(housing_path + \"/housing.tgz\"):\n return print(\"already download\")\n if not os.path.isdir(housing_path):\n os.makedirs(housing_path)\n\n tgz_path = os.path.join(housing_path, \"housing.tgz\")\n urllib.request.urlretrieve(housing_url, tgz_path)\n housing_tgz = tarfile.open(tgz_path)\n housing_tgz.extractall(path=housing_path)\n housing_tgz.close()", "_____no_output_____" ], [ "fetch_housing_data()", "already download\n" ] ], [ [ "# 加载数据集", "_____no_output_____" ] ], [ [ "def load_housing_data(housing_path=HOUSING_PATH):\n csv_path = os.path.join(housing_path, \"housing.csv\")\n return pd.read_csv(csv_path)", "_____no_output_____" ], [ "housing_data = load_housing_data()", "_____no_output_____" ], [ "housing_data.head()", "_____no_output_____" ], [ "housing_data.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 20640 entries, 0 to 20639\nData columns (total 10 columns):\nlongitude 20640 non-null float64\nlatitude 20640 non-null float64\nhousing_median_age 20640 non-null float64\ntotal_rooms 20640 non-null float64\ntotal_bedrooms 20433 non-null float64\npopulation 20640 non-null float64\nhouseholds 20640 non-null float64\nmedian_income 20640 non-null float64\nmedian_house_value 20640 non-null float64\nocean_proximity 20640 non-null object\ndtypes: float64(9), object(1)\nmemory usage: 1.6+ MB\n" ], [ "housing_data[\"ocean_proximity\"].value_counts()", "_____no_output_____" ], [ "housing_data.describe()", "_____no_output_____" ] ], [ [ "# 绘图", "_____no_output_____" ] ], [ [ "%matplotlib inline", "_____no_output_____" ], [ "import matplotlib.pyplot as plt", "_____no_output_____" ], [ "housing_data.hist(bins=50, figsize=(20, 15))", "_____no_output_____" ] ], [ [ "# 创建测试集", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split\n\ntrain_set, test_set = train_test_split(housing_data, test_size=0.2, random_state=42)", "_____no_output_____" ], [ "housing = train_set.copy()\nhousing.plot(kind=\"scatter\" , x=\"longitude\", y=\"latitude\", alpha= 0.3, s=housing[ \"population\" ]/100, label= \"population\", c=\"median_house_value\", cmap=plt.get_cmap(\"jet\"), colorbar=True)", "_____no_output_____" ] ], [ [ "## 皮尔逊相关系数\n因为数据集并不是非常大,你以很容易地使用 `corr()` 方法计算出每对属性间的标准相关系数(standard correlation coefficient,也称作皮尔逊相关系数。\n\n相关系数的范围是 -1 到 1。当接近 1 时,意味强正相关;例如,当收入中位数增加时,房价中位数也会增加。当相关系数接近 -1 时,意味强负相关;你可以看到,纬度和房价中位数有轻微的负相关性(即,越往北,房价越可能降低)。最后,相关系数接近 0,意味没有线性相关性。\n\n> 相关系数可能会完全忽略非线性关系", "_____no_output_____" ] ], [ [ "corr_matrix = housing.corr()", "_____no_output_____" ], [ "corr_matrix[\"median_house_value\"].sort_values(ascending=False)", "_____no_output_____" ] ], [ [ "## 创建一些新的特征", "_____no_output_____" ] ], [ [ "housing[\"rooms_per_household\"] = housing[\"total_rooms\"] / housing[\"households\"]\nhousing[\"bedrooms_per_room\"] = housing[\"total_bedrooms\"] / housing[\"total_rooms\"]\nhousing[\"population_per_household\"] = housing[\"population\"] / housing[\"households\"]", "_____no_output_____" ], [ "corr_matrix = housing.corr()", 
"_____no_output_____" ], [ "corr_matrix[\"median_house_value\"].sort_values(ascending=False)", "_____no_output_____" ] ], [ [ "# 为机器学习准备数据\n\n所有的数据处理 __只能在训练集上进行__,不能使用测试集数据。", "_____no_output_____" ] ], [ [ "housing = train_set.drop(\"median_house_value\", axis=1)\nhousing_labels = train_set[\"median_house_value\"].copy()", "_____no_output_____" ] ], [ [ "## 数据清洗\n\n大多机器学习算法不能处理缺失的特征,因此先创建一些函数来处理特征缺失的问题。\n\n前面,你应该注意到了属性 total_bedrooms 有一些缺失值。有三个解决选项:\n\n* 去掉对应的街区;\n* 去掉整个属性;\n* 进行赋值(0、平均值、中位数等等)。\n\n用 DataFrame 的 `dropna()`,`drop()`,和 `fillna()` 方法,可以方便地实现:\n\n```python\nhousing.dropna(subset=[\"total_bedrooms\"]) # 选项1\nhousing.drop(\"total_bedrooms\", axis= 1) # 选项2\n\nmedian = housing[\"total_bedrooms\"].median()\nhousing[\"total_bedrooms\"].fillna(median) # 选项3\n```", "_____no_output_____" ], [ "Scikit-Learn 提供了一个方便的类来处理缺失值: `Imputer`。下面是其使用方法:首先,需要创建一个 `Imputer` 实例,指定用某属性的中位数来替换该属性所有的缺失值:\n\n```python\nfrom sklearn.impute import SimpleImputer\nimputer = SimpleImputer(missing_values=np.nan, strategy='mean')\n# 因为只有数值属性才能算出中位数,所以需要创建一份不包括文本属性 ocean_proximity 的数据副本:\nhousing_num = housing.drop(\"ocean_proximity\", axis=1)\n# 用 fit() 方法将 imputer 实例拟合到训练数据:\nimputer.fit(housing_num)\n# 使用这个“训练过的” imputer 来对训练集进行转换,将缺失值替换为中位数:\nX = imputer.transform(housing_num)\n```", "_____no_output_____" ] ], [ [ "from sklearn.impute import SimpleImputer\nimputer = SimpleImputer(missing_values=np.nan, strategy='mean')", "_____no_output_____" ], [ "housing_num = housing.drop(\"ocean_proximity\", axis=1)\nimputer.fit(housing_num)", "_____no_output_____" ], [ "X = imputer.transform(housing_num)\nhousing_tr = pd.DataFrame(X, columns=housing_num.columns)", "_____no_output_____" ], [ "housing_tr.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 16512 entries, 0 to 16511\nData columns (total 8 columns):\nlongitude 16512 non-null float64\nlatitude 16512 non-null float64\nhousing_median_age 16512 non-null float64\ntotal_rooms 16512 non-null float64\ntotal_bedrooms 16512 non-null float64\npopulation 16512 non-null float64\nhouseholds 16512 non-null float64\nmedian_income 16512 non-null float64\ndtypes: float64(8)\nmemory usage: 1.0 MB\n" ] ], [ [ "## 处理文本和类别属性\n\n前面,我们丢弃了类别属性 ocean_proximity,因为它是一个文本属性,不能计算出中位数。__大多数机器学习算法跟喜欢和数字打交道,所以让我们把这些文本标签转换为数字__。", "_____no_output_____" ], [ "### LabelEncoder\n\nScikit-Learn 为这个任务提供了一个转换器 `LabelEncoder`", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import LabelEncoder\n\nencoder = LabelEncoder()\nhousing_cat = housing[\"ocean_proximity\"]\nhousing_cat_encoded = encoder.fit_transform(housing_cat)\nhousing_cat_encoded", "_____no_output_____" ], [ "encoder.classes_ # <1H OCEAN 被映射为 0, INLAND 被映射为 1 等等", "_____no_output_____" ] ], [ [ "### OneHotEncoder\n\n注意输出结果是一个 SciPy 稀疏矩阵,而不是 NumPy 数组。\n\n> 当类别属性有数千个分类时,这样非常有用。经过独热编码,我们得到了一个有数千列的矩阵,这个矩阵每行只有一个 1,其余都是 0。使用大量内存来存储这些 0 非常浪费,所以稀疏矩阵只存储非零元素的位置。你可以像一个 2D 数据那样进行使用,但是如果你真的想将其转变成一个(密集的)NumPy 数组,只需调用 `toarray()` 方法。", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import OneHotEncoder\nencoder = OneHotEncoder()\nhousing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape( -1 , 1 ))\nhousing_cat_1hot", "/home/lovecrazy/.local/lib/python3.6/site-packages/sklearn/preprocessing/_encoders.py:415: FutureWarning: The handling of integer data will change in version 0.22. 
Currently, the categories are determined based on the range [0, max(values)], while in the future they will be determined based on the unique values.\nIf you want the future behaviour and silence this warning, you can specify \"categories='auto'\".\nIn case you used a LabelEncoder before this OneHotEncoder to convert the categories to integers, then you can now use the OneHotEncoder directly.\n warnings.warn(msg, FutureWarning)\n" ], [ "housing_cat_1hot.toarray()", "_____no_output_____" ] ], [ [ "### LabelBinarizer\n\n使用类 LabelBinarizer ,我们可以用一步执行这两个转换。\n\n> 向构造器 `LabelBinarizer` 传递 `sparse_output=True`,就可以得到一个稀疏矩阵。", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import LabelBinarizer\nencoder = LabelBinarizer()\nhousing_cat_1hot = encoder.fit_transform(housing_cat)\nhousing_cat_1hot", "_____no_output_____" ] ], [ [ "## 自定义转换器\n\n尽管 Scikit-Learn 提供了许多有用的转换器,你还是需要自己动手写转换器执行任务,比如自定义的清理操作,或属性组合。你需要让自制的转换器与 Scikit-Learn 组件(比如流水线)无缝衔接工作,因为 Scikit-Learn 是依赖鸭子类型的(而不是继承),你所需要做的是创建一个类并执行三个方法: `fit()`(返回 `self` ),`transform()` ,和 `fit_transform()`。\n\n通过添加 `TransformerMixin` 作为基类,可以很容易地得到最后一个。另外,如果你添加 `BaseEstimator` 作为基类(且构造器中避免使用 `*args` 和 `**kargs`),就能得到两个额外的方法(`get_params()` 和 `set_params()`),二者可以方便地进行超参数自动微调。", "_____no_output_____" ] ], [ [ "from sklearn.base import BaseEstimator, TransformerMixin\nrooms_ix, bedrooms_ix, population_ix, household_ix = 3, 4, 5, 6 \n\nclass CombinedAttributesAdder(BaseEstimator, TransformerMixin):\n def __init__ (self, add_bedrooms_per_room = True): # no *args or **kargs \n self.add_bedrooms_per_room = add_bedrooms_per_room\n\n def fit(self, X, y=None):\n return self # nothing else to do \n\n def transform(self, X, y=None):\n rooms_per_household = X[:, rooms_ix] / X[:, household_ix]\n population_per_household = X[:, population_ix] / X[:, household_ix]\n if self.add_bedrooms_per_room:\n bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]\n return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]\n else:\n return np.c_[X, rooms_per_household, population_per_household]\n\nattr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)\nhousing_extra_attribs = attr_adder.transform(housing.values)", "_____no_output_____" ], [ "class DataFrameSelector(BaseEstimator, TransformerMixin):\n def __init__(self, attribute_names):\n self.attribute_names = attribute_names\n\n def fit(self, X, y=None):\n return self\n\n def transform(self, X):\n return X[self.attribute_names].values", "_____no_output_____" ] ], [ [ "## 特征缩放\n\n有两种常见的方法可以让所有的属性有相同的量度:线性函数归一化(Min-Max scaling)和标准化(standardization)。\n\n1. 线性函数归一化(许多人称其为归一化(normalization))很简单:值被转变、重新缩放,直到范围变成 0 到 1。我们通过减去最小值,然后再除以最大值与最小值的差值,来进行归一化。\n> Scikit-Learn 提供了一个转换器 `MinMaxScaler` 来实现这个功能。它有一个超参数 `feature_range`,可以让你改变范围,如果不希望范围是 0 到 1。\n2. 
标准化:首先减去平均值(所以标准化值的平均值总是 0),然后除以方差,使得到的分布具有单位方差。标准化受到异常值的影响很小。例如,假设一个街区的收入中位数由于某种错误变成了100,归一化会将其它范围是 0 到 15 的值变为 `0-0.15`,但是标准化不会受什么影响。\n> Scikit-Learn 提供了一个转换器 `StandardScaler` 来进行标准化。", "_____no_output_____" ], [ "## 转换流水线\n\n因为存在许多数据转换步骤,需要按一定的顺序执行。所以,Scikit-Learn 提供了类 `Pipeline`,来进行这一系列的转换。\n\nPipeline 构造器需要一个定义步骤顺序的名字/估计器对的列表。__除了最后一个估计器,其余都要是转换器__(即,它们都要有 `fit_transform()` 方法)。\n\n当调用流水线的 `fit()` 方法,就会对所有转换器顺序调用 `fit_transform()` 方法,将每次调用的输出作为参数传递给下一个调用,一直到最后一个估计器,它只执行 `fit()` 方法。", "_____no_output_____" ] ], [ [ "from sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import StandardScaler\n\nnum_pipeline = Pipeline([\n ('imputer', SimpleImputer(strategy=\"median\")),\n ('attribs_adder', CombinedAttributesAdder()),\n ('std_scaler', StandardScaler())\n])\n\nhousing_num_tr = num_pipeline.fit_transform(housing_num)", "_____no_output_____" ] ], [ [ "现在就有了一个对数值的流水线,还需要对分类值应用 `LabelBinarizer`:如何将这些转换写成一个流水线呢?\n\nScikit-Learn 提供了一个类 `FeatureUnion` 实现这个功能。你给它一列转换器(可以是所有的转换器),当调用它的 `transform()` 方法,每个转换器的 `transform()` 会被 __并行执行__,等待输出,然后将输出合并起来,并返回结果(当然,调用它的 `fit()` 方法就会调用每个转换器的 `fit()`)。", "_____no_output_____" ] ], [ [ "from sklearn.pipeline import FeatureUnion\n\n\nnum_attribs = list(housing_num)\ncat_attribs = [\"ocean_proximity\"]\nnum_pipeline = Pipeline([\n ('selector', DataFrameSelector(num_attribs)),\n ('imputer', SimpleImputer(strategy=\"median\")),\n ('attribs_adder', CombinedAttributesAdder()),\n ('std_scaler', StandardScaler())\n])\n\ncat_pipeline = Pipeline([\n ('selector', DataFrameSelector(cat_attribs)),\n# ('label_binarizer', LabelBinarizer()),\n ('label_binarizer', CategoricalEncoder()),\n])\n\nfull_pipeline = FeatureUnion(transformer_list=[\n (\"num_pipeline\", num_pipeline),\n (\"cat_pipeline\", cat_pipeline),\n])", "_____no_output_____" ], [ "housing_prepared = full_pipeline.fit_transform(housing)\n\n# d = DataFrameSelector(num_attribs)\n# housing_d = d.fit_transform(housing)\n# imputer = SimpleImputer(strategy=\"median\")\n# housing_i = imputer.fit_transform(housing_d)\n# c = CombinedAttributesAdder()\n# housing_c = c.fit_transform(housing_i)\n# s = StandardScaler()\n# housing_s = s.fit_transform(housing_c)\n\n\n# d = DataFrameSelector(cat_attribs)\n# housing_d = d.fit_transform(housing)\n# l = LabelBinarizer()\n# housing_l = l.fit_transform(housing_d)", "_____no_output_____" ], [ "housing_prepared.toarray()", "_____no_output_____" ] ], [ [ "# 选择并训练模型", "_____no_output_____" ], [ "## 线性回归", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LinearRegression\n\nlin_reg = LinearRegression()\nlin_reg.fit(housing_prepared, housing_labels)", "_____no_output_____" ] ], [ [ "完毕!你现在就有了一个可用的线性回归模型。用一些训练集中的实例做下验证:", "_____no_output_____" ] ], [ [ "some_data = housing.iloc[:5]\nsome_labels = housing_labels.iloc[:5]\nsome_data_prepared = full_pipeline.transform(some_data)\nprint(\"Predictions:\\t\", lin_reg.predict(some_data_prepared))\nprint(\"Labels:\\t\\t\", list(some_labels))", "Predictions:\t [181746.54358872 290558.74963381 244957.50041055 146498.51057872\n 163230.42389721]\nLabels:\t\t [103000.0, 382100.0, 172600.0, 93400.0, 96500.0]\n" ] ], [ [ "## RMSE\n\n使用 Scikit-Learn 的 `mean_squared_error` 函数,用全部训练集来计算下这个回归模型的 RMSE:", "_____no_output_____" ] ], [ [ "from sklearn.metrics import mean_squared_error\nhousing_predictions = lin_reg.predict(housing_prepared)\nlin_mse = mean_squared_error(housing_labels, housing_predictions)\nlin_rmse = np.sqrt(lin_mse)\nlin_rmse", "_____no_output_____" ] ], [ [ "尝试一个更为复杂的模型。\n\n## DecisionTreeRegressor", 
"_____no_output_____" ] ], [ [ "from sklearn.tree import DecisionTreeRegressor\ntree_reg = DecisionTreeRegressor()\ntree_reg.fit(housing_prepared, housing_labels)", "_____no_output_____" ] ], [ [ "RMSE 评估", "_____no_output_____" ] ], [ [ "housing_predictions = tree_reg.predict(housing_prepared)\nlin_mse = mean_squared_error(housing_labels, housing_predictions)\nlin_rmse = np.sqrt(lin_mse)\nlin_rmse", "_____no_output_____" ] ], [ [ "可以发现该模型严重过拟合", "_____no_output_____" ], [ "## 交叉验证\n\n评估模型的一种方法是用函数 `train_test_split` 来分割训练集,得到一个更小的训练集和一个 __交叉验证集__,然后用更小的训练集来训练模型,用验证集来评估。\n\n另一种更好的方法是 __使用 Scikit-Learn 的交叉验证功能__。\n\n下面的代码采用了 K 折交叉验证(K-fold cross-validation):它随机地将训练集分成十个不同的子集,成为“折”,然后训练评估决策树模型 10 次,每次选一个不用的折来做评估,用其它 9 个来做训练。结果是一个包含 10 个评分的数组\n\n> Scikit-Learn 交叉验证功能期望的是效用函数(越大越好)而不是损失函数(越低越好),因此得分函数实际上与 MSE 相反(即负值),所以在计算平方根之前先计算 -scores 。", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import cross_val_score\nscores = cross_val_score(tree_reg, housing_prepared, housing_labels, scoring=\"neg_mean_squared_error\", cv=10)\nrmse_scores = np.sqrt(-scores)", "_____no_output_____" ], [ "def display_scores(scores):\n print(\"Scores:\", scores)\n print(\"Mean:\", scores.mean())\n print(\"Standard deviation:\", scores.std())", "_____no_output_____" ], [ "display_scores(rmse_scores)", "Scores: [64669.81202575 70631.54431519 68182.27830444 70392.73509393\n 72864.28420412 67109.28516943 66338.75100355 69542.07611318\n 65752.27281003 70391.54164896]\nMean: 68587.45806885832\nStandard deviation: 2463.4659300283547\n" ] ], [ [ "## RandomForestRegressor\n\n随机森林是通过用特征的随机子集训练许多决策树。在其它多个模型之上建立模型称为集成学习(Ensemble Learning),它是推进 ML 算法的一种好方法。", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import RandomForestRegressor\nforest_reg = RandomForestRegressor()\nforest_reg.fit(housing_prepared, housing_labels)", "/home/lovecrazy/.local/lib/python3.6/site-packages/sklearn/ensemble/forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.\n \"10 in version 0.20 to 100 in 0.22.\", FutureWarning)\n" ], [ "scores = cross_val_score(forest_reg, housing_prepared, housing_labels, scoring=\"neg_mean_squared_error\", cv=10, n_jobs=-1)\nrmse_scores = np.sqrt(-scores)", "_____no_output_____" ], [ "display_scores(rmse_scores)", "Scores: [49751.31861666 54615.84913363 52738.25864141 54820.43695375\n 55833.78571584 49535.30004953 49969.23161663 52868.72231176\n 51471.9865128 51848.05631902]\nMean: 52345.29458710363\nStandard deviation: 2125.0902130050936\n" ] ], [ [ "# 保存模型\n\n可以使用python自带的 pickle 或 下述函数\n\n```python\nfrom sklearn.externals import joblib\njoblib.dump(my_model, \"my_model.pkl\")\n# load \nmy_model_loaded = joblib.load(\"my_model.pkl\")\n```", "_____no_output_____" ], [ "# 模型微调\n\n假设现在有了一个列表,列表里有几个有希望的模型。现在需要对它们进行微调。", "_____no_output_____" ], [ "## 网格搜索\n\n微调的一种方法是手工调整超参数,直到找到一个好的超参数组合。这么做的话会非常冗长,你也可能没有时间探索多种组合。\n\n应该使用 Scikit-Learn 的 `GridSearchCV` 来做这项搜索工作。你所需要做的是告诉 `GridSearchCV` 要试验有哪些超参数,要试验什么值,`GridSearchCV` 就能用交叉验证试验所有可能超参数值的组合。\n\n例如,下面的代码搜索了 `RandomForestRegressor` 超参数值的最佳组合:", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import GridSearchCV\n\nparam_grid = [\n {'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]},\n {'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]},\n]\n\nforest_reg = RandomForestRegressor()\ngrid_search = GridSearchCV(forest_reg, param_grid, cv=5, scoring='neg_mean_squared_error', n_jobs=-1)\ngrid_search.fit(housing_prepared, housing_labels)", "_____no_output_____" ] 
], [ [ "`param_grid` 告诉 Scikit-Learn 首先评估所有的列在第一个 `dict` 中的 `n_estimators` 和 `max_features` 的 `3 × 4 = 12` 种组合。然后尝试第二个 `dict` 中超参数的 `2 × 3 = 6` 种组合,这次会将超参数 `bootstrap` 设为 `False`。\n\n总之,网格搜索会探索 `12 + 6 = 18` 种 `RandomForestRegressor` 的超参数组合,会训练每个模型五次(因为用的是五折交叉验证)。换句话说,训练总共有 `18 × 5 = 90` 轮!K 折将要花费大量时间,完成后,你就能获得参数的最佳组合,如下所示:", "_____no_output_____" ] ], [ [ "grid_search.best_params_ # 参数最佳组合", "_____no_output_____" ], [ "grid_search.best_estimator_ # 最佳估计器", "_____no_output_____" ] ], [ [ "可以像超参数一样处理数据准备的步骤。例如,__网格搜索可以自动判断是否添加一个你不确定的特征__(比如,使用转换器 `CombinedAttributesAdder` 的超参数 `add_bedrooms_per_room`)。它还能用相似的方法来自动找到处理异常值、缺失特征、特征选择等任务的最佳方法。", "_____no_output_____" ], [ "## 随机搜索\n\n当探索相对较少的组合时,网格搜索还可以。但是当超参数的搜索空间很大时,最好使用 `RandomizedSearchCV`。这个类的使用方法和类`GridSearchCV` 很相似,但它不是尝试所有可能的组合,而是通过选择每个超参数的一个随机值的特定数量的随机组合。\n\n这个方法有两个优点:\n\n* 如果你让随机搜索运行,比如 1000 次,它会探索每个超参数的 1000 个不同的值(而不是像网格搜索那样,只搜索每个超参数的几个值)。\n* 可以方便地通过设定搜索次数,控制超参数搜索的计算量。", "_____no_output_____" ], [ "# 集成方法\n\n另一种微调系统的方法是将表现最好的模型组合起来。组合(集成)之后的性能通常要比单独的模型要好(就像随机森林要比单独的决策树要好),特别是当单独模型的误差类型不同时。", "_____no_output_____" ], [ "# 分析最佳模型和它们的误差\n\n通过分析最佳模型,常常可以获得对问题更深的了解。", "_____no_output_____" ], [ "# 用测试集评估系统\n\n调节完系统之后,终于有了一个性能足够好的系统。现在就可以用测试集评估最后的模型了。\n\n__注意:在测试集上如果模型效果不是很好,一定不要调参,因为这样也无法泛化__", "_____no_output_____" ] ], [ [ "final_model = grid_search.best_estimator_\n\nX_test = test_set.drop(\"median_house_value\", axis=1)\ny_test = test_set[\"median_house_value\"].copy()\n# 清洗数据\nX_test_prepared = full_pipeline.transform(X_test)\n# 预测\nfinal_predictions = final_model.predict(X_test_prepared)\n# RMSE\nfinal_mse = mean_squared_error(y_test, final_predictions)\nfinal_rmse = np.sqrt(final_mse)", "_____no_output_____" ], [ "final_rmse", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ] ]
d0234b84393f6273898e1ed97305af471971dfb7
46,962
ipynb
Jupyter Notebook
02_usecases/sagemaker_recommendations/wip/02_Recommenders_Retrieval_AdHoc.ipynb
MarcusFra/workshop
83f16d41f5e10f9c23242066f77a14bb61ac78d7
[ "Apache-2.0" ]
2,327
2020-03-01T09:47:34.000Z
2021-11-25T12:38:42.000Z
02_usecases/sagemaker_recommendations/wip/02_Recommenders_Retrieval_AdHoc.ipynb
MarcusFra/workshop
83f16d41f5e10f9c23242066f77a14bb61ac78d7
[ "Apache-2.0" ]
209
2020-03-01T17:14:12.000Z
2021-11-08T20:35:42.000Z
02_usecases/sagemaker_recommendations/wip/02_Recommenders_Retrieval_AdHoc.ipynb
MarcusFra/workshop
83f16d41f5e10f9c23242066f77a14bb61ac78d7
[ "Apache-2.0" ]
686
2020-03-03T17:24:51.000Z
2021-11-25T23:39:12.000Z
34.203933
764
0.614454
[ [ [ "# Recommending Movies: Retrieval", "_____no_output_____" ], [ "Real-world recommender systems are often composed of two stages:\n\n1. The retrieval stage is responsible for selecting an initial set of hundreds of candidates from all possible candidates. The main objective of this model is to efficiently weed out all candidates that the user is not interested in. Because the retrieval model may be dealing with millions of candidates, it has to be computationally efficient.\n2. The ranking stage takes the outputs of the retrieval model and fine-tunes them to select the best possible handful of recommendations. Its task is to narrow down the set of items the user may be interested in to a shortlist of likely candidates.\n\nIn this tutorial, we're going to focus on the first stage, retrieval. If you are interested in the ranking stage, have a look at our [ranking](basic_ranking) tutorial.\n\nRetrieval models are often composed of two sub-models:\n\n1. A query model computing the query representation (normally a fixed-dimensionality embedding vector) using query features.\n2. A candidate model computing the candidate representation (an equally-sized vector) using the candidate features\n\nThe outputs of the two models are then multiplied together to give a query-candidate affinity score, with higher scores expressing a better match between the candidate and the query.\n\nIn this tutorial, we're going to build and train such a two-tower model using the Movielens dataset.\n\nWe're going to:\n\n1. Get our data and split it into a training and test set.\n2. Implement a retrieval model.\n3. Fit and evaluate it.\n4. Export it for efficient serving by building an approximate nearest neighbours (ANN) index.\n\n", "_____no_output_____" ], [ "## The dataset\n\nThe Movielens dataset is a classic dataset from the [GroupLens](https://grouplens.org/datasets/movielens/) research group at the University of Minnesota. It contains a set of ratings given to movies by a set of users, and is a workhorse of recommender system research.\n\nThe data can be treated in two ways:\n\n1. It can be interpreted as expressesing which movies the users watched (and rated), and which they did not. This is a form of implicit feedback, where users' watches tell us which things they prefer to see and which they'd rather not see.\n2. It can also be seen as expressesing how much the users liked the movies they did watch. This is a form of explicit feedback: given that a user watched a movie, we can tell roughly how much they liked by looking at the rating they have given.\n\nIn this tutorial, we are focusing on a retrieval system: a model that predicts a set of movies from the catalogue that the user is likely to watch. Often, implicit data is more useful here, and so we are going to treat Movielens as an implicit system. This means that every movie a user watched is a positive example, and every movie they have not seen is an implicit negative example.", "_____no_output_____" ], [ "## Imports\n\n\nLet's first get our imports out of the way.", "_____no_output_____" ] ], [ [ "import os\nimport pprint\nimport tempfile\n\nfrom typing import Dict, Text\n\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_datasets as tfds", "_____no_output_____" ], [ "import tensorflow_recommenders as tfrs", "_____no_output_____" ] ], [ [ "## Preparing the dataset\n\nLet's first have a look at the data.\n\nWe use the MovieLens dataset from [Tensorflow Datasets](https://www.tensorflow.org/datasets). 
Loading `movie_lens/100k_ratings` yields a `tf.data.Dataset` object containing the ratings data and loading `movie_lens/100k_movies` yields a `tf.data.Dataset` object containing only the movies data.\n\nNote that since the MovieLens dataset does not have predefined splits, all data are under `train` split.", "_____no_output_____" ] ], [ [ "# Ratings data.\nratings = tfds.load(\"movie_lens/100k-ratings\", split=\"train\")\n# Features of all the available movies.\nmovies = tfds.load(\"movie_lens/100k-movies\", split=\"train\")", "_____no_output_____" ] ], [ [ "The ratings dataset returns a dictionary of movie id, user id, the assigned rating, timestamp, movie information, and user information:", "_____no_output_____" ] ], [ [ "for x in ratings.take(1).as_numpy_iterator():\n pprint.pprint(x)", "{'bucketized_user_age': 45.0,\n 'movie_genres': array([7]),\n 'movie_id': b'357',\n 'movie_title': b\"One Flew Over the Cuckoo's Nest (1975)\",\n 'raw_user_age': 46.0,\n 'timestamp': 879024327,\n 'user_gender': True,\n 'user_id': b'138',\n 'user_occupation_label': 4,\n 'user_occupation_text': b'doctor',\n 'user_rating': 4.0,\n 'user_zip_code': b'53211'}\n" ] ], [ [ "The movies dataset contains the movie id, movie title, and data on what genres it belongs to. Note that the genres are encoded with integer labels.", "_____no_output_____" ] ], [ [ "for x in movies.take(1).as_numpy_iterator():\n pprint.pprint(x)", "{'movie_genres': array([4]),\n 'movie_id': b'1681',\n 'movie_title': b'You So Crazy (1994)'}\n" ] ], [ [ "In this example, we're going to focus on the ratings data. Other tutorials explore how to use the movie information data as well to improve the model quality.\n\nWe keep only the `user_id`, and `movie_title` fields in the dataset.", "_____no_output_____" ] ], [ [ "ratings = ratings.map(lambda x: {\n \"movie_title\": x[\"movie_title\"],\n \"user_id\": x[\"user_id\"],\n})\nmovies = movies.map(lambda x: x[\"movie_title\"])", "_____no_output_____" ] ], [ [ "To fit and evaluate the model, we need to split it into a training and evaluation set. In an industrial recommender system, this would most likely be done by time: the data up to time $T$ would be used to predict interactions after $T$.\n\n\nIn this simple example, however, let's use a random split, putting 80% of the ratings in the train set, and 20% in the test set.", "_____no_output_____" ] ], [ [ "tf.random.set_seed(42)\nshuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)\n\ntrain = shuffled.take(80_000)\ntest = shuffled.skip(80_000).take(20_000)", "_____no_output_____" ] ], [ [ "Let's also figure out unique user ids and movie titles present in the data. \n\nThis is important because we need to be able to map the raw values of our categorical features to embedding vectors in our models. 
To do that, we need a vocabulary that maps a raw feature value to an integer in a contiguous range: this allows us to look up the corresponding embeddings in our embedding tables.", "_____no_output_____" ] ], [ [ "movie_titles = movies.batch(1_000)\nuser_ids = ratings.batch(1_000_000).map(lambda x: x[\"user_id\"])\n\nunique_movie_titles = np.unique(np.concatenate(list(movie_titles)))\nunique_user_ids = np.unique(np.concatenate(list(user_ids)))\n\nunique_movie_titles[:10]", "_____no_output_____" ] ], [ [ "## Implementing a model\n\nChoosing the architecture of our model is a key part of modelling.\n\nBecause we are building a two-tower retrieval model, we can build each tower separately and then combine them in the final model.", "_____no_output_____", "### The query tower\n\nLet's start with the query tower.\n\nThe first step is to decide on the dimensionality of the query and candidate representations:", "_____no_output_____" ] ], [ [ "embedding_dimension = 32", "_____no_output_____" ] ], [ [ "Higher values will correspond to models that may be more accurate, but will also be slower to fit and more prone to overfitting.\n\nThe second is to define the model itself. Here, we're going to use Keras preprocessing layers to first convert user ids to integers, and then convert those to user embeddings via an `Embedding` layer. Note that we use the list of unique user ids we computed earlier as a vocabulary:", "_____no_output_____", "# _Note: Requires TF 2.3.0_", "_____no_output_____" ] ], [ [ "user_model = tf.keras.Sequential([\n  tf.keras.layers.experimental.preprocessing.StringLookup(\n      vocabulary=unique_user_ids, mask_token=None),\n  # We add an additional embedding to account for unknown tokens.\n  tf.keras.layers.Embedding(len(unique_user_ids) + 1, embedding_dimension)\n])", "_____no_output_____" ] ], [ [ "A simple model like this corresponds exactly to a classic [matrix factorization](https://ieeexplore.ieee.org/abstract/document/4781121) approach. While defining a subclass of `tf.keras.Model` for this simple model might be overkill, we can easily extend it to an arbitrarily complex model using standard Keras components, as long as we return an `embedding_dimension`-wide output at the end.", "_____no_output_____", "### The candidate tower\n\nWe can do the same with the candidate tower.", "_____no_output_____" ] ], [ [ "movie_model = tf.keras.Sequential([\n  tf.keras.layers.experimental.preprocessing.StringLookup(\n      vocabulary=unique_movie_titles, mask_token=None),\n  tf.keras.layers.Embedding(len(unique_movie_titles) + 1, embedding_dimension)\n])", "_____no_output_____" ] ], [ [ "### Metrics\n\nIn our training data we have positive (user, movie) pairs. To figure out how good our model is, we need to compare the affinity score that the model calculates for this pair to the scores of all the other possible candidates: if the score for the positive pair is higher than for all other candidates, our model is highly accurate.\n\nTo do this, we can use the `tfrs.metrics.FactorizedTopK` metric. The metric has one required argument: the dataset of candidates that are used as implicit negatives for evaluation.\n\nIn our case, that's the `movies` dataset, converted into embeddings via our movie model:", "_____no_output_____" ] ], [ [ "metrics = tfrs.metrics.FactorizedTopK(\n  candidates=movies.batch(128).map(movie_model)\n)", "_____no_output_____" ] ], [ [ "### Loss\n\nThe next component is the loss used to train our model. 
TFRS has several loss layers and tasks to make this easy.\n\nIn this instance, we'll make use of the `Retrieval` task object: a convenience wrapper that bundles together the loss function and metric computation:", "_____no_output_____" ] ], [ [ "task = tfrs.tasks.Retrieval(\n  metrics=metrics\n)", "_____no_output_____" ] ], [ [ "The task itself is a Keras layer that takes the query and candidate embeddings as arguments, and returns the computed loss: we'll use that to implement the model's training loop.", "_____no_output_____", "### The full model\n\nWe can now put it all together into a model. TFRS exposes a base model class (`tfrs.models.Model`) which streamlines building models: all we need to do is to set up the components in the `__init__` method, and implement the `compute_loss` method, taking in the raw features and returning a loss value.\n\nThe base model will then take care of creating the appropriate training loop to fit our model.", "_____no_output_____" ] ], [ [ "class MovielensModel(tfrs.Model):\n\n  def __init__(self, user_model, movie_model):\n    super().__init__()\n    self.movie_model: tf.keras.Model = movie_model\n    self.user_model: tf.keras.Model = user_model\n    self.task: tf.keras.layers.Layer = task\n\n  def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:\n    # We pick out the user features and pass them into the user model.\n    user_embeddings = self.user_model(features[\"user_id\"])\n    # And pick out the movie features and pass them into the movie model,\n    # getting embeddings back.\n    positive_movie_embeddings = self.movie_model(features[\"movie_title\"])\n\n    # The task computes the loss and the metrics.\n    return self.task(user_embeddings, positive_movie_embeddings)", "_____no_output_____" ] ], [ [ "The `tfrs.Model` base class is a simple convenience class: it allows us to compute both training and test losses using the same method.\n\nUnder the hood, it's still a plain Keras model. 
You could achieve the same functionality by inheriting from `tf.keras.Model` and overriding the `train_step` and `test_step` functions (see [the guide](https://keras.io/guides/customizing_what_happens_in_fit/) for details):", "_____no_output_____" ] ], [ [ "class NoBaseClassMovielensModel(tf.keras.Model):\n\n def __init__(self, user_model, movie_model):\n super().__init__()\n self.movie_model: tf.keras.Model = movie_model\n self.user_model: tf.keras.Model = user_model\n self.task: tf.keras.layers.Layer = task\n\n def train_step(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor:\n\n # Set up a gradient tape to record gradients.\n with tf.GradientTape() as tape:\n\n # Loss computation.\n user_embeddings = self.user_model(features[\"user_id\"])\n positive_movie_embeddings = self.movie_model(features[\"movie_title\"])\n loss = self.task(user_embeddings, positive_movie_embeddings)\n\n # Handle regularization losses as well.\n regularization_loss = sum(self.losses)\n\n total_loss = loss + regularization_loss\n\n gradients = tape.gradient(total_loss, self.trainable_variables)\n self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))\n\n metrics = {metric.name: metric.result() for metric in self.metrics}\n metrics[\"loss\"] = loss\n metrics[\"regularization_loss\"] = regularization_loss\n metrics[\"total_loss\"] = total_loss\n\n return metrics\n\n def test_step(self, features: Dict[Text, tf.Tensor]) -> tf.Tensor:\n\n # Loss computation.\n user_embeddings = self.user_model(features[\"user_id\"])\n positive_movie_embeddings = self.movie_model(features[\"movie_title\"])\n loss = self.task(user_embeddings, positive_movie_embeddings)\n\n # Handle regularization losses as well.\n regularization_loss = sum(self.losses)\n\n total_loss = loss + regularization_loss\n\n metrics = {metric.name: metric.result() for metric in self.metrics}\n metrics[\"loss\"] = loss\n metrics[\"regularization_loss\"] = regularization_loss\n metrics[\"total_loss\"] = total_loss\n\n return metrics", "_____no_output_____" ] ], [ [ "In these tutorials, however, we stick to using the `tfrs.Model` base class to keep our focus on modelling and abstract away some of the boilerplate.", "_____no_output_____" ], [ "## Fitting and evaluating\n\nAfter defining the model, we can use standard Keras fitting and evaluation routines to fit and evaluate the model.\n\nLet's first instantiate the model.", "_____no_output_____" ] ], [ [ "model = MovielensModel(user_model, movie_model)\nmodel.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.1))", "_____no_output_____" ] ], [ [ "Then shuffle, batch, and cache the training and evaluation data.", "_____no_output_____" ] ], [ [ "cached_train = train.shuffle(100_000).batch(8192).cache()\ncached_test = test.batch(4096).cache()", "_____no_output_____" ] ], [ [ "Then train the model:", "_____no_output_____" ] ], [ [ "model.fit(cached_train, epochs=3)", "Epoch 1/3\n10/10 [==============================] - 5s 464ms/step - factorized_top_k: 0.0508 - factorized_top_k/top_1_categorical_accuracy: 3.2500e-04 - factorized_top_k/top_5_categorical_accuracy: 0.0046 - factorized_top_k/top_10_categorical_accuracy: 0.0117 - factorized_top_k/top_50_categorical_accuracy: 0.0808 - factorized_top_k/top_100_categorical_accuracy: 0.1566 - loss: 69885.1072 - regularization_loss: 0.0000e+00 - total_loss: 69885.1072\nEpoch 2/3\n10/10 [==============================] - 5s 453ms/step - factorized_top_k: 0.1006 - factorized_top_k/top_1_categorical_accuracy: 0.0021 - 
factorized_top_k/top_5_categorical_accuracy: 0.0168 - factorized_top_k/top_10_categorical_accuracy: 0.0346 - factorized_top_k/top_50_categorical_accuracy: 0.1626 - factorized_top_k/top_100_categorical_accuracy: 0.2866 - loss: 67523.3714 - regularization_loss: 0.0000e+00 - total_loss: 67523.3714\nEpoch 3/3\n10/10 [==============================] - 5s 454ms/step - factorized_top_k: 0.1136 - factorized_top_k/top_1_categorical_accuracy: 0.0029 - factorized_top_k/top_5_categorical_accuracy: 0.0215 - factorized_top_k/top_10_categorical_accuracy: 0.0443 - factorized_top_k/top_50_categorical_accuracy: 0.1854 - factorized_top_k/top_100_categorical_accuracy: 0.3139 - loss: 66302.9609 - regularization_loss: 0.0000e+00 - total_loss: 66302.9609\n" ] ], [ [ "As the model trains, the loss is falling and a set of top-k retrieval metrics is updated. These tell us whether the true positive is in the top-k retrieved items from the entire candidate set. For example, a top-5 categorical accuracy metric of 0.2 would tell us that, on average, the true positive is in the top 5 retrieved items 20% of the time.\n\nNote that, in this example, we evaluate the metrics during training as well as evaluation. Because this can be quite slow with large candidate sets, it may be prudent to turn metric calculation off in training, and only run it in evaluation.", "_____no_output_____" ], [ "Finally, we can evaluate our model on the test set:", "_____no_output_____" ] ], [ [ "model.evaluate(cached_test, return_dict=True)", "5/5 [==============================] - 1s 169ms/step - factorized_top_k: 0.0782 - factorized_top_k/top_1_categorical_accuracy: 0.0010 - factorized_top_k/top_5_categorical_accuracy: 0.0097 - factorized_top_k/top_10_categorical_accuracy: 0.0226 - factorized_top_k/top_50_categorical_accuracy: 0.1248 - factorized_top_k/top_100_categorical_accuracy: 0.2328 - loss: 31079.0635 - regularization_loss: 0.0000e+00 - total_loss: 31079.0635\n" ] ], [ [ "Test set performance is much worse than training performance. This is due to two factors:\n\n1. Our model is likely to perform better on the data that it has seen, simply because it can memorize it. This overfitting phenomenon is especially strong when models have many parameters. It can be mediated by model regularization and use of user and movie features that help the model generalize better to unseen data.\n2. The model is re-recommending some of users' already watched movies. These known-positive watches can crowd out test movies out of top K recommendations.\n\nThe second phenomenon can be tackled by excluding previously seen movies from test recommendations. This approach is relatively common in the recommender systems literature, but we don't follow it in these tutorials. If not recommending past watches is important, we should expect appropriately specified models to learn this behaviour automatically from past user history and contextual information. Additionally, it is often appropriate to recommend the same item multiple times (say, an evergreen TV series or a regularly purchased item).", "_____no_output_____" ], [ "## Making predictions\n\nNow that we have a model, we would like to be able to make predictions. 
We can use the `tfrs.layers.ann.BruteForce` layer to do this.", "_____no_output_____" ] ], [ [ "# Create a model that takes in raw query features, and\nindex = tfrs.layers.ann.BruteForce(model.user_model)\n# recommends movies out of the entire movies dataset.\nindex.index(movies.batch(100).map(model.movie_model), movies)\n\n# Get recommendations.\n_, titles = index(tf.constant([\"42\"]))\nprint(f\"Recommendations for user 42: {titles[0, :3]}\")", "Recommendations for user 42: [b'Bridges of Madison County, The (1995)'\n b'Father of the Bride Part II (1995)' b'Rudy (1993)']\n" ] ], [ [ "Of course, the `BruteForce` layer is going to be too slow to serve a model with many possible candidates. The following sections shows how to speed this up by using an approximate retrieval index.", "_____no_output_____" ], [ "## Model serving\n\nAfter the model is trained, we need a way to deploy it.\n\nIn a two-tower retrieval model, serving has two components:\n\n- a serving query model, taking in features of the query and transforming them into a query embedding, and\n- a serving candidate model. This most often takes the form of an approximate nearest neighbours (ANN) index which allows fast approximate lookup of candidates in response to a query produced by the query model.", "_____no_output_____" ], [ "### Exporting a query model to serving\n\nExporting the query model is easy: we can either serialize the Keras model directly, or export it to a `SavedModel` format to make it possible to serve using [TensorFlow Serving](https://www.tensorflow.org/tfx/guide/serving).\n\nTo export to a `SavedModel` format, we can do the following:", "_____no_output_____" ] ], [ [ "model_dir = './models'", "_____no_output_____" ], [ "!mkdir $model_dir", "_____no_output_____" ], [ "# Export the query model.\npath = '{}/query_model'.format(model_dir)\nmodel.user_model.save(path)", "INFO:tensorflow:Assets written to: ./models/query_model/assets\n" ], [ "# Load the query model\nloaded = tf.keras.models.load_model(path, compile=False)\nquery_embedding = loaded(tf.constant([\"10\"]))\n\nprint(f\"Query embedding: {query_embedding[0, :3]}\")", "WARNING:tensorflow:11 out of the last 11 calls to <function recreate_function.<locals>.restored_function_body at 0x7f85d75cce18> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\n" ] ], [ [ "### Building a candidate ANN index\n\nExporting candidate representations is more involved. Firstly, we want to pre-compute them to make sure serving is fast; this is especially important if the candidate model is computationally intensive (for example, if it has many or wide layers; or uses complex representations for text or images). Secondly, we would like to take the precomputed representations and use them to construct a fast approximate retrieval index.\n\n\nWe can use [Annoy](https://github.com/spotify/annoy) to build such an index.\n\nAnnoy isn't included in the base TFRS package. 
To install it, run:", "_____no_output_____" ], [ "### We can now create the index object.", "_____no_output_____" ] ], [ [ "from annoy import AnnoyIndex\n\nindex = AnnoyIndex(embedding_dimension, \"dot\")", "_____no_output_____" ] ], [ [ "Then take the candidate dataset and transform its raw features into embeddings using the movie model:", "_____no_output_____" ] ], [ [ "print(movies)", "<MapDataset shapes: (), types: tf.string>\n" ], [ "movie_embeddings = movies.enumerate().map(lambda idx, title: (idx, title, model.movie_model(title)))\n", "WARNING:tensorflow:Model was constructed with shape (None,) for input Tensor(\"string_lookup_4_input:0\", shape=(None,), dtype=string), but it was called on an input with incompatible shape ().\n" ], [ "print(movie_embeddings.as_numpy_iterator().next())", "(0, b'You So Crazy (1994)', array([ 0.02039416, 0.15982407, 0.0063992 , -0.02597233, 0.12776582,\n -0.07474077, -0.14477485, -0.03757067, 0.09737739, 0.05545571,\n 0.06205893, 0.00479794, -0.1288748 , -0.09362403, 0.03417863,\n -0.03058628, -0.02924258, -0.09905305, -0.08250699, -0.12956885,\n -0.00052435, -0.07832637, -0.00451247, 0.04807298, -0.07815737,\n -0.18195164, 0.10836799, -0.01164408, -0.10894814, -0.03122996,\n -0.10479282, -0.09899054], dtype=float32))\n" ] ], [ [ "And then index the movie_id, movie embedding pairs into our Annoy index:", "_____no_output_____" ] ], [ [ "%%time\n\nmovie_id_to_title = dict((idx, title) for idx, title, _ in movie_embeddings.as_numpy_iterator())\n\n# We unbatch the dataset because Annoy accepts only scalar (id, embedding) pairs.\nfor movie_id, _, movie_embedding in movie_embeddings.as_numpy_iterator():\n index.add_item(movie_id, movie_embedding)\n\n# Build a 10-tree ANN index.\nindex.build(10)", "_____no_output_____" ] ], [ [ "We can then retrieve nearest neighbours:", "_____no_output_____" ] ], [ [ "for row in test.batch(1).take(3):\n query_embedding = model.user_model(row[\"user_id\"])[0]\n candidates = index.get_nns_by_vector(query_embedding, 3)\n print(f\"User ID: {row['user_id']}, Candidates: {[movie_id_to_title[x] for x in candidates]}.\")\n", "User ID: [b'346'], Candidates: [b'Cliffhanger (1993)', b'Hard Target (1993)', b'Rising Sun (1993)'].\nUser ID: [b'602'], Candidates: [b'Jungle2Jungle (1997)', b'Beautician and the Beast, The (1997)', b'Picture Perfect (1997)'].\nUser ID: [b'393'], Candidates: [b'Little Big League (1994)', b'Rent-a-Kid (1995)', b'Corrina, Corrina (1994)'].\n" ], [ "print(type(candidates))", "<class 'list'>\n" ] ], [ [ "## Next steps\n\nThis concludes the retrieval tutorial.\n\nTo expand on what is presented here, have a look at:\n\n1. Learning multi-task models: jointly optimizing for ratings and clicks.\n2. Using movie metadata: building a more complex movie model to alleviate cold-start.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
d02373de98d97dca796f1591fbec9f93968be53f
4,183
ipynb
Jupyter Notebook
clean_code.ipynb
MaiaNgo/python-zerotomastery
6b37021af531b9adc029f1dd20b4aa0be3c6a800
[ "Apache-2.0" ]
null
null
null
clean_code.ipynb
MaiaNgo/python-zerotomastery
6b37021af531b9adc029f1dd20b4aa0be3c6a800
[ "Apache-2.0" ]
null
null
null
clean_code.ipynb
MaiaNgo/python-zerotomastery
6b37021af531b9adc029f1dd20b4aa0be3c6a800
[ "Apache-2.0" ]
null
null
null
16.732
69
0.443701
[ [ [ "## CLEAN CODE", "_____no_output_____" ] ], [ [ "def is_even(num):\n if num % 2 == 0:\n return True\n elif num % 2 != 0: # We really don't need this condition\n return False\n", "_____no_output_____" ], [ "is_even(25)", "_____no_output_____" ], [ "is_even(26)", "_____no_output_____" ], [ "# We will clean up our code above a little bit:\ndef is_even(num):\n if num % 2 == 0:\n return True\n else:\n return False", "_____no_output_____" ], [ "is_even(12)", "_____no_output_____" ], [ "is_even(11)", "_____no_output_____" ], [ "# We can clean up a little more:\ndef is_even(num):\n if num % 2 == 0:\n return True\n return False", "_____no_output_____" ], [ "is_even(5)", "_____no_output_____" ], [ "is_even(6)", "_____no_output_____" ], [ "# We can make our code even nice and simple a little more:\ndef is_even(num):\n return num % 2 == 0", "_____no_output_____" ], [ "is_even(22)", "_____no_output_____" ], [ "is_even(19)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d023904228dba223d06ed99b44cc7ab6210b7b33
541,370
ipynb
Jupyter Notebook
Robert_Cacho_Proj2_stats_notebook.ipynb
freshskates/machine-learning
50927ec5be8b471ace839cdfcfbbe56f1f90432e
[ "MIT" ]
null
null
null
Robert_Cacho_Proj2_stats_notebook.ipynb
freshskates/machine-learning
50927ec5be8b471ace839cdfcfbbe56f1f90432e
[ "MIT" ]
null
null
null
Robert_Cacho_Proj2_stats_notebook.ipynb
freshskates/machine-learning
50927ec5be8b471ace839cdfcfbbe56f1f90432e
[ "MIT" ]
null
null
null
173.794543
43,124
0.861932
[ [ [ "## Instructions\n\nPlease make a copy and rename it with your name (ex: Proj6_Ilmi_Yoon). All grading points should be explored in the notebook but some can be done in a separate pdf file. \n\n*Graded questions will be listed with \"Q:\" followed by the corresponding points.* \n\nYou will be submitting **a pdf** file containing **the url of your own proj6.**\n\n\n---", "_____no_output_____" ], [ "**Hypothesis testing**\n===\n\n**Outline**\n\nAt the end of this week, you will be a pro at:\n- **hypothesis testing** \n * is there something interesting/meaningful going on in my data?\n - one-sample t-test\n - two-sample t-test\n- **correcting for multiple testing**\n * doing thousands of hypothesis tests at a time will increase your likelihood of incorrect conclusions\n * you'll learn how to account for that\n- **false discovery rates**\n * you could be a perfectionist (\"even one wrong conclusion is the worst\"), aka family-wise error rate (FWER) \n * or become a pragmatic (\"of my significant discoveries, i expect x% of them to be false positives.\"), aka false discovery rate (FDR)\n- **permutation tests**\n * if your assumptions about your data are wrong, you may over/underestimate your confidence in your conclusions\n * assume as little as possible about the data with a permutation test\n\n\n", "_____no_output_____" ], [ "**Examples**\n\nIn class, we will talk about 3 examples:\n- confidence intervals\n - how much time do Americans spend on average per day on Netflix? \n\n- one-sample t-test \n - do Americans spend more time on average per day on Netflix compared to before the pandemic?\n\n- two-sample t-test \n - does exercise affect baseline blood pressure? \n \n\n\n**Your project**\n- RNA sequencing: which genes differentiate the different immune cells in your blood?\n - two-sample t-test\n - multiple testing correction\n\n\n\n", "_____no_output_____" ], [ "**How do you make the best of this week?**\n- start seeing all statistics reported around you, and think of how they relate to what we have learned. \n- do rigorous statistics in your work from now on", "_____no_output_____" ], [ "**LET'S BEGIN!**\n\n===============================================================", "_____no_output_____" ] ], [ [ "#import python packages\r\n\r\nimport numpy as np\r\nimport scipy as sp\r\nimport scipy.stats as st\r\nimport pandas as pd\r\nimport seaborn as sns\r\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "rng=np.random.RandomState(1234) #this will ensure the reproducibility of the notebook", "_____no_output_____" ] ], [ [ "**EXAMPLE I:** \n===\n\nHow much time do subscribers spend on average each day on Netflix?\n--\n\nExample discussed in class (Lecture 1). 
The data we are working with are simulated, but the mean time spent on Netflix is inspired by https://www.pcmag.com/news/us-netflix-subscribers-watch-32-hours-and-use-96-gb-of-data-per-day (average of 3.2 hours for subscribers).\n\n\n", "_____no_output_____" ] ], [ [ "#Summarizing data\r\n#================\r\npopulation=np.array([1,1.8,2,3.2,3.3,4,4,4.2])\r\nour_sample=np.array([2,3.2,4])\r\n\r\n#means\r\npopulation_mean=np.mean(population)\r\nprint('Population mean',population_mean.round(2))\r\n\r\nsample_mean=np.mean(our_sample)\r\nprint('- Sample mean',sample_mean.round(2))\r\n\r\n#standard deviations\r\npopulation_sd=np.std(population)\r\nprint('Population standard deviation',population_sd.round(2))\r\n#biased sample sd\r\nbiased_sample_sd=np.sqrt((np.power(our_sample-sample_mean,2).sum())/our_sample.shape[0])\r\nprint('- Biased sample standard deviation',biased_sample_sd.round(2))\r\n#unbiased sample sd\r\nunbiased_sample_sd=np.sqrt((np.power(our_sample-sample_mean,2).sum())/(our_sample.shape[0]-1))\r\nprint('- Unbiased sample standard deviation',unbiased_sample_sd.round(2))\r\n\r\nplt.hist(population,range(0,6),color='black')\r\nplt.yticks([0,1,2])\r\nplt.xlabel('Number of hours spent\\nper day on Netflix')\r\nplt.ylabel('Number of observations')\r\nplt.show()", "Population mean 2.94\n- Sample mean 3.07\nPopulation standard deviation 1.12\n- Biased sample standard deviation 0.82\n- Unbiased sample standard deviation 1.01\n" ], [ "#larger example\r\n\r\nMEAN_NETFLIX=3.2\r\nSD_NETFLIX=1\r\npopulation=rng.normal(loc=MEAN_NETFLIX, \r\n scale=SD_NETFLIX, \r\n size=1000)\r\npopulation[population<0]=0\r\nour_sample=population[0:100]\r\n\r\n#means\r\npopulation_mean=np.mean(population)\r\nprint('Population mean',population_mean.round(2))\r\n\r\nsample_mean=np.mean(our_sample)\r\nprint('- Sample mean',sample_mean.round(2))\r\n\r\n#standard deviations\r\npopulation_sd=np.std(population)\r\nprint('Population standard deviation',population_sd.round(2))\r\n#biased sample sd\r\nbiased_sample_sd=np.sqrt((np.power(our_sample-sample_mean,2).sum())/our_sample.shape[0])\r\nprint('- Biased sample standard deviation',biased_sample_sd.round(2))\r\n#unbiased sample sd\r\nunbiased_sample_sd=np.sqrt((np.power(our_sample-sample_mean,2).sum())/(our_sample.shape[0]-1))\r\nprint('- Unbiased sample standard deviation',unbiased_sample_sd.round(2))", "Population mean 3.22\n- Sample mean 3.24\nPopulation standard deviation 0.97\n- Biased sample standard deviation 0.98\n- Unbiased sample standard deviation 0.99\n" ], [ "#representing sets of datapoints\r\n#===============================\r\n\r\n#histograms\r\nplt.hist(population,[x*0.6 for x in range(10)],color='lightgray',edgecolor='black')\r\nplt.xlabel('Number of hours spent on Netflix\\nper day',fontsize=15)\r\nplt.ylabel('Number of respondents',fontsize=15)\r\nplt.xlim(0,6)\r\nplt.show()\r\n\r\nplt.hist(our_sample,[x*0.6 for x in range(10)],color='lightblue',edgecolor='black')\r\nplt.xlabel('Number of hours spent on Netflix\\nper day',fontsize=15)\r\nplt.ylabel('Number of respondents',fontsize=15)\r\nplt.xlim(0,6)\r\nplt.show()\r\n\r\n#densities\r\nsns.distplot(population, hist=True, kde=True, \r\n bins=[x*0.6 for x in range(10)], color = 'black', \r\n hist_kws={'edgecolor':'black','color':'black'},\r\n kde_kws={'linewidth': 4})\r\nplt.xlabel('Number of hours spent on Netflix\\nper day',fontsize=15)\r\nplt.ylabel('Density',fontsize=15)\r\nplt.xlim(0,6)\r\nplt.show()\r\n\r\nsns.distplot(our_sample, hist=True, kde=True, \r\n bins=[x*0.6 for x in range(10)], color 
= 'blue', \r\n hist_kws={'edgecolor':'black','color':'lightblue'},\r\n kde_kws={'linewidth': 4})\r\nplt.xlabel('Number of hours spent on Netflix\\nper day',fontsize=15)\r\nplt.ylabel('Density',fontsize=15)\r\nplt.xlim(0,6)\r\nplt.show()\r\n#put both data in the same plot\r\nfig,plots=plt.subplots(1)\r\nsns.distplot(population, hist=False, kde=True, \r\n bins=[x*0.6 for x in range(10)], color = 'black', \r\n hist_kws={'edgecolor':'black','color':'black'},\r\n kde_kws={'linewidth': 4},ax=plots)\r\nplots.set_xlim(0,6)\r\nsns.distplot(our_sample, hist=False, kde=True, \r\n bins=[x*0.6 for x in range(10)], color = 'blue', \r\n hist_kws={'edgecolor':'black','color':'black'},\r\n kde_kws={'linewidth': 4},ax=plots)\r\nplots.set_xlabel('Number of hours spent on Netflix\\nper day',fontsize=15)\r\nplots.set_ylabel('Density',fontsize=15)\r\nx = plots.lines[-1].get_xdata()\r\ny = plots.lines[-1].get_ydata()\r\nplots.fill_between(x, 0, y, where=x < 2, color='lightblue', alpha=0.3)\r\nplt.xlim(0,6)\r\nplt.show()\r\n\r\n", "_____no_output_____" ], [ "\r\n#put both data in the same plot\r\nfig,plots=plt.subplots(1)\r\nsns.distplot(population, hist=False, kde=True, \r\n bins=[x*0.6 for x in range(10)], color = 'black', \r\n hist_kws={'edgecolor':'black','color':'black'},\r\n kde_kws={'linewidth': 4},ax=plots)\r\nplots.set_xlim(0,6)\r\nx = plots.lines[-1].get_xdata()\r\ny = plots.lines[-1].get_ydata()\r\nplots.fill_between(x, 0, y, where=(x < 4) & (x>2), color='gray', alpha=0.3)\r\nplt.xlim(0,6)\r\nplots.set_xlabel('Number of hours spent on Netflix\\nper day',fontsize=15)\r\nplots.set_ylabel('Density',fontsize=15)\r\nplt.show()\r\n\r\nnp.multiply((population<=4),(population>=2)).sum()/population.shape[0]", "C:\\Users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages\\seaborn\\distributions.py:2619: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. 
Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `kdeplot` (an axes-level function for kernel density plots).\n warnings.warn(msg, FutureWarning)\n" ], [ "#brute force confidence interval\r\nN_POPULATION=10000\r\nN_SAMPLE=1000\r\npopulation=np.random.normal(loc=MEAN_NETFLIX, \r\n scale=SD_NETFLIX, \r\n size=N_POPULATION)\r\npopulation[population<0]=0\r\nsample_means=[]\r\nfor i in range(N_SAMPLE):\r\n sample_i=np.random.choice(population,10)\r\n mean_i=np.mean(sample_i)\r\n sample_means.append(mean_i)\r\nsample_means=np.array(sample_means)\r\n\r\n#sd of the mean\r\nmeans_mean=np.mean(sample_means)\r\nmeans_sd=np.std(sample_means)\r\nprint('Mean of the means',means_mean)\r\nprint('SEM (SD of the means)',means_sd)\r\n\r\n\r\nplt.hist(sample_means,100,color='red')\r\nplt.xlabel('Number of hours spent on Netflix\\nper day\\nMEANS OF SAMPLES')\r\nplt.xlim(0,6)\r\nplt.axvline(x=means_mean,color='black')\r\nplt.axvline(x=means_mean-means_sd,color='black',linestyle='--')\r\nplt.axvline(x=means_mean+means_sd,color='black',linestyle='--')\r\nplt.show()\r\n\r\n#compute what fraction of points are within 1 means_sd from means_mean\r\nwithin_1sd=0\r\nwithin_2sd=0\r\nfor i in range(sample_means.shape[0]):\r\n m=sample_means[i]\r\n if m>=(means_mean-means_sd) and m<=(means_mean+means_sd):\r\n within_1sd+=1\r\n if m>=(means_mean-2*means_sd) and m<=(means_mean+2*means_sd):\r\n within_2sd+=1\r\nprint('within 1 means SD:',within_1sd/sample_means.shape[0])\r\nprint('within 1 means SD:',within_2sd/sample_means.shape[0])", "Mean of the means 3.1934702093436456\nSEM (SD of the means) 0.3191115562654143\n" ], [ "from scipy import stats\r\nprint('SEM (SD of the means), empirically calculated',means_sd.round(2))\r\nprint('SEM computed in python',stats.sem(sample_i).round(2))\r\n", "SEM (SD of the means), empirically calculated 0.32\nSEM computed in python 0.34\n" ], [ "#one sample t test in python\r\nfrom scipy.stats import ttest_1samp\r\n\r\nMEAN_NETFLIX=3.2\r\nSD_NETFLIX=1\r\npopulation=rng.normal(loc=MEAN_NETFLIX, \r\n scale=SD_NETFLIX, \r\n size=1000)\r\npopulation[population<0]=0\r\nour_sample=population[0:10]\r\nprint(our_sample.round(2))\r\nprint(our_sample.mean())\r\nprint(our_sample.std())\r\n\r\nTEST_VALUE=1.5\r\n\r\nt, pvalue = ttest_1samp(our_sample, popmean=TEST_VALUE)\r\nprint('t', t.round(2)) \r\nprint('p-value', pvalue.round(6))", "[1.62 1.58 3.25 1.52 4.6 2.36 4.01 3.15 3.73 2.39]\n2.8206757978474783\n1.0381608736065147\nt 3.82\np-value 0.004113\n" ], [ "#confidence intervals\r\n#=====================\r\n#take 100 samples\r\n#compute their confidence intervals\r\n#plot them\r\n\r\nimport scipy.stats as st\r\n\r\nN_SAMPLE=200\r\nfor CONFIDENCE in [0.9,0.98,0.999999]:\r\n population=rng.normal(loc=MEAN_NETFLIX, \r\n scale=SD_NETFLIX, \r\n size=N_POPULATION)\r\n population[population<0]=0\r\n sample_means=[]\r\n ci_lows=[]\r\n ci_highs=[]\r\n for i in range(N_SAMPLE):\r\n sample_i=np.random.choice(population,10)\r\n mean_i=np.mean(sample_i)\r\n ci=st.t.interval(alpha=CONFIDENCE, \r\n df=sample_i.shape[0]-1, loc=mean_i, scale=st.sem(sample_i))\r\n ci_lows.append(ci[0])\r\n ci_highs.append(ci[1])\r\n sample_means.append(mean_i)\r\n\r\n data=pd.DataFrame({'mean':sample_means,'ci_low':ci_lows,'ci_high':ci_highs})\r\n data=data.sort_values(by='mean')\r\n data.index=range(N_SAMPLE)\r\n print(data)\r\n\r\n for i in range(N_SAMPLE):\r\n color='gray'\r\n if MEAN_NETFLIX>data['ci_high'][i] or MEAN_NETFLIX<data['ci_low'][i]:\r\n color='red'\r\n 
plt.plot((data['ci_low'][i],data['ci_high'][i]),(i,i),color=color)\r\n #plt.scatter(data['mean'],range(N_SAMPLE),color='black')\r\n plt.axvline(x=MEAN_NETFLIX,color='black',linestyle='--')\r\n plt.xlabel('Mean time spent on Netflix')\r\n plt.ylabel('Sampling iteration')\r\n plt.xlim(0,10)\r\n plt.show()", " mean ci_low ci_high\n0 2.328428 1.865753 2.791103\n1 2.380752 1.843621 2.917883\n2 2.447427 1.780282 3.114572\n3 2.563095 2.087869 3.038322\n4 2.631692 2.047800 3.215584\n.. ... ... ...\n195 3.897081 3.015483 4.778678\n196 3.908490 3.559905 4.257074\n197 3.961320 3.316092 4.606547\n198 3.977324 3.158234 4.796413\n199 4.137394 3.541674 4.733114\n\n[200 rows x 3 columns]\n" ], [ "#confidence intervals\r\n#=====================\r\n#take 100 samples\r\n#compute their confidence intervals\r\n#plot them\r\n\r\nimport scipy.stats as st\r\n\r\nN_SAMPLE=200\r\nfor CONFIDENCE in [0.9,0.98,0.999999]:\r\n population=rng.normal(loc=MEAN_NETFLIX, \r\n scale=SD_NETFLIX, \r\n size=N_POPULATION)\r\n population[population<0]=0\r\n sample_means=[]\r\n ci_lows=[]\r\n ci_highs=[]\r\n for i in range(N_SAMPLE):\r\n sample_i=np.random.choice(population,100)\r\n mean_i=np.mean(sample_i)\r\n ci=st.t.interval(alpha=CONFIDENCE, \r\n df=sample_i.shape[0]-1, loc=mean_i, scale=st.sem(sample_i))\r\n ci_lows.append(ci[0])\r\n ci_highs.append(ci[1])\r\n sample_means.append(mean_i)\r\n\r\n data=pd.DataFrame({'mean':sample_means,'ci_low':ci_lows,'ci_high':ci_highs})\r\n data=data.sort_values(by='mean')\r\n data.index=range(N_SAMPLE)\r\n print(data)\r\n\r\n for i in range(N_SAMPLE):\r\n color='gray'\r\n if MEAN_NETFLIX>data['ci_high'][i] or MEAN_NETFLIX<data['ci_low'][i]:\r\n color='red'\r\n plt.plot((data['ci_low'][i],data['ci_high'][i]),(i,i),color=color)\r\n #plt.scatter(data['mean'],range(N_SAMPLE),color='black')\r\n plt.axvline(x=MEAN_NETFLIX,color='black',linestyle='--')\r\n plt.xlabel('Mean time spent on Netflix')\r\n plt.ylabel('Sampling iteration')\r\n plt.xlim(0,10)\r\n plt.show()", " mean ci_low ci_high\n0 2.892338 2.733312 3.051364\n1 2.895408 2.729131 3.061684\n2 2.946665 2.799507 3.093822\n3 2.957376 2.781855 3.132897\n4 2.968571 2.784393 3.152748\n.. ... ... 
...\n195 3.427391 3.250704 3.604077\n196 3.431677 3.258160 3.605194\n197 3.434345 3.274334 3.594356\n198 3.450142 3.279478 3.620806\n199 3.476758 3.310008 3.643508\n\n[200 rows x 3 columns]\n" ] ], [ [ "**EXAMPLE II:** \n===\n\nIs exercise associated with lower baseline blood pressure?\n--\n\nWe will simulate data with control mean 120 mmHg, treatment mean 116 mmHg and population sd 5 for both conditions.", "_____no_output_____" ] ], [ [ "#simulate dataset\r\n#=====================\r\n\r\ndef sample_condition_values(condition_mean,\r\n condition_var,\r\n condition_N,\r\n condition=''):\r\n \r\n condition_values=np.random.normal(loc = condition_mean, \r\n scale=condition_var,\r\n size = condition_N)\r\n\r\n data_condition_here=pd.DataFrame({'BP':condition_values,\r\n 'condition':condition})\r\n return(data_condition_here)\r\n\r\n#=========================================================================\r\nN_per_condition=10\r\nctrl_mean=120\r\ntest_mean=116 \r\nv=5\r\n\r\nnp.random.seed(1)\r\n\r\ndata_ctrl=sample_condition_values(condition_mean=ctrl_mean,\r\n condition_N=N_per_condition,\r\n condition_var=v,\r\n condition='couch')\r\n\r\ndata_test=sample_condition_values(condition_mean=test_mean,\r\n condition_N=N_per_condition,\r\n condition_var=v,\r\n condition='exercise')\r\n\r\ndata=pd.concat([data_ctrl,data_test],axis=0)\r\nprint(data)", " BP condition\n0 128.121727 couch\n1 116.941218 couch\n2 117.359141 couch\n3 114.635157 couch\n4 124.327038 couch\n5 108.492307 couch\n6 128.724059 couch\n7 116.193965 couch\n8 121.595195 couch\n9 118.753148 couch\n0 123.310540 exercise\n1 105.699296 exercise\n2 114.387914 exercise\n3 114.079728 exercise\n4 121.668847 exercise\n5 110.500544 exercise\n6 115.137859 exercise\n7 111.610708 exercise\n8 116.211069 exercise\n9 118.914076 exercise\n" ], [ "#visualize data\r\n#=====================\r\n\r\nsns.catplot(x='condition',y='BP',data=data,height=2,aspect=1.5)\r\nplt.ylabel('BP')\r\nplt.show()", "_____no_output_____" ], [ "sns.catplot(data=data,x='condition',y='BP',\r\n jitter=1,\r\n )\r\nplt.show()\r\n\r\nsns.catplot(data=data,x='condition',y='BP',kind='box',)\r\nplt.show()\r\n\r\nsns.catplot(data=data,x='condition',y='BP',kind='violin',)\r\nplt.show()\r\n\r\nfig,plots=plt.subplots(1)\r\nsns.boxplot(data=data,x='condition',y='BP',\r\n ax=plots,\r\n )\r\nsns.stripplot(data=data,x='condition',y='BP',\r\n jitter=1,\r\n ax=plots,alpha=0.25,\r\n )\r\n\r\nplt.show()", "_____no_output_____" ] ], [ [ "In our hypothesis test, we ask if these two groups differ significantly from each other. It's a bit hard to say just from looking at the plot. \n\nThis is where statistics comes in. It's time to:\n\n*3. Think about how much the data surprise you, given your null model*\n\nWe'll convert this step to some math, as follows:\n\n**Step 1. summarize the difference between the groups with a number.**\n\nThis is called a **test statistic** \n\n\"How to define the test statistic?\" you say?\n\nThe world is your oyster. You are free to choose anything you wish. \n\n(Later, we'll see that some choices come with nice math, which is why they are typically used. But a test statistic could be anything)\n\nTo demonstrate this intuition, let's come up with a very basic test statistic. 
For example, let's compute the difference between the BP in the 2 groups.\n\n", "_____no_output_____" ] ], [ [ "mean_ctrl=np.mean(data[data['condition']=='couch']['BP'])\r\nmean_test=np.mean(data[data['condition']=='exercise']['BP'])\r\n\r\ntest_stat=mean_test-mean_ctrl\r\nprint('test statistic =',test_stat)", "test statistic = -4.362237456546268\n" ] ], [ [ "What is this number telling us? Is the BP significantly different between the 2 conditions? It's impossible to say looking at only this number.\n\nWe have to ask ourselves, well, what did you expect?\n\nThis takes us to the next step.\n", "_____no_output_____" ], [ "\n**ii) think about what the test statistic would be if in reality there were no difference between the 2 groups. It will be a distribution, not just a single number, because you would expect to see some variation in the test statistic whenever you do an experiment, due to sampling noise, and due to variation in the population.**\n\nHere is where the wasteful part comes in. You go and repeat the measurement on 1000 different couch grouos. Then, for each of these, you compute the same test statistic = the difference between the mean in that sample and your original couch group.\n", "_____no_output_____" ] ], [ [ "np.random.seed(1)\r\ndata_exp2=sample_condition_values(condition_mean=ctrl_mean,\r\n condition_N=N_per_condition,\r\n condition_var=v,\r\n condition='control_0')\r\nfor i in range(1,1001):\r\n data_exp2=pd.concat([data_exp2,sample_condition_values(condition_mean=ctrl_mean,\r\n condition_N=N_per_condition,\r\n condition_var=v,\r\n condition='control_'+str(i))])\r\n\r\nprint(data_exp2)", " BP condition\n0 128.121727 control_0\n1 116.941218 control_0\n2 117.359141 control_0\n3 114.635157 control_0\n4 124.327038 control_0\n.. ... 
...\n5 120.846771 control_1000\n6 123.368115 control_1000\n7 118.363992 control_1000\n8 118.473504 control_1000\n9 122.624327 control_1000\n\n[10010 rows x 2 columns]\n" ], [ "#now, let's plot the distribution of the test statistic under the null hypothesis\r\n\r\n#get mean of each control\r\nexp2_means=data_exp2.groupby('condition').mean()\r\nprint(exp2_means.head())\r\n\r\nnull_test_stats=exp2_means-ctrl_mean\r\n\r\nplt.hist(np.array(null_test_stats).flatten(),20,color='black')\r\nplt.xlabel('Test statistic')\r\nplt.axvline(x=test_stat,color='red')\r\n", " BP\ncondition \ncontrol_0 119.514296\ncontrol_1 119.152058\ncontrol_10 120.201086\ncontrol_100 118.518008\ncontrol_1000 119.698545\n" ], [ "null_test_stats", "_____no_output_____" ], [ "for i in range(null_test_stats.shape[0]):\r\n if null_test_stats['BP'][i] > 4:\r\n print(null_test_stats.index[i], null_test_stats['BP'][i])", "control_179 5.185586379422304\ncontrol_43 4.129129723892561\ncontrol_665 4.129526751544148\ncontrol_775 4.117971581535585\ncontrol_838 4.442499228276958\ncontrol_952 4.185169220649826\ncontrol_970 4.141431724614719\n" ], [ "for i in range(null_test_stats.shape[0]):\r\n if null_test_stats['BP'][i]<-4:\r\n print(null_test_stats.index[i],null_test_stats['BP'][i])", "control_161 -4.148404587530905\ncontrol_202 -4.336675796802723\ncontrol_234 -4.137633016906577\ncontrol_854 -4.612546175530198\ncontrol_955 -4.0685297239182034\n" ], [ "sns.catplot(data=data_exp2,x='condition',y='BP',order=['control_0',\r\n 'control_1','control_2','control_3',\r\n 'control_4','control_5',#'control_6',\r\n #'control_7','control_8','control_9','control_10',\r\n 'control_179','control_161',],\r\n color='black',#kind='box',\r\n aspect=2,height=2)", "_____no_output_____" ], [ "x=5\r\nplt.hist(np.array(null_test_stats[1:2]).flatten(),range(-4,4),color='black',)\r\nplt.xlabel('Test statistic (t)')\r\nplt.xlim(-x,x)\r\nplt.ylim(0,5)\r\nplt.show()\r\n#plt.axvline(x=t_stat,color='red')\r\n\r\nplt.hist(np.array(null_test_stats[1:3]).flatten(),range(-4,4),color='black',)\r\nplt.xlabel('Test statistic (t)')\r\nplt.xlim(-x,x)\r\nplt.ylim(0,5)\r\n\r\nplt.show()\r\n#plt.axvline(x=t_stat,color='red')\r\n\r\nplt.hist(np.array(null_test_stats[1:4]).flatten(),range(-4,4),color='black',)\r\nplt.xlabel('Test statistic (t)')\r\nplt.xlim(-x,x)\r\nplt.ylim(0,5)\r\n\r\nplt.show()\r\n#plt.axvline(x=t_stat,color='red')\r\n\r\nplt.hist(np.array(null_test_stats[1:5]).flatten(),range(-4,4),color='black',)\r\nplt.xlabel('Test statistic (t)')\r\nplt.xlim(-x,x)\r\nplt.ylim(0,5)\r\n\r\nplt.show()\r\n#plt.axvline(x=t_stat,color='red')\r\n\r\nplt.hist(np.array(null_test_stats[1:6]).flatten(),range(-4,4),color='black',)\r\nplt.xlabel('Test statistic (t)')\r\nplt.xlim(-x,x)\r\nplt.ylim(0,5)\r\n\r\nplt.show()\r\n#plt.axvline(x=t_stat,color='red')\r\n\r\nplt.hist(np.array(null_test_stats).flatten(),20,color='black',)\r\nplt.xlabel('Test statistic (t)')\r\nplt.xlim(-x,x)\r\n#plt.ylim(0,5)\r\nplt.show()\r\n\r\nplt.hist(np.array(null_test_stats).flatten(),20,color='black',)\r\nplt.xlabel('Test statistic (t)')\r\nplt.xlim(-x,x)\r\n#plt.ylim(0,5)\r\nplt.axvline(x=test_stat,color='red')\r\nplt.show()", "_____no_output_____" ] ], [ [ "In black we have the distribution of test statistics we obtained from the 1000 experiments measuring couch participants. 
In other words, this is the distribution of the test statistic under the null hypothesis.\n\nThe red line shows the test statistic from our comparison of exercise group vs with couch group.", "_____no_output_____" ], [ "**Is our difference in expression significant?**\n\nif the null is true, in other words, if in reality there is no difference between couch and exercise, what is the probability of seeing such an extreme difference between their means (in other words, such an extreme test statistic)?\n\nWe can compute this from the plot above. We go to our null distribution, and count how many times we got a more extreme test statistic in our null experiment than the one we got for the couch vs exercise comparison.\n\n", "_____no_output_____" ] ], [ [ "count_more_extreme=int(np.sum(np.abs(null_test_stats)>=np.abs(test_stat)))\r\nprint(count_more_extreme,'times we got a more extreme test statistic under the null')\r\nprint(count_more_extreme / 1000,'fraction of the time we got a more extreme test statistic under the null')", "3 times we got a more extreme test statistic under the null\n0.003 fraction of the time we got a more extreme test statistic under the null\n" ] ], [ [ "What we computed above is called a **p-value**. Now, this is a very often misunderstood term, so let's think about it deeply. \n\nDeeply.\n\nDeeply.\n\nAbout what it is, what it is not.\n\n**P-values**\n--\n\nTo remember what a p-value is, you decide to make a promise to me and more importantly yourself, that from now on, any sentence in which you mention a p-value will start with \"if the null were true, ...\".\n\n**A p-value IS:**\n- if the null were true, the probability of observing something as extreme or more extreme than your test statistic.\n- it's the quantification of your \"whoa!\", given your null hypothesis. More \"whoa!\" = smaller p-value.\n\n**A p-value is NOT:**\n- the probability that the null hypothesis is wrong. We don't know the probability of that. That's sort of up to the universe. \n- the probability that the null hypothesis is wrong. This is so important, that it's worth putting it on the list twice.\n\nWhy is this distinction so important? \n\nFirst, because we can be very good at estimating what happens under the null. It's much more challenging to think about other scenarios. For instance, if you needed to make a model for the BP being different between 2 conditions, how different do you expect them to be? Is the average couch group at 120 and the exercise at 110? Or the couch at 125 and exercise at 130? Do you make a model for each option and grow old estimating all possible models?\n\nSecond, it's also a matter of being conservative. It's common courtesy to assume the 2 conditions are the same. I expect you to come to me and convince me that it would be REALLY unlikely to observe what we have just seen given the null, to make it worthwhile my time. It would be weird to just assume the BP is different between the 2 conditions and have to prove that they are the same. We'd be swimming in false positives.\n\n**Statistical significance**\n\nNow that we have a p-value, you need to ask yourself where you set a cutoff for something being unlikely enough to be \"significant\", or worth your attention. Usually, that's 0.05, or 0.01, or 0.1. Yes, essentially it's a somewhat arbitrary small number.\n\nI reiterate: this does not mean that the exercise group is different from the couch group for sure. 
If you were to do the experiment 1000 times with groups of participants assigned to \"couch\", in a small subset of your experiments, you'll get a test statistic as or more extreme than the one we found in our experiment comparing. But given that it's unlikely to get this result under the null hypohesis, you call it a significant difference, one that makes you think.\n\nIn summary: \n- you look at your p-value - and you think about the probability of getting your result under the null, as you need to include these words in any sentence with p-values -\n- compare it with your significance threshold \n- if it is less than that threshold, you call that difference in expression significant between KO and control.\n\n**Technical note: one-tailed vs two-tailed tests**\n\n*Depending on what you believe would be the possible alternative to your null hypothesis (conveniently called the alternative hypothesis), you may compute the p-value differently.*\n\n*Specifically, in our example above, we computed the p-value by asking:*\n- *if the null were true, what is the probability of obtaining a test statistic as extreme or more extreme than the one we've seen. That means we asked whether there were test statistics larger than our test statistic, or lower than minus our test statistic. This is called a two-tailed test, because we looked at both sides (both tails) of the distribution under the null.*\n\n*If your alternative hypothesis were that the treatment specifically decreases baseline blood pressure, you'd compute the p-value differently, as you'd look under the null at only what fraction of the time you've seen a test statistic lower than the one we've seen. This is a one-tailed test.*\n\n*Of course, this is not an invitation to use one-tailed tests to try to get more significant p-values, since by definition the p-values from a one-tailed test will be smaller than those for a two-tailed test. You should define your alternative hypothesis based on deep thought. I personally like to be as conservative as possible, and as such strongly prefer two-tailed tests.*\n\n\n\n\n\n\n\n", "_____no_output_____" ], [ "**Hypothesis testing in a nutshell**\n\n- come up with a **null hypothesis**. \n * In our case: the gene does not change in expression.\n- collect some data\n * yay, we love data!\n- define a **test statistic** to measure your quantity of interest. \n * here we looked at the difference between means, but as we'll see below, there are more sophisticated ways to go about it.\n- figure out the **distribution of the test statistic under the null** hypothesis\n * here, we did this by repeating the measurement on the same type of cells 1000 times. Next, we'll learn that under certain conditions we can comoute this distribution analytically, rather than having to do thousands of experiments.\n- compute a **p-value**\n * that tells you if the null were true, the probability of getting your test statistic or something even more outrageous\n- decide if **significant**\n * is p-value below a pre-defined threshold\n\nIf you deeply understand this, you're on a very good path to understand a LARGE fraction of all statistics you'll find in genomics.", "_____no_output_____" ], [ "**PART II. 
EXAMPLE HYPOTHESIS TESTING USING THE T-TEST**\n---\n\nNow, let's do a t-test.\n\n", "_____no_output_____" ] ], [ [ "from scipy.stats import ttest_ind\r\nt_stat,pvalue=ttest_ind(data[data['condition']=='exercise']['BP'],\r\n data[data['condition']=='couch']['BP'],\r\n )\r\nprint(t_stat,pvalue)\r\n", "-1.6837025738594624 0.10950131551739636\n" ], [ "#as before, compare to the distribution\r\nnull_test_stats=[]\r\nfor i in range(1000):\r\n current_t,current_pvalue=ttest_ind(data_exp2[data_exp2['condition']=='control_'+str(i)]['BP'],\r\n data_exp2[data_exp2['condition']=='control_0']['BP'],\r\n )\r\n null_test_stats.append(current_t)\r\n\r\nplt.hist(np.array(null_test_stats).flatten(),color='black')\r\nplt.xlabel('Test statistic (t)')\r\nplt.axvline(x=t_stat,color='red')\r\n\r\ncount_more_extreme=int(np.sum(np.abs(null_test_stats)>=np.abs(t_stat)))\r\nprint(count_more_extreme,'times we got a more extreme test statistic under the null')\r\nprint(count_more_extreme/1000,'fraction of the time we got a more extreme test statistic under the null = p-value')", "10 times we got a more extreme test statistic under the null\n0.01 fraction of the time we got a more extreme test statistic under the null = p-value\n" ] ], [ [ "Now, the exciting thing is that we didn't have to perform the second experiment to get an empirical distribution of the test statistic under the null. Rather, we were able to estimate it analytically. And indeed, the p-value we obtained from the t-test is similar to the one we got from our big experiment!\n", "_____no_output_____" ], [ "Ok, so by now, you should be pros at hypothesis tests.\n\nRemember: decide on the null, compute test statistic, get the distribution of the test statistic under the null, compute a p-value, decide if significant.", "_____no_output_____" ], [ "There are of course many other types of hypothesis tests that don't look at the difference between groups as we did here. For instace, in GWAS, you want to see if a mutation is enriched in a disease cohort compared to healthy samples, and you do a chi-square test. \nOr maybe you have more than 2 conditions. Then you do ANOVA, rather than a t-test.\n", "_____no_output_____" ], [ "**PROJECT: EXAMPLE III:** \r\n===\r\n\r\nRNA sequencing: which genes are characteristic for different types of immune cells in your body?\r\n--", "_____no_output_____" ], [ "Motivation\n--\n\nAlthough all cells in our body have the same DNA, they can have wildly different functions. That is because they activate different genes, for example your brain cells turn on genes that lead to production of neurotransmitters while liver cells activate genes encoding enzymes.\n\nHere, you will compare different types of immune cells (e.g. 
B-cells that make your antibodies, and T-cells which fight infections), and identify which genes are specifically active in each type of cell.", "_____no_output_____" ] ], [ [ "#install scanpy\r\n!pip install scanpy", "Requirement already satisfied: scanpy in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (1.8.1)\nRequirement already satisfied: numpy>=1.17.0 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (1.20.0)\nRequirement already satisfied: h5py>=2.10.0 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (3.4.0)\nRequirement already satisfied: scikit-learn>=0.22 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (1.0)\nRequirement already satisfied: numba>=0.41.0 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (0.54.1)\nRequirement already satisfied: natsort in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (7.1.1)\nRequirement already satisfied: umap-learn>=0.3.10 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (0.5.1)\nRequirement already satisfied: joblib in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (1.1.0)\nRequirement already satisfied: pandas>=0.21 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (1.3.3)\nRequirement already satisfied: anndata>=0.7.4 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (0.7.6)\nRequirement already satisfied: patsy in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (0.5.2)\nRequirement already satisfied: sinfo in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (0.3.4)\nRequirement already satisfied: tqdm in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (4.62.3)\nRequirement already satisfied: seaborn in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (0.11.2)\nRequirement already satisfied: packaging in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (21.0)\nRequirement already satisfied: matplotlib>=3.1.2 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (3.4.3)\nRequirement already satisfied: statsmodels>=0.10.0rc2 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (0.13.0)\nRequirement already satisfied: scipy>=1.4 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (1.7.1)\nRequirement already satisfied: tables in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (3.6.1)\nRequirement already satisfied: networkx>=2.3 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scanpy) (2.6.3)\nRequirement already satisfied: xlrd<2.0 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from anndata>=0.7.4->scanpy) (1.2.0)\nRequirement already satisfied: python-dateutil>=2.7 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from matplotlib>=3.1.2->scanpy) (2.8.2)\nRequirement already satisfied: cycler>=0.10 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from matplotlib>=3.1.2->scanpy) (0.10.0)\nRequirement already satisfied: pyparsing>=2.2.1 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from matplotlib>=3.1.2->scanpy) (2.4.7)\nRequirement already satisfied: kiwisolver>=1.0.1 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from 
matplotlib>=3.1.2->scanpy) (1.3.2)\nRequirement already satisfied: pillow>=6.2.0 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from matplotlib>=3.1.2->scanpy) (8.3.2)\nRequirement already satisfied: six in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from cycler>=0.10->matplotlib>=3.1.2->scanpy) (1.16.0)\nRequirement already satisfied: setuptools in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from numba>=0.41.0->scanpy) (58.0.4)\nRequirement already satisfied: llvmlite<0.38,>=0.37.0rc1 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from numba>=0.41.0->scanpy) (0.37.0)\nRequirement already satisfied: pytz>=2017.3 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from pandas>=0.21->scanpy) (2021.3)\nRequirement already satisfied: threadpoolctl>=2.0.0 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from scikit-learn>=0.22->scanpy) (3.0.0)\nRequirement already satisfied: pynndescent>=0.5 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from umap-learn>=0.3.10->scanpy) (0.5.5)\nRequirement already satisfied: stdlib-list in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from sinfo->scanpy) (0.8.0)\nRequirement already satisfied: numexpr>=2.6.2 in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from tables->scanpy) (2.7.3)\nRequirement already satisfied: colorama in c:\\users\\freshskates\\.conda\\envs\\ml\\lib\\site-packages (from tqdm->scanpy) (0.4.4)\n" ] ], [ [ "RNA sequencing\n--\n\nRNA sequencing allows us to quantify the extent to which each gene is active in a sample. When a gene is active, its DNA is transcribed into mRNA and then translated into protein. With RNA sequencing, we are counting how frequent mRNAs for each gene occur in a sample. Genes that are more active will have higher counts, while genes that are not made into mRNA will have 0 counts.\n\nData\n--\n\nThe code below will download the data for you, and organize it into a data frame, where:\n- every row is a different gene\n- every column is a different sample. \n - We have 6 samples, 3 of T cells (called \"CD4 T cells\" and B cells (\"B cells\").\n- every value is the number of reads from each gene in each sample. 
\n - Note: the values have been normalized to be comparable between samples.", "_____no_output_____" ] ], [ [ "import scanpy as sc\r\ndef prep_data():\r\n adata=sc.datasets.pbmc3k_processed()\r\n counts=pd.DataFrame(np.expm1(adata.raw.X.toarray()),\r\n index=adata.raw.obs_names,\r\n columns=adata.raw.var_names)\r\n \r\n #make 3 reps T-cells and 3 reps B-cells\r\n cells_per_bulk=100\r\n celltype='CD4 T cells'\r\n cells=adata.obs_names[adata.obs['louvain']==celltype]\r\n bulks=pd.DataFrame(columns=[celltype+'.rep1',celltype+'.rep2',celltype+'.rep3'],\r\n index=adata.raw.var_names)\r\n\r\n for i in range(3):\r\n cells_here=cells[(i*100):((i+1)*100)]\r\n bulks[celltype+'.rep'+str(i+1)]=list(counts.loc[cells_here,:].sum(axis=0))\r\n bulk_t=bulks\r\n\r\n celltype='B cells'\r\n cells=adata.obs_names[adata.obs['louvain']==celltype]\r\n bulks=pd.DataFrame(columns=[celltype+'.rep1',celltype+'.rep2',celltype+'.rep3'],\r\n index=adata.raw.var_names)\r\n\r\n for i in range(3):\r\n cells_here=cells[(i*100):((i+1)*100)]\r\n bulks[celltype+'.rep'+str(i+1)]=list(counts.loc[cells_here,:].sum(axis=0))\r\n\r\n bulks=pd.concat([bulk_t,bulks],axis=1)\r\n bulks=bulks.sort_values(by=bulks.columns[0],ascending=False)\r\n return(bulks)\r\n\r\ndata=prep_data()\r\nprint(data.head())\r\n\r\nprint(\"min: \", data.min())\r\nprint(\"max: \", data.max())\r\n", " CD4 T cells.rep1 CD4 T cells.rep2 CD4 T cells.rep3 B cells.rep1 \\\nindex \nMALAT1 8303.0 7334.0 7697.0 5246.0 \nB2M 4493.0 4675.0 4546.0 2861.0 \nTMSB4X 4198.0 4297.0 3932.0 2551.0 \nRPL10 3615.0 3565.0 3965.0 3163.0 \nRPL13 3501.0 3556.0 3679.0 2997.0 \n\n B cells.rep2 B cells.rep3 \nindex \nMALAT1 5336.0 4950.0 \nB2M 2844.0 2796.0 \nTMSB4X 2066.0 2276.0 \nRPL10 2830.0 2753.0 \nRPL13 2636.0 2506.0 \nmin: CD4 T cells.rep1 0.0\nCD4 T cells.rep2 0.0\nCD4 T cells.rep3 0.0\nB cells.rep1 0.0\nB cells.rep2 0.0\nB cells.rep3 0.0\ndtype: float64\nmax: CD4 T cells.rep1 8303.0\nCD4 T cells.rep2 7334.0\nCD4 T cells.rep3 7697.0\nB cells.rep1 5246.0\nB cells.rep2 5336.0\nB cells.rep3 4950.0\ndtype: float64\n" ] ], [ [ "**Let's explore the dataset**\n\n**(1 pt)** What are the names of the samples?\n\n\n\n**(2 pts)** What is the highest recorded value? What is the lowest?\n\n", "_____no_output_____" ], [ "#write code to answer the questions here\r\n\r\n1) \r\nSample names are \r\n- CD4 T cells.rep1, CD4 T cells.rep2, CD4 T cells.rep3, \r\n- B cells.rep1, B cells.rep2, B cells.rep3 \r\n\r\n2) \r\n\r\n- The highest recorded value:\r\n**max: CD4 T cells.rep1 8303.0**\r\n\r\n- The lowest recorded value:\r\n**min: CD4 T cells.rep1 0.0**", "_____no_output_____" ], [ "**Exploring the data**\r\n\r\nOne gene that is different between our 2 cell types is IL7R. \r\n\r\n**(1 pt)** Plot the distribution of the IL7R gene in the 2 conditions. 
Which cell type (CD4 T cells or B cells) has the higher level of this gene?\r\n\r\n\r\n\r\n**(1 pt)** How many samples do we have for each condition?\r\n4) ", "_____no_output_____" ], [ "# Answers\r\n3) \r\n- CD4 T has a higher level of this gene, it can be seen in the graph plotted\r\n\r\n4) \r\n- Three samples for each condition\r\n\r\nFor CD4 T Cells: \r\n- CD4 T cells rep1\r\n- CD4 T cells rep2\r\n- CD4 T cells rep3\r\n\r\nFor B Cells:\r\n- B cells rep1 \r\n- B cells rep2 \r\n- B cells rep3 \r\n", "_____no_output_____" ] ], [ [ "#inspect the data \r\nGENE='IL7R'\r\nlong_data=pd.DataFrame({GENE:data.loc[GENE,:],\r\n 'condition':[x.split('.')[0] for x in data.columns]})\r\nprint(long_data)\r\n\r\nsns.catplot(data=long_data,x='condition', y=GENE)", " IL7R condition\nCD4 T cells.rep1 175.0 CD4 T cells\nCD4 T cells.rep2 128.0 CD4 T cells\nCD4 T cells.rep3 146.0 CD4 T cells\nB cells.rep1 13.0 B cells\nB cells.rep2 10.0 B cells\nB cells.rep3 20.0 B cells\n" ] ], [ [ "**Two-sample t-test for one gene across 2 conditions**\r\n\r\nWe are now going to check whether the gene IL7R is differentially active in CD4 T cells vs B cells. \r\n\r\n**(1 pt)** What is the null hypothesis?\r\n\r\n\r\n**(1 pt)** Based on your plot of the gene in the two conditions, and the fact that there looks like there might be a difference, what do you expect the sign of the t-statistic to be (CD4 T cells vs B cells)?\r\n", "_____no_output_____" ], [ "We are going to use the function ttest_ind to perform our t-test. You can read about it here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_ind.html.", "_____no_output_____" ], [ "\r\n\r\n**(1 pt)** What is the t-statistic?\r\n\r\n\r\n\r\n**(1 pt)** What is the p-value?\r\n\r\n\r\n\r\n\r\n**(1 pt)** Describe in your own words what the p-value means.\r\n\r\n\r\n**(1 pt)** Is the p-value significant at alpha = 0.05?\r\n", "_____no_output_____" ], [ "# Answers\r\n5) \r\n\r\n- The graph is interesting, at first glance it seems like IL7R is not differentially active in CD4 T cells vs B cells \r\n- they are similar \r\n\r\n6) \r\n\r\n- If the sign is positive, then reject the null hypothesis\r\n\r\n--- \r\n\r\n7) \r\n- t statistic: 9.66\r\n\r\n<br>\r\ntest statistic tells us how much our sample mean deviates from the null hypothesis mean\r\n<br>\r\nT Statistic is the value calculated when you replace the population std with the sd(sample standard)\r\n\r\n8) \r\n- p-value: 0.00064<br>\r\n\r\n\r\n9) \r\n- the p valueis the porbability / likelihood of the null hypothesis, since it was rejected it should be low. 
the smaller the p value, the more \"wrong\" the null hypothesis it is\r\n\r\n10) \r\n- p value != alpha\r\n- 0.00064 != 0.05\r\n- null hypothesis rejected because of that, therefore making the P value significant\r\n ", "_____no_output_____" ] ], [ [ "#pick 1 gene, do 1 t-test\r\nGENE='IL7R'\r\nCOND1=['CD4 T cells.rep' + str(x+1) for x in range(3)]\r\nCOND2=['B cells.rep' + str(x+1) for x in range(3)]\r\n\r\n#plot gene across samples\r\n\r\n#t-test\r\nfrom scipy.stats import ttest_ind\r\nt_stat,pvalue=ttest_ind(data.loc[GENE,COND1],data.loc[GENE,COND2])\r\nprint('t statistic',t_stat.round(2))\r\nprint('p-value',pvalue.round(5))", "t statistic 9.66\np-value 0.00064\n" ] ], [ [ "**Two-sample t-tests for each gene across 2 conditions**\n\nWe are now going to repeat our analysis from before for all genes in our dataset.\n\n**(1 pt)** How many genes are present in our dataset?\n\n", "_____no_output_____" ], [ "#Answers\r\n\r\n11) \r\n- 13714 genes present in the dataset, displayed with display(results)", "_____no_output_____" ] ], [ [ "from IPython.display import display\r\n\r\n#all genes t-tests\r\nPSEUDOCOUNT=1\r\nresults=pd.DataFrame(index=data.index,\r\n columns=['t','p','lfc'])\r\nfor gene in data.index:\r\n t_stat,pvalue=ttest_ind(data.loc[gene,COND1],data.loc[gene,COND2])\r\n lfc=np.log2((data.loc[gene,COND1].mean()+PSEUDOCOUNT)/(data.loc[gene,COND2].mean()+PSEUDOCOUNT))\r\n results.loc[gene,'t']=t_stat\r\n results.loc[gene,'p']=pvalue\r\n results.loc[gene,'lfc']=lfc\r\n", "_____no_output_____" ] ], [ [ "**Ranking discoveries by either significance or fold change**\n\nFor each gene, we have obtained:\n- a t-statistic\n- a p-value for the difference between the 2 conditions\n- a log2 fold change between CD4 T cells and B cells\n\nWe can inspect how fold changes relate to the significance of the differences. \n\n**(1 pt)** What do you expect the relationship to be between significance/p-values and fold changes?\n\n\n \n", "_____no_output_____" ], [ "#Answers\r\n\r\n12) Fold change has a correlation p value, the bigger the fold change from 0 the bigger p value", "_____no_output_____" ] ], [ [ "#volcano plot\r\n###### \r\nresults['p']=results['p'].fillna(1)\r\nPS2=1e-7\r\nplt.scatter(results['lfc'],-np.log10(results['p']+PS2),s=5,alpha=0.5,color='black')\r\nplt.xlabel('Log2 fold change (CD4 T cells/B cells)')\r\nplt.ylabel('-log10(p-value)')\r\nplt.show()\r\ndisplay(results)", "_____no_output_____" ] ], [ [ "**Multiple testing correction**\n\nNow, we will explore how the number of differentially active genes differs depending on how we correct for multiple tests.\n\n**(1 pt)** How many genes pass the significance level of 0.05, without performing any correction for multiple testing?\n\n", "_____no_output_____" ], [ "#Answers\r\n\r\n13)\r\n- there are 1607 genes that pass the significance level of 0.05\r\n", "_____no_output_____" ] ], [ [ "ALPHA=0.05\r\nprint((results['p']<=ALPHA).sum())", "1607\n" ] ], [ [ "We will use a function that adjusts our p-values using different methods, called \"multipletests\". You can read about it here: https://www.statsmodels.org/dev/generated/statsmodels.stats.multitest.multipletests.html\n\nWe will use the following settings:\n- for Bonferroni correction, we set method='bonferroni'. This will multiply our p-values by the number of tests we did. 
If the resulting values are greated than 1 they will be clipped to 1.\n- for Benjamini-Hochberg correction, we set method='fdr_bh'", "_____no_output_____" ], [ "**(2 pts)** How many genes pass the significance level of 0.05, after correcting for multiple testing using the Bonferroni method? What is the revised p-value threshold?\n\n**(1 pt)** Would the gene we tested before, IL7R, pass this threshold?\n\n", "_____no_output_____" ], [ "#Answers\r\n\r\n14) \r\n\r\n\r\n- 63 genes pass the significance level of 0.05(after correcting multiple testing using the bonferroni method)\r\n- new p value threshold: 0.05/13714 = 3.6 * 10^-6\r\n\r\n- uses 13714 - alpha = alpha / k \r\n\r\n15)\r\n\r\n- Yes it would, with the corrected p-value greater than 1 \r\n\r\n", "_____no_output_____" ] ], [ [ "#multiple testing correction \r\n\r\n#bonferroni\r\nfrom statsmodels.stats.multitest import multipletests\r\nresults['p.adj.bonferroni']=multipletests(results['p'], method='bonferroni')[1]\r\n\r\nFDR=ALPHA\r\nplt.hist(results['p'],100)\r\nplt.axvline(x=FDR,color='red',linestyle='--')\r\nplt.xlabel('Unadjusted p-values')\r\nplt.ylabel('Number of genes')\r\nplt.show()\r\nplt.hist(results['p.adj.bonferroni'],100)\r\n#plt.ylim(0,200)\r\nplt.axvline(x=FDR,color='red',linestyle='--')\r\nplt.xlabel('P-values (Bonferroni corrected)')\r\nplt.ylabel('Number of genes')\r\nplt.show()\r\nplt.show()\r\n\r\nprint('DE Bonferroni',(results['p.adj.bonferroni']<=FDR).sum())", "_____no_output_____" ] ], [ [ "**(1 pt)** How many genes pass the significance level of 0.05, after correcting for multiple testing using the Benjamini-Hochberg method? \n\n", "_____no_output_____" ], [ "#Answers\r\n\r\n16)\r\n- 220", "_____no_output_____" ] ], [ [ "results['p.adj.bh']=multipletests(results['p'], method='fdr_bh')[1]\r\n\r\nFDR=0.05\r\nplt.hist(results['p'],100)\r\nplt.axvline(x=FDR,color='red',linestyle='--')\r\nplt.xlabel('Unadjusted p-values')\r\nplt.ylabel('Number of genes')\r\nplt.show()\r\nplt.hist(results['p.adj.bh'],100)\r\nplt.ylim(0,2000)\r\nplt.axvline(x=FDR,color='red',linestyle='--')\r\nplt.xlabel('P-values (Benjamini-Hochberg corrected)')\r\nplt.ylabel('Number of genes')\r\nplt.show()\r\n\r\nprint('DE BH',(results['p.adj.bh']<=FDR).sum())", "_____no_output_____" ] ], [ [ "**(1 pt)** Which multiple testing correction is the most stringent? \n\n\nFinally, let's look at our results. Print the significant differential genes and look up a few on the internet.", "_____no_output_____" ], [ "#Answers\r\n\r\n17)\r\n- Bonferroni, and this is because the corrected p values resulted in values of 1 or greater, so it was limited to 1", "_____no_output_____" ] ], [ [ "results.loc[results['p.adj.bonferroni']<=FDR,:].sort_values(by='lfc')", "_____no_output_____" ] ], [ [ "For example, CD7 is a gene found on T cells, whereas HLA genes are found on B cells.", "_____no_output_____" ] ] ]
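As a companion to the multiple-testing discussion above, here is a minimal hand-rolled sketch of the Benjamini-Hochberg adjustment. It is only a sanity check against `multipletests(method='fdr_bh')`, not a replacement for it; the `results` DataFrame and its `p` column are assumed to exist exactly as defined in this notebook, and numpy is the only dependency.

```python
# Hand-rolled Benjamini-Hochberg adjustment, for intuition only.
# Assumes the 'results' DataFrame from this notebook (column 'p').
import numpy as np

def bh_adjust(pvalues):
    """Return Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(pvalues, dtype=float)
    m = len(p)
    order = np.argsort(p)                         # sort p-values, smallest first
    ranked = p[order] * m / np.arange(1, m + 1)   # p_(i) * m / i
    # enforce monotonicity from the largest rank downwards, then cap at 1
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    adjusted = np.minimum(adjusted, 1.0)
    # return the adjusted values in the original gene order
    out = np.empty(m)
    out[order] = adjusted
    return out

# q = bh_adjust(results['p'])
# (q <= 0.05).sum()   # should agree with the count reported by multipletests above
```

If the two counts agree, you can be fairly confident you understand what the library call is doing under the hood.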
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
d023b8a82c989009828292c380c8ddce26c68761
26,223
ipynb
Jupyter Notebook
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
3299a62165027c7844e4574260dca6512e0369a0
[ "Apache-2.0" ]
1
2020-10-28T07:54:38.000Z
2020-10-28T07:54:38.000Z
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
3299a62165027c7844e4574260dca6512e0369a0
[ "Apache-2.0" ]
null
null
null
docs/tutorials/6_reinforce_tutorial.ipynb
Zuu97/agents
3299a62165027c7844e4574260dca6512e0369a0
[ "Apache-2.0" ]
null
null
null
33.879845
504
0.541509
[ [ [ "##### Copyright 2018 The TF-Agents Authors.", "_____no_output_____" ] ], [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# REINFORCE agent\n\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/agents/tutorials/6_reinforce_tutorial\">\n <img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />\n View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/6_reinforce_tutorial.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/agents/blob/master/docs/tutorials/6_reinforce_tutorial.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/6_reinforce_tutorial.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>", "_____no_output_____" ], [ "## Introduction", "_____no_output_____" ], [ "This example shows how to train a [REINFORCE](http://www-anw.cs.umass.edu/~barto/courses/cs687/williams92simple.pdf) agent on the Cartpole environment using the TF-Agents library, similar to the [DQN tutorial](1_dqn_tutorial.ipynb).\n\n![Cartpole environment](images/cartpole.png)\n\nWe will walk you through all the components in a Reinforcement Learning (RL) pipeline for training, evaluation and data collection.\n", "_____no_output_____" ], [ "## Setup", "_____no_output_____" ], [ "If you haven't installed the following dependencies, run:", "_____no_output_____" ] ], [ [ "!sudo apt-get install -y xvfb ffmpeg\n!pip install gym\n!pip install 'imageio==2.4.0'\n!pip install PILLOW\n!pip install 'pyglet==1.3.2'\n!pip install pyvirtualdisplay\n!pip install tf-agents", "_____no_output_____" ], [ "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport base64\nimport imageio\nimport IPython\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport PIL.Image\nimport pyvirtualdisplay\n\nimport tensorflow as tf\n\nfrom tf_agents.agents.reinforce import reinforce_agent\nfrom tf_agents.drivers import dynamic_step_driver\nfrom tf_agents.environments import suite_gym\nfrom tf_agents.environments import tf_py_environment\nfrom tf_agents.eval import metric_utils\nfrom tf_agents.metrics import tf_metrics\nfrom tf_agents.networks import actor_distribution_network\nfrom tf_agents.replay_buffers import tf_uniform_replay_buffer\nfrom tf_agents.trajectories import trajectory\nfrom tf_agents.utils import common\n\ntf.compat.v1.enable_v2_behavior()\n\n\n# Set up a virtual display for rendering OpenAI gym environments.\ndisplay = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start()", 
"_____no_output_____" ] ], [ [ "## Hyperparameters", "_____no_output_____" ] ], [ [ "env_name = \"CartPole-v0\" # @param {type:\"string\"}\nnum_iterations = 250 # @param {type:\"integer\"}\ncollect_episodes_per_iteration = 2 # @param {type:\"integer\"}\nreplay_buffer_capacity = 2000 # @param {type:\"integer\"}\n\nfc_layer_params = (100,)\n\nlearning_rate = 1e-3 # @param {type:\"number\"}\nlog_interval = 25 # @param {type:\"integer\"}\nnum_eval_episodes = 10 # @param {type:\"integer\"}\neval_interval = 50 # @param {type:\"integer\"}", "_____no_output_____" ] ], [ [ "## Environment\n\nEnvironments in RL represent the task or problem that we are trying to solve. Standard environments can be easily created in TF-Agents using `suites`. We have different `suites` for loading environments from sources such as the OpenAI Gym, Atari, DM Control, etc., given a string environment name.\n\nNow let us load the CartPole environment from the OpenAI Gym suite.", "_____no_output_____" ] ], [ [ "env = suite_gym.load(env_name)", "_____no_output_____" ] ], [ [ "We can render this environment to see how it looks. A free-swinging pole is attached to a cart. The goal is to move the cart right or left in order to keep the pole pointing up.", "_____no_output_____" ] ], [ [ "#@test {\"skip\": true}\nenv.reset()\nPIL.Image.fromarray(env.render())", "_____no_output_____" ] ], [ [ "The `time_step = environment.step(action)` statement takes `action` in the environment. The `TimeStep` tuple returned contains the environment's next observation and reward for that action. The `time_step_spec()` and `action_spec()` methods in the environment return the specifications (types, shapes, bounds) of the `time_step` and `action` respectively.", "_____no_output_____" ] ], [ [ "print('Observation Spec:')\nprint(env.time_step_spec().observation)\nprint('Action Spec:')\nprint(env.action_spec())", "_____no_output_____" ] ], [ [ "So, we see that observation is an array of 4 floats: the position and velocity of the cart, and the angular position and velocity of the pole. Since only two actions are possible (move left or move right), the `action_spec` is a scalar where 0 means \"move left\" and 1 means \"move right.\"", "_____no_output_____" ] ], [ [ "time_step = env.reset()\nprint('Time step:')\nprint(time_step)\n\naction = np.array(1, dtype=np.int32)\n\nnext_time_step = env.step(action)\nprint('Next time step:')\nprint(next_time_step)", "_____no_output_____" ] ], [ [ "Usually we create two environments: one for training and one for evaluation. Most environments are written in pure python, but they can be easily converted to TensorFlow using the `TFPyEnvironment` wrapper. The original environment's API uses numpy arrays, the `TFPyEnvironment` converts these to/from `Tensors` for you to more easily interact with TensorFlow policies and agents.\n", "_____no_output_____" ] ], [ [ "train_py_env = suite_gym.load(env_name)\neval_py_env = suite_gym.load(env_name)\n\ntrain_env = tf_py_environment.TFPyEnvironment(train_py_env)\neval_env = tf_py_environment.TFPyEnvironment(eval_py_env)", "_____no_output_____" ] ], [ [ "## Agent\n\nThe algorithm that we use to solve an RL problem is represented as an `Agent`. 
In addition to the REINFORCE agent, TF-Agents provides standard implementations of a variety of `Agents` such as [DQN](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf), [DDPG](https://arxiv.org/pdf/1509.02971.pdf), [TD3](https://arxiv.org/pdf/1802.09477.pdf), [PPO](https://arxiv.org/abs/1707.06347) and [SAC](https://arxiv.org/abs/1801.01290).\n\nTo create a REINFORCE Agent, we first need an `Actor Network` that can learn to predict the action given an observation from the environment.\n\nWe can easily create an `Actor Network` using the specs of the observations and actions. We can specify the layers in the network which, in this example, is the `fc_layer_params` argument set to a tuple of `ints` representing the sizes of each hidden layer (see the Hyperparameters section above).\n", "_____no_output_____" ] ], [ [ "actor_net = actor_distribution_network.ActorDistributionNetwork(\n train_env.observation_spec(),\n train_env.action_spec(),\n fc_layer_params=fc_layer_params)", "_____no_output_____" ] ], [ [ "We also need an `optimizer` to train the network we just created, and a `train_step_counter` variable to keep track of how many times the network was updated.\n", "_____no_output_____" ] ], [ [ "optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)\n\ntrain_step_counter = tf.compat.v2.Variable(0)\n\ntf_agent = reinforce_agent.ReinforceAgent(\n train_env.time_step_spec(),\n train_env.action_spec(),\n actor_network=actor_net,\n optimizer=optimizer,\n normalize_returns=True,\n train_step_counter=train_step_counter)\ntf_agent.initialize()", "_____no_output_____" ] ], [ [ "## Policies\n\nIn TF-Agents, policies represent the standard notion of policies in RL: given a `time_step` produce an action or a distribution over actions. The main method is `policy_step = policy.step(time_step)` where `policy_step` is a named tuple `PolicyStep(action, state, info)`. The `policy_step.action` is the `action` to be applied to the environment, `state` represents the state for stateful (RNN) policies and `info` may contain auxiliary information such as log probabilities of the actions.\n\nAgents contain two policies: the main policy that is used for evaluation/deployment (agent.policy) and another policy that is used for data collection (agent.collect_policy).", "_____no_output_____" ] ], [ [ "eval_policy = tf_agent.policy\ncollect_policy = tf_agent.collect_policy", "_____no_output_____" ] ], [ [ "## Metrics and Evaluation\n\nThe most common metric used to evaluate a policy is the average return. The return is the sum of rewards obtained while running a policy in an environment for an episode, and we usually average this over a few episodes. We can compute the average return metric as follows.\n", "_____no_output_____" ] ], [ [ "#@test {\"skip\": true}\ndef compute_avg_return(environment, policy, num_episodes=10):\n\n total_return = 0.0\n for _ in range(num_episodes):\n\n time_step = environment.reset()\n episode_return = 0.0\n\n while not time_step.is_last():\n action_step = policy.action(time_step)\n time_step = environment.step(action_step.action)\n episode_return += time_step.reward\n total_return += episode_return\n\n avg_return = total_return / num_episodes\n return avg_return.numpy()[0]\n\n\n# Please also see the metrics module for standard implementations of different\n# metrics.", "_____no_output_____" ] ], [ [ "## Replay Buffer\n\nIn order to keep track of the data collected from the environment, we will use the TFUniformReplayBuffer. 
This replay buffer is constructed using specs describing the tensors that are to be stored, which can be obtained from the agent using `tf_agent.collect_data_spec`.", "_____no_output_____" ] ], [ [ "replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(\n data_spec=tf_agent.collect_data_spec,\n batch_size=train_env.batch_size,\n max_length=replay_buffer_capacity)", "_____no_output_____" ] ], [ [ "For most agents, the `collect_data_spec` is a `Trajectory` named tuple containing the observation, action, reward etc.", "_____no_output_____" ], [ "## Data Collection\n\nAs REINFORCE learns from whole episodes, we define a function to collect an episode using the given data collection policy and save the data (observations, actions, rewards etc.) as trajectories in the replay buffer.", "_____no_output_____" ] ], [ [ "#@test {\"skip\": true}\n\ndef collect_episode(environment, policy, num_episodes):\n\n episode_counter = 0\n environment.reset()\n\n while episode_counter < num_episodes:\n time_step = environment.current_time_step()\n action_step = policy.action(time_step)\n next_time_step = environment.step(action_step.action)\n traj = trajectory.from_transition(time_step, action_step, next_time_step)\n\n # Add trajectory to the replay buffer\n replay_buffer.add_batch(traj)\n\n if traj.is_boundary():\n episode_counter += 1\n\n\n# This loop is so common in RL, that we provide standard implementations of\n# these. For more details see the drivers module.", "_____no_output_____" ] ], [ [ "## Training the agent\n\nThe training loop involves both collecting data from the environment and optimizing the agent's networks. Along the way, we will occasionally evaluate the agent's policy to see how we are doing.\n\nThe following will take ~3 minutes to run.", "_____no_output_____" ] ], [ [ "#@test {\"skip\": true}\ntry:\n %%time\nexcept:\n pass\n\n# (Optional) Optimize by wrapping some of the code in a graph using TF function.\ntf_agent.train = common.function(tf_agent.train)\n\n# Reset the train step\ntf_agent.train_step_counter.assign(0)\n\n# Evaluate the agent's policy once before training.\navg_return = compute_avg_return(eval_env, tf_agent.policy, num_eval_episodes)\nreturns = [avg_return]\n\nfor _ in range(num_iterations):\n\n # Collect a few episodes using collect_policy and save to the replay buffer.\n collect_episode(\n train_env, tf_agent.collect_policy, collect_episodes_per_iteration)\n\n # Use data from the buffer and update the agent's network.\n experience = replay_buffer.gather_all()\n train_loss = tf_agent.train(experience)\n replay_buffer.clear()\n\n step = tf_agent.train_step_counter.numpy()\n\n if step % log_interval == 0:\n print('step = {0}: loss = {1}'.format(step, train_loss.loss))\n\n if step % eval_interval == 0:\n avg_return = compute_avg_return(eval_env, tf_agent.policy, num_eval_episodes)\n print('step = {0}: Average Return = {1}'.format(step, avg_return))\n returns.append(avg_return)", "_____no_output_____" ] ], [ [ "## Visualization\n", "_____no_output_____" ], [ "### Plots\n\nWe can plot return vs global steps to see the performance of our agent. 
In `Cartpole-v0`, the environment gives a reward of +1 for every time step the pole stays up, and since the maximum number of steps is 200, the maximum possible return is also 200.", "_____no_output_____" ] ], [ [ "#@test {\"skip\": true}\n\nsteps = range(0, num_iterations + 1, eval_interval)\nplt.plot(steps, returns)\nplt.ylabel('Average Return')\nplt.xlabel('Step')\nplt.ylim(top=250)", "_____no_output_____" ] ], [ [ "### Videos", "_____no_output_____" ], [ "It is helpful to visualize the performance of an agent by rendering the environment at each step. Before we do that, let us first create a function to embed videos in this colab.", "_____no_output_____" ] ], [ [ "def embed_mp4(filename):\n \"\"\"Embeds an mp4 file in the notebook.\"\"\"\n video = open(filename,'rb').read()\n b64 = base64.b64encode(video)\n tag = '''\n <video width=\"640\" height=\"480\" controls>\n <source src=\"data:video/mp4;base64,{0}\" type=\"video/mp4\">\n Your browser does not support the video tag.\n </video>'''.format(b64.decode())\n\n return IPython.display.HTML(tag)", "_____no_output_____" ] ], [ [ "The following code visualizes the agent's policy for a few episodes:", "_____no_output_____" ] ], [ [ "num_episodes = 3\nvideo_filename = 'imageio.mp4'\nwith imageio.get_writer(video_filename, fps=60) as video:\n for _ in range(num_episodes):\n time_step = eval_env.reset()\n video.append_data(eval_py_env.render())\n while not time_step.is_last():\n action_step = tf_agent.policy.action(time_step)\n time_step = eval_env.step(action_step.action)\n video.append_data(eval_py_env.render())\n\nembed_mp4(video_filename)", "_____no_output_____" ] ] ]
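To make the training step above a little less of a black box, here is a rough numpy-only sketch of the quantity a REINFORCE agent optimizes for a single episode: the discounted returns G_t and the loss -sum_t log pi(a_t|s_t) * G_t. The per-step rewards and action log-probabilities below are invented for illustration; this is not the TF-Agents internal implementation, just the textbook update it approximates.

```python
# Minimal numpy sketch of the REINFORCE objective (illustrative values only).
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """G_t = r_t + gamma * G_{t+1}, computed backwards over one episode."""
    returns = np.zeros(len(rewards), dtype=float)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# one made-up CartPole-like episode: reward of 1 per step, plus the
# log-probability the current policy assigned to each action it took
rewards = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
log_probs = np.log(np.array([0.6, 0.55, 0.7, 0.5, 0.65]))

returns = discounted_returns(rewards)
# normalize_returns=True in ReinforceAgent corresponds to standardizing G_t
norm_returns = (returns - returns.mean()) / (returns.std() + 1e-8)

# REINFORCE loss: gradient of this w.r.t. the policy parameters gives the
# policy-gradient update that tf_agent.train applies to the actor network
loss = -np.sum(log_probs * norm_returns)
print(loss)
```

In the real agent the log-probabilities come from the `ActorDistributionNetwork` and the gradient step is taken by the Adam optimizer configured earlier.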
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d023c70bab97d347c794117f3a84dfd645fbf486
16,266
ipynb
Jupyter Notebook
src/main/paradox/docs/tutorial/notebooks/Query_Sparql_View.ipynb
clifle/nexus
53cbe57349c6267ccd3365e0879e7c9912268223
[ "Apache-2.0" ]
null
null
null
src/main/paradox/docs/tutorial/notebooks/Query_Sparql_View.ipynb
clifle/nexus
53cbe57349c6267ccd3365e0879e7c9912268223
[ "Apache-2.0" ]
null
null
null
src/main/paradox/docs/tutorial/notebooks/Query_Sparql_View.ipynb
clifle/nexus
53cbe57349c6267ccd3365e0879e7c9912268223
[ "Apache-2.0" ]
null
null
null
28.94306
371
0.572728
[ [ [ "## Querying Nexus knowledge graph using SPARQL\n\nThe goal of this notebook is to learn the basics of SPARQL. Only the READ part of SPARQL will be exposed.\n", "_____no_output_____" ], [ "## Prerequisites\n\nThis notebook assumes you've created a project within the AWS deployment of Nexus. If not follow the Blue Brain Nexus [Quick Start tutorial](https://bluebrain.github.io/nexus/docs/tutorial/getting-started/quick-start/index.html).", "_____no_output_____" ], [ "## Overview\n\nYou'll work through the following steps:\n\n1. Create a sparql wrapper around your project's SparqlView\n2. Explore and navigate data using the SPARQL query language\n", "_____no_output_____" ], [ "## Step 1: Create a sparql wrapper around your project's SparqlView", "_____no_output_____" ], [ "Every project in Blue Brain Nexus comes with a SparqlView enabling to navigate the data as a graph and to query it using the W3C SPARQL Language. The address of such SparqlView is https://nexus-sandbox.io/v1/views/tutorialnexus/\\$PROJECTLABEL/graph/sparql for a project withe label \\$PROJECTLABEL. The address of a SparqlView is also called a **SPARQL endpoint**.", "_____no_output_____" ] ], [ [ "#Configuration for the Nexus deployment\nnexus_deployment = \"https://nexus-sandbox.io/v1\"\n\ntoken= \"your token here\"\n\norg =\"tutorialnexus\"\nproject =\"$PROJECTLABEL\"\n\nheaders = {}", "_____no_output_____" ], [ "#Let install sparqlwrapper which a python wrapper around sparql client\n!pip install git+https://github.com/RDFLib/sparqlwrapper", "_____no_output_____" ], [ "# Utility functions to create sparql wrapper around a sparql endpoint\n\nfrom SPARQLWrapper import SPARQLWrapper, JSON, POST, GET, POSTDIRECTLY, CSV\nimport requests\n\n\n\ndef create_sparql_client(sparql_endpoint, http_query_method=POST, result_format= JSON, token=None):\n sparql_client = SPARQLWrapper(sparql_endpoint)\n #sparql_client.addCustomHttpHeader(\"Content-Type\", \"application/sparql-query\")\n if token:\n sparql_client.addCustomHttpHeader(\"Authorization\",\"Bearer {}\".format(token))\n sparql_client.setMethod(http_query_method)\n sparql_client.setReturnFormat(result_format)\n if http_query_method == POST:\n sparql_client.setRequestMethod(POSTDIRECTLY)\n \n return sparql_client", "_____no_output_____" ], [ "# Utility functions\nimport pandas as pd\n\npd.set_option('display.max_colwidth', -1)\n\n# Convert SPARQL results into a Pandas data frame\ndef sparql2dataframe(json_sparql_results):\n cols = json_sparql_results['head']['vars']\n out = []\n for row in json_sparql_results['results']['bindings']:\n item = []\n for c in cols:\n item.append(row.get(c, {}).get('value'))\n out.append(item)\n return pd.DataFrame(out, columns=cols)\n\n# Send a query using a sparql wrapper \ndef query_sparql(query, sparql_client):\n sparql_client.setQuery(query)\n \n\n result_object = sparql_client.query()\n if sparql_client.returnFormat == JSON:\n return result_object._convertJSON()\n return result_object.convert()", "_____no_output_____" ], [ "# Let create a sparql wrapper around the project sparql view\nsparqlview_endpoint = nexus_deployment+\"/views/\"+org+\"/\"+project+\"/graph/sparql\"\nsparqlview_wrapper = create_sparql_client(sparql_endpoint=sparqlview_endpoint, token=token,http_query_method= POST, result_format=JSON)", "_____no_output_____" ] ], [ [ "## Step 2: Explore and navigate data using the SPARQL query language\n", "_____no_output_____" ], [ "Let write our first query.", "_____no_output_____" ] ], [ [ "select_all_query = \"\"\"\nSELECT ?s ?p 
?o\nWHERE\n{\n ?s ?p ?o\n}\nOFFSET 0\nLIMIT 5\n\"\"\"\n\nnexus_results = query_sparql(select_all_query,sparqlview_wrapper)\n\nnexus_df =sparql2dataframe(nexus_results)\nnexus_df.head()", "_____no_output_____" ] ], [ [ "Most SPARQL queries you'll see will have the anotomy above with:\n* a **SELECT** clause that let you select the variables you want to retrieve\n* a **WHERE** clause defining a set of constraints that the variables should satisfy to be retrieved\n* **LIMIT** and **OFFSET** clauses to enable pagination\n* the constraints are usually graph patterns in the form of **triple** (?s for subject, ?p for property and ?o for ?object)", "_____no_output_____" ], [ "Multiple triples can be provided as graph pattern to match but each triple should end with a period. As an example, let retrieve 5 movies (?movie) along with their titles (?title).", "_____no_output_____" ] ], [ [ "movie_with_title = \"\"\"\nPREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/>\nPREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/>\nSelect ?movie ?title\n WHERE {\n ?movie a vocab:Movie.\n ?movie vocab:title ?title.\n} LIMIT 5\n\"\"\"%(org,project)\n\nnexus_results = query_sparql(movie_with_title,sparqlview_wrapper)\n\nnexus_df =sparql2dataframe(nexus_results)\nnexus_df.head()", "_____no_output_____" ] ], [ [ "Note PREFIX clauses. It is way to shorten URIS within a SPARQL query. Without them we would have to use full URI for all properties.\n\nThe ?movie variable is bound to a URI (the internal Nexus id). Let retrieve the movieId just like in the MovieLens csv files for simplicity.", "_____no_output_____" ] ], [ [ "movie_with_title = \"\"\"\nPREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/>\nPREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/>\nSelect ?movieId ?title\n WHERE {\n \n # Select movies\n ?movie a vocab:Movie.\n\n # Select their movieId value\n ?movie vocab:movieId ?movieId.\n \n #\n ?movie vocab:title ?title.\n \n} LIMIT 5\n\"\"\"%(org,project)\n\nnexus_results = query_sparql(movie_with_title,sparqlview_wrapper)\n\nnexus_df =sparql2dataframe(nexus_results)\nnexus_df.head()", "_____no_output_____" ] ], [ [ "In the above query movies are things (or entities) of type vocab:Movie. \nThis is a typical instance query where entities are filtered by their type(s) and then some of their properties are retrieved (here ?title). \n\nLet retrieve everything that is linked (outgoing) to the movies. \nThe * character in the SELECT clause indicates to retreve all variables: ?movie, ?p, ?o", "_____no_output_____" ] ], [ [ "movie_with_properties = \"\"\"\nPREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/>\nPREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/>\nSelect *\n WHERE {\n ?movie a vocab:Movie.\n ?movie ?p ?o.\n} LIMIT 20\n\"\"\"%(org,project)\n\nnexus_results = query_sparql(movie_with_properties,sparqlview_wrapper)\n\nnexus_df =sparql2dataframe(nexus_results)\nnexus_df.head(20)", "_____no_output_____" ] ], [ [ "As a little exercise, write a query retrieving incoming entities to movies. 
You can copy past the query above and modify it.\n\nHints: ?s ?p ?o can be read as: ?o is linked to ?s with an outgoing link.\n\nDo you have results ?", "_____no_output_____" ] ], [ [ "#Your query here\n", "_____no_output_____" ] ], [ [ "Let retrieve the movie ratings", "_____no_output_____" ] ], [ [ "movie_with_properties = \"\"\"\nPREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/>\nPREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/>\nSelect ?userId ?movieId ?rating ?timestamp\n WHERE {\n ?movie a vocab:Movie.\n ?movie vocab:movieId ?movieId.\n \n \n ?ratingNode vocab:movieId ?ratingmovieId.\n ?ratingNode vocab:rating ?rating.\n ?ratingNode vocab:userId ?userId.\n ?ratingNode vocab:timestamp ?timestamp.\n \n # Somehow pandas is movieId as double for rating \n FILTER(xsd:integer(?ratingmovieId) = ?movieId)\n \n} LIMIT 20\n\"\"\"%(org,project)\n\nnexus_results = query_sparql(movie_with_properties,sparqlview_wrapper)\n\nnexus_df =sparql2dataframe(nexus_results)\nnexus_df.head(20)", "_____no_output_____" ] ], [ [ "As a little exercise, write a query retrieving the movie tags along with the user id and timestamp. You can copy and past the query above and modify it.\n", "_____no_output_____" ] ], [ [ "#Your query here\n\n", "_____no_output_____" ] ], [ [ "### Aggregate queries", "_____no_output_____" ], [ "[Aggregates](https://www.w3.org/TR/sparql11-query/#aggregates) apply some operations over a group of solutions.\nAvailable aggregates are: COUNT, SUM, MIN, MAX, AVG, GROUP_CONCAT, and SAMPLE.\n\nWe will not see them all but we'll look at some examples.", "_____no_output_____" ], [ "The next query will compute the average rating score for 'funny' movies.", "_____no_output_____" ] ], [ [ "tag_value = \"funny\"\nmovie_avg_ratings = \"\"\"\nPREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/>\nPREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/>\n\nSelect ( AVG(?ratingvalue) AS ?score)\n WHERE {\n # Select movies\n ?movie a vocab:Movie.\n\n # Select their movieId value\n ?movie vocab:movieId ?movieId.\n\n ?tag vocab:movieId ?movieId.\n ?tag vocab:tag ?tagvalue.\n FILTER(?tagvalue = \"%s\").\n\n # Keep movies with ratings\n ?rating vocab:movieId ?ratingmovidId.\n FILTER(xsd:integer(?ratingmovidId) = xsd:integer(?movieId))\n ?rating vocab:rating ?ratingvalue.\n\n}\n\"\"\" %(org,project,tag_value)\n\nnexus_results = query_sparql(movie_avg_ratings,sparqlview_wrapper)\n\nnexus_df =sparql2dataframe(nexus_results)\ndisplay(nexus_df.head(20))\nnexus_df=nexus_df.astype(float)\n", "_____no_output_____" ] ], [ [ "Retrieve the number of tags per movie. 
Can be a little bit slow depending on the size of your data.", "_____no_output_____" ] ], [ [ "nbr_tags_per_movie = \"\"\"\nPREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/>\nPREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/>\n\nSelect ?title (COUNT(?tagvalue) as ?tagnumber)\n WHERE {\n # Select movies\n ?movie a vocab:Movie.\n # Select their movieId value\n ?movie vocab:movieId ?movieId.\n \n ?tag a vocab:Tag.\n ?tag vocab:movieId ?tagmovieId.\n FILTER(?tagmovieId = ?movieId)\n ?movie vocab:title ?title.\n ?tag vocab:tag ?tagvalue.\n}\n\nGROUP BY ?title \nORDER BY DESC(?tagnumber)\nLIMIT 10\n\"\"\" %(org,project)\n\nnexus_results = query_sparql(nbr_tags_per_movie,sparqlview_wrapper)\n\nnexus_df =sparql2dataframe(nexus_results)\ndisplay(nexus_df.head(20))\n", "_____no_output_____" ], [ "#Let plot the result\nnexus_df.tagnumber = pd.to_numeric(nexus_df.tagnumber)\nnexus_df.plot(x=\"title\",y=\"tagnumber\",kind=\"bar\")\n", "_____no_output_____" ] ], [ [ "The next query will retrieve movies along with users that tagged them separated by a comma", "_____no_output_____" ] ], [ [ "# Group Concat\n\nmovie_tag_users = \"\"\"\nPREFIX vocab: <https://nexus-sandbox.io/v1/vocabs/%s/%s/>\nPREFIX nxv: <https://bluebrain.github.io/nexus/vocabulary/>\n\nSelect ?movieId (group_concat(DISTINCT ?userId;separator=\",\") as ?users)\n WHERE {\n # Select movies\n ?movie a vocab:Movie.\n\n # Select their movieId value\n ?movie vocab:movieId ?movieId.\n\n ?tag vocab:movieId ?movieId.\n ?tag vocab:userId ?userId.\n\n \n}\nGROUP BY ?movieId\nLIMIT 10\n\"\"\"%(org,project)\n\nnexus_results = query_sparql(movie_tag_users,sparqlview_wrapper)\n\nnexus_df =sparql2dataframe(nexus_results)\nnexus_df.head(20)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
d023ccc3403c8915d180cbbadb5ac67f394a74fa
19,621
ipynb
Jupyter Notebook
Boosted Late-Fusion.ipynb
Sakina8/Multimodal-Classification2020
8753ab6be535e59b3b95c4a99eda7b97e4fc5461
[ "MIT" ]
22
2020-07-30T06:53:16.000Z
2022-03-25T19:38:03.000Z
Boosted Late-Fusion.ipynb
Sakina8/Multimodal-Classification2020
8753ab6be535e59b3b95c4a99eda7b97e4fc5461
[ "MIT" ]
1
2020-10-28T14:41:13.000Z
2020-10-28T14:41:13.000Z
Boosted Late-Fusion.ipynb
Sakina8/Multimodal-Classification2020
8753ab6be535e59b3b95c4a99eda7b97e4fc5461
[ "MIT" ]
6
2020-07-30T06:53:36.000Z
2022-03-07T05:07:17.000Z
39.478873
119
0.530299
[ [ [ "import pandas as pd\nimport numpy as np\nfrom tqdm import tqdm\ntqdm.pandas()\n\nimport os, time, datetime\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import roc_auc_score, f1_score, roc_curve, auc\nimport lightgbm as lgb\nimport xgboost as xgb", "_____no_output_____" ], [ "def format_time(elapsed):\n '''\n Takes a time in seconds and returns a string hh:mm:ss\n '''\n # Round to the nearest second.\n elapsed_rounded = int(round((elapsed)))\n \n # Format as hh:mm:ss\n return str(datetime.timedelta(seconds=elapsed_rounded))\n\nclass SigirPreprocess():\n \n def __init__(self, text_data_path):\n self.text_data_path = text_data_path\n self.train = None\n self.dict_code_to_id = {}\n self.dict_id_to_code = {}\n self.list_tags = {}\n self.sentences = []\n self.labels = []\n self.text_col = None\n self.X_test = None\n \n def prepare_data(self ):\n catalog_eng= pd.read_csv(self.text_data_path+\"data/catalog_english_taxonomy.tsv\",sep=\"\\t\")\n X_train= pd.read_csv(self.text_data_path+\"data/X_train.tsv\",sep=\"\\t\")\n Y_train= pd.read_csv(self.text_data_path+\"data/Y_train.tsv\",sep=\"\\t\")\n \n self.list_tags = list(Y_train['Prdtypecode'].unique())\n for i,tag in enumerate(self.list_tags):\n self.dict_code_to_id[tag] = i \n self.dict_id_to_code[i]=tag\n print(self.dict_code_to_id)\n \n Y_train['labels']=Y_train['Prdtypecode'].map(self.dict_code_to_id)\n train=pd.merge(left=X_train,right=Y_train,\n how='left',left_on=['Integer_id','Image_id','Product_id'],\n right_on=['Integer_id','Image_id','Product_id'])\n prod_map=pd.Series(catalog_eng['Top level category'].values,\n index=catalog_eng['Prdtypecode']).to_dict()\n\n train['product'] = train['Prdtypecode'].map(prod_map)\n train['title_len']=train['Title'].progress_apply(lambda x : len(x.split()) if pd.notna(x) else 0)\n train['desc_len']=train['Description'].progress_apply(lambda x : len(x.split()) if pd.notna(x) else 0)\n train['title_desc_len']=train['title_len'] + train['desc_len']\n train.loc[train['Description'].isnull(), 'Description'] = \" \"\n train['title_desc'] = train['Title'] + \" \" + train['Description']\n \n self.train = train\n \n def get_sentences(self, text_col, remove_null_rows=False):\n self.text_col = text_col\n if remove_null_rows==True:\n new_train = self.train[self.train[text_col].notnull()]\n\n else:\n new_train = self.train.copy()\n \n self.sentences = new_train[text_col].values\n self.labels = new_train['labels'].values\n \n def prepare_test(self, text_col, test_data_path, phase=1):\n X_test=pd.read_csv(test_data_path+f\"data/x_test_task1_phase{phase}.tsv\",sep=\"\\t\")\n X_test.loc[X_test['Description'].isnull(), 'Description'] = \" \"\n X_test['title_desc'] = X_test['Title'] + \" \" + X_test['Description']\n self.X_test = X_test\n self.test_sentences = X_test[text_col].values\n ", "_____no_output_____" ], [ "text_col = 'title_desc'\nval_size = 0.1\nrandom_state=2020\nnum_class = 27\ndo_gridsearch = False", "_____no_output_____" ], [ "kwargs = {'add_logits':['cam', 'fla']}\n\n\ncam_path = '/../input/camembert-vec-256m768-10ep/'\nflau_path = '/../input/flaubertlogits2107/' \nres_path = '/../input/resnextfinal/'\ncms_path = '/../input/crossmodal-v0/'\nvca_path = '/../input/vec-concat-9093/'\nvca_path_phase2 = '/../input/predictions-test-phase2-vec-fusion/'\naem_path = '/../input/addition-ensemble-latest/'\n\n\nval_logits_path = {'cam':cam_path + 'validation_set_softmax_logits.npy',\n 'fla':flau_path + 'validation_set_softmax_logits.npy',\n 'res':res_path + 
'Valid_resnext50_32x4d_phase1_softmax_logits.npy',\n 'vca':vca_path + 'softmax_logits_val_9093.npy',\n 'aem':aem_path + 'softmax_logits_val_add.npy'}\n\ntest_logits_path_phase1 = {'cam':cam_path+f'X_test_phase1_softmax_logits.npy',\n 'fla':flau_path + f'X_test_phase1_softmax_logits.npy', \n 'res':res_path + f'Test_resnext50_32x4d_phase1_softmax_logits.npy',\n 'vca':vca_path + f'softmax_logits_test_9093.npy'}\n\ntest_logits_path_phase2 = {'cam':cam_path+f'X_test_phase2_softmax_logits.npy',\n 'fla':flau_path + f'X_test_phase2_softmax_logits.npy', \n 'res':res_path + f'Test_resnext50_32x4d_phase2_softmax_logits.npy',\n 'vca':vca_path_phase2 + f'softmax_logits_test_phase2_9093.npy'}\n \n\n", "_____no_output_____" ], [ "## Get valdation dataset from original train dataset\nPreprocess = SigirPreprocess(\"/../input/textphase1/\")\nPreprocess.prepare_data()\nPreprocess.get_sentences(text_col, True)\n\nfull_data = Preprocess.train\nlabels = Preprocess.labels\nindex = full_data.Integer_id\n\n\ntr_index, val_index, tr_labels, val_labels = train_test_split(index, labels,\n stratify=labels,\n random_state=random_state, \n test_size=val_size)\n\ntrain_data = full_data.loc[tr_index, :]\ntrain_data.reset_index(inplace=True, drop=True)\nval_data = full_data.loc[val_index, :]\nval_data.reset_index(inplace=True, drop=True)\n\nfull_data.loc[val_index, 'sample'] = 'val'\nfull_data['sample'].fillna('train', inplace=True)", "_____no_output_____" ], [ "def preparelogits_df(logit_paths, df=None, val_labels=None, **kwargs):\n ### Prepare and combine Logits data with original validation dataset\n logits_dict = {}\n dfs_dict = {}\n for key, logit_path in logit_paths.items():\n logits_dict[key] = np.load(logit_path)\n \n dfs_dict[key] = pd.DataFrame(logits_dict[key], \n columns=[key + \"_\" + str(i) for i in range(1,28)])\n print(\"Shape of logit arrays: {}\", logits_dict[key].shape)\n \n if kwargs['add_logits']:\n if len(kwargs['add_logits'])>0:\n add_str = '_'.join(kwargs['add_logits'])\n logits_dict[add_str] = logits_dict[kwargs['add_logits'][0]]\n for k in kwargs['add_logits'][1:]:\n logits_dict[add_str] += logits_dict[k]\n logits_dict[add_str] = logits_dict[add_str]/len(kwargs['add_logits'])\n dfs_dict[add_str] = pd.DataFrame(logits_dict[add_str], \n columns=[add_str + \"_\" + str(i) for i in range(1,28)])\n print(\"Shape of logit arrays: {}\", logits_dict[add_str].shape)\n\n\n \n if type(val_labels) == np.ndarray:\n for key,logits in logits_dict.items():\n print(\"\"\"Validation F1 scores for {} logits: {} \"\"\".format(key, \n f1_score(val_labels, np.argmax(logits, axis=1), average='macro')))\n \n \n\n df = pd.concat([df] + list(dfs_dict.values()), axis=1)\n \n return df", "_____no_output_____" ], [ "val_data = preparelogits_df(val_logits_path, df=val_data, \n val_labels=val_labels, **kwargs)", "_____no_output_____" ] ], [ [ "# Model Data Prep", "_____no_output_____" ] ], [ [ "df_log = val_data.copy()\n\nprobas_cols = [\"fla_\" + str(i) for i in range(1,28)] + [\"cam_\" + str(i) for i in range(1,28)] +\\\n[\"res_\" + str(i) for i in range(1,28)] \\\n+ [\"vca_\" + str(i) for i in range(1,28)] \\\n\nX = df_log[probas_cols]\ny = df_log['labels'].values\nX_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.2, random_state=random_state)\n", "_____no_output_____" ], [ "from scipy.stats import randint as sp_randint\nfrom scipy.stats import uniform as sp_uniform\n\nfrom sklearn.model_selection import RandomizedSearchCV, GridSearchCV\nn_HP_points_to_test = 100\n\n\nparam_test ={'num_leaves': 
sp_randint(6, 50), \n 'min_child_samples': sp_randint(100, 500), \n 'min_child_weight': [1e-5, 1e-3, 1e-2, 1e-1, 1, 1e1, 1e2, 1e3, 1e4],\n 'subsample': sp_uniform(loc=0.2, scale=0.8), \n 'colsample_bytree': sp_uniform(loc=0.4, scale=0.6),\n 'reg_alpha': [0, 1e-1, 1, 2, 5, 7, 10, 50, 100],\n 'reg_lambda': [0, 1e-1, 1, 5, 10, 20, 50, 100],\n# \"bagging_fraction\" : [0.5, 0.6, 0.7, 0.8, 0.9],\n# \"feature_fraction\":[0.5, 0.6, 0.7, 0.8, 0.9]\n }\n\n\n\n\nfit_params={\n \"early_stopping_rounds\":100, \n \"eval_metric\" : 'multi_logloss', \n \"eval_set\" : [(X_test,y_test)],\n 'eval_names': ['valid'],\n #'callbacks': [lgb.reset_parameter(learning_rate=learning_rate_010_decay_power_099)],\n 'verbose': 100,\n 'categorical_feature': 'auto'}\n\n\nclf = lgb.LGBMClassifier(num_iteration=1000, max_depth=-1, random_state=314, silent=True,\n metric='multi_logloss', n_jobs=4, early_stopping_rounds=100,\n num_class=num_class, objective= \"multiclass\")\ngs = RandomizedSearchCV(\n estimator=clf, param_distributions=param_test, \n n_iter=n_HP_points_to_test,\n cv=3,\n refit=True,\n random_state=314,\n verbose=True)\n\nif do_gridsearch==True:\n gs.fit(X_train, y_train, **fit_params)\n print('Best score reached: {} with params: {} '.format(gs.best_score_, gs.best_params_))", "_____no_output_____" ], [ "# opt_parameters = gs.best_params_\nopt_parameters = {'colsample_bytree': 0.5284213741879101, 'min_child_samples': 125, \n 'min_child_weight': 10.0, 'num_leaves': 22, \n 'reg_alpha': 0.1, 'reg_lambda': 20, 'subsample': 0.3080033455431848} \n", "_____no_output_____" ] ], [ [ "# Model Training", "_____no_output_____" ] ], [ [ "### Run lightgbm to get weights for different class logits\n\nt0 = time.time()\n\nmodel_met = 'fit' #'xgb'#'train' #fit\n\nparams = {\n \"objective\" : \"multiclass\",\n \"num_class\" : num_class,\n \"num_leaves\" : 60,\n \"max_depth\": -1,\n \"learning_rate\" : 0.01,\n \"bagging_fraction\" : 0.9, # subsample\n \"feature_fraction\" : 0.9, # colsample_bytree\n \"bagging_freq\" : 5, # subsample_freq\n \"bagging_seed\" : 2018,\n \"verbosity\" : -1 }\n\nlgtrain, lgval = lgb.Dataset(X_train, y_train), lgb.Dataset(X_test, y_test)\n\nif model_met == 'train':\n params.update(opt_parameters)\n params.update(fit_params)\n \n lgbmodel = lgb.train(params, lgtrain, valid_sets=[lgtrain, lgval], \n num_iterations = 1000, metric= 'multi_logloss')\n train_logits = lgbmodel.predict(X_train) \n test_logits = lgbmodel.predict(X_test)\n\n train_pred = np.argmax(train_logits, axis=1) \n test_pred = np.argmax(test_logits, axis=1) \nelif model_met == 'xgb':\n dtrain = xgb.DMatrix(X_train, label=y_train)\n dtrain.save_binary('xgb_train.buffer')\n dtest = xgb.DMatrix(X_test, label=y_test)\n \n num_round = 200\n xgb_param = {'max_depth': 5, 'eta': 0.1, 'seed':2020, 'verbosity':1,\n 'objective': 'multi:softmax', 'num_class':num_class}\n xgb_param['nthread'] = 4\n xgb_param['eval_metric'] = 'mlogloss'\n evallist = [(dtest, 'eval'), (dtrain, 'train')]\n bst = xgb.train(xgb_param, dtrain, num_round, evallist\n , early_stopping_rounds=10\n )\n \n train_logits = bst.predict(xgb.DMatrix(X_train), ntree_limit=bst.best_ntree_limit) \n test_logits = bst.predict(xgb.DMatrix(X_test), ntree_limit=bst.best_ntree_limit)\n\n train_pred = train_logits \n test_pred = test_logits \n \nelse:\n\n lgbmodel = lgb.LGBMClassifier(**clf.get_params())\n #set optimal parameters\n lgbmodel.set_params(**opt_parameters)\n lgbmodel.fit(X_train, y_train, **fit_params)\n \n train_logits = lgbmodel.predict(X_train) \n test_logits = 
lgbmodel.predict(X_test)\n\n train_pred = train_logits \n test_pred = test_logits \n \nprint(\"Validation F1: {} and Training F1: {} \".format(\n f1_score(y_test, test_pred, average='macro'), \n f1_score(y_train, train_pred, average='macro')))\n\nif model_met == 'train':\n feat_imp = pd.DataFrame({'feature':probas_cols, \n 'logit_kind': [i.split('_')[0] for i in probas_cols],\n 'imp':lgbmodel.feature_importance()/sum(lgbmodel.feature_importance())})\n\n\n lgbmodel.save_model('lgb_classifier_81feats.txt', num_iteration=lgbmodel.best_iteration) \n print(\"\"\"Feature Importances by logits group: \n \"\"\", feat_imp.groupby(['logit_kind'])['imp'].sum())\nelse:\n feat_imp = pd.DataFrame({'feature':probas_cols, \n 'logit_kind': [i.split('_')[0] for i in probas_cols],\n 'imp':lgbmodel.feature_importances_/sum(lgbmodel.feature_importances_)})\n\n print(\"\"\"Feature Importances by logits group: \n \"\"\", feat_imp.groupby(['logit_kind'])['imp'].sum())\n \nimport shap\nexplainer = shap.TreeExplainer(lgbmodel)\nshap_values = explainer.shap_values(X)\nprint(\"Time Elapsed: {:}.\".format(format_time(time.time() - t0)))", "_____no_output_____" ], [ "for n, path in enumerate(['/kaggle/input/textphase1/', \n '/kaggle/input/testphase2/']):\n phase = n+1\n if phase==1:\n test_logits_path = test_logits_path_phase1\n else:\n test_logits_path = test_logits_path_phase2\n Preprocess.prepare_test(text_col, path, phase)\n X_test_phase1= Preprocess.X_test\n\n test_phase1 = preparelogits_df(test_logits_path,\n df=X_test_phase1, val_labels=None, **kwargs)\n \n phase1_logits = lgbmodel.predict(test_phase1[probas_cols].values) \n if model_met == 'train':\n predictions = np.argmax(phase1_logits, axis=1) \n elif model_met == 'xgb':\n phase1_logits = bst.predict(xgb.DMatrix(test_phase1[probas_cols]), \n ntree_limit=bst.best_ntree_limit) \n predictions = phase1_logits\n else:\n predictions = phase1_logits\n X_test_phase1['prediction_model']= predictions\n X_test_phase1['Prdtypecode']=X_test_phase1['prediction_model'].map(Preprocess.dict_id_to_code)\n print(X_test_phase1['Prdtypecode'].value_counts())\n X_test_phase1=X_test_phase1.drop(['prediction_model','Title','Description'],axis=1)\n X_test_phase1.to_csv(f'y_test_task1_phase{phase}_pred_.tsv',sep='\\t',index=False)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
d023cceb721a718610c9c02561150760ce5291e9
913,969
ipynb
Jupyter Notebook
06498_oc.ipynb
gerhajdu/rrl_binaries_1
a300ced2af822be5426a8f2c9405651a5ab1925c
[ "MIT" ]
1
2021-05-11T02:53:41.000Z
2021-05-11T02:53:41.000Z
06498_oc.ipynb
gerhajdu/rrl_binaries_1
a300ced2af822be5426a8f2c9405651a5ab1925c
[ "MIT" ]
null
null
null
06498_oc.ipynb
gerhajdu/rrl_binaries_1
a300ced2af822be5426a8f2c9405651a5ab1925c
[ "MIT" ]
null
null
null
2,135.441589
278,904
0.959987
[ [ [ "# Example usage of the O-C tools\n\n## This example shows how to construct and fit with MCMC the O-C diagram of the RR Lyrae star OGLE-BLG-RRLYR-02950", "_____no_output_____" ], [ "### We start with importing some libraries", "_____no_output_____" ] ], [ [ "import numpy as np\nimport oc_tools as octs", "_____no_output_____" ] ], [ [ "### We read in the data, set the period used to construct the O-C diagram (and to fold the light curve to construct the template curves, etc.), and the orders of the Fourier series we will fit to the light curve in the first and second iterations in the process", "_____no_output_____" ] ], [ [ "who = \"06498\"\nperiod = 0.589490\norder1 = 10\norder2 = 15\n\njd3, mag3 = np.loadtxt('data/{:s}.o3'.format(who), usecols=[0,1], unpack=True)\njd4, mag4 = np.loadtxt('data/{:s}.o4'.format(who), usecols=[0,1], unpack=True)", "_____no_output_____" ] ], [ [ "### We correct for possible average magnitude and amplitude differences between The OGLE-III and IV photometries by moving the intensity average of the former to the intensity average measured for the latter\n### The variables \"jd\" and \"mag\" contain the merged timings and magnitudes of the OGLE-III + IV photometry, wich are used from hereon to calculate the O-C values", "_____no_output_____" ] ], [ [ "mag3_shift=octs.shift_int(jd3, mag3, jd4, mag4, order1, period, plot=True)\njd = np.hstack((jd3,jd4))\nmag = np.hstack((mag3_shift, mag4))", "_____no_output_____" ] ], [ [ "### Calling the split_lc_seasons() function provides us with an array containing masks splitting the combined light curve into short sections, depending on the number of points\n\n### Optionally, the default splitting can be overriden by using the optional parameters \"limits\" and \"into\". For example, calling the function as:\n\nocts.split_lc_seasons(jd, plot=True, mag = mag, limits = np.array((0, 8, np.inf)), into = np.array((0, 2)))\n### will always split seasons with at least nine points into two separate segments", "_____no_output_____" ] ], [ [ "splits = octs.split_lc_seasons(jd, plot=True, mag = mag)", "_____no_output_____" ] ], [ [ "### The function calc_oc_points() fits the light curve of the variable to produce a template, and uses it to determine the O-C points of the individual segments", "_____no_output_____" ] ], [ [ "oc_jd, oc_oc = octs.calc_oc_points(jd, mag, period, order1, splits, figure=True)", "_____no_output_____" ] ], [ [ "### We make a guess at the binary parameters ", "_____no_output_____" ] ], [ [ "e = 0.37\nP_orb = 2800.\nT_peri = 6040\na_sini = 0.011\nomega = -0.7\na= -8e-03\nb= 3e-06\nc= -3.5e-10\nparams = np.asarray((e, P_orb, T_peri, a_sini, omega, a, b, c))\n\nlower_bounds = np.array((0., 100., -np.inf, 0.0, -np.inf, -np.inf, -np.inf, -np.inf))\nupper_bounds = np.array((0.99, 6000., np.inf, 1.0, np.inf, np.inf, np.inf, np.inf))", "_____no_output_____" ] ], [ [ "### We use the above guesses as the starting point (dashed grey line on the plot below) to find the O-C LTTE solution of the first iteration of our procedure. The yellow line on the plot shows the fit. 
The vertical blue bar shows the timing of the periastron passage\n\n### Note that in this function also provides the timings of the individual observations corrected for this initial O-C solution", "_____no_output_____" ] ], [ [ "params2, jd2 = octs.fit_oc1(oc_jd, oc_oc, jd, params, lower_bounds, upper_bounds)", "_____no_output_____" ] ], [ [ "### We use the initial solution as the starting point for the MCMC fit, therefore we prepare it first by transforming $e$ and $\\omega$ to $\\sqrt{e}\\sin{\\omega}$ and $\\sqrt{e}\\sin{\\omega}$\n### For each parameter, we also have a lower and higher limit in its prior, but the values given for $\\sqrt{e}\\sin{\\omega}$ and $\\sqrt{e}\\sin{\\omega}$ are ignored, as these are handled separately within the function checking the priors", "_____no_output_____" ] ], [ [ "start = np.zeros_like(params2)\nstart[0:3] = params2[1:4]\nstart[3] = np.sqrt(params2[0]) * np.sin(params2[4])\nstart[4] = np.sqrt(params2[0]) * np.cos(params2[4])\nstart[5:] = params2[5:]\n\nprior_ranges = np.asanyarray([[start[0]*0.9, start[0]*1.1],\n [start[1]-start[0]/4., start[1]+start[0]/4.],\n [0., 0.057754266],\n [0., 0.],\n [0., 0.],\n [-1., 1.],\n [-1e-4, 1e-4],\n [-1e-8, 1e-8]])", "_____no_output_____" ] ], [ [ "### We set a random seed to get reproducible results, then prepare the initial positions of the 200 walkers we are using during the fitting. During this, we check explicitly that these correspond to a position with a finite prior (i.e., they are not outside of the prior ranges defined above)", "_____no_output_____" ] ], [ [ "np.random.seed(0)\nwalkers = 200\nrandom_scales = np.array((1e+1, 1e+1, 1e-4, 1e-2, 1e-2, 1e-3, 2e-7, 5e-11))\npos = np.zeros((walkers, start.size))\n\nfor i in range(walkers):\n pos[i,:] = start + random_scales * np.random.normal(size=8)\n while np.isinf(octs.log_prior(pos[i,:], prior_ranges)):\n pos[i,:] = start + random_scales * np.random.normal(size=8)", "_____no_output_____" ] ], [ [ "### We recalculate the O-C points, but this time we use a higher-order Fourier series to fit the light curve with the modified timings, and we also calculate errors using bootstrapping", "_____no_output_____" ] ], [ [ "oc_jd, oc_oc, oc_sd = octs.calc_oc_points(jd, mag, period, order2, splits,\n bootstrap_times = 500, jd_mod = jd2,\n figure=True)", "_____no_output_____" ] ], [ [ "### We fit the O-C points measured above using MCMC by calling the run_mcmc() function\n### We plot both the fit, as well as the triangle plot showing the two- (and one-)dimensional posterior distributions (these can be suppressed by setting the optional parameters \"plot_oc\" and \"plot_triangle\" to False)", "_____no_output_____" ] ], [ [ "sampler, fit_mcmc, oc_sigmas, param_means, param_sigmas, fit_at_points, K =\\\nocts.run_mcmc(oc_jd, oc_oc, oc_sd,\n prior_ranges, pos,\n nsteps = 31000, discard = 1000,\n thin = 300, processes=1)", "100%|██████████| 31000/31000 [03:08<00:00, 164.32it/s]\n100%|███████████████████████████████████| 20000/20000 [00:02<00:00, 8267.13it/s]\n" ] ], [ [ "## The estimated LTTE parameters are:", "_____no_output_____" ] ], [ [ "print(\"Orbital period: {:d} +- {:d} [d]\".format(int(param_means[0]),\n int(param_sigmas[0])))\nprint(\"Projected semi-major axis: {:.3f} +- {:.3f} [AU]\".format(param_means[2]*173.144633,\n param_sigmas[2]*173.144633))\nprint(\"Eccentricity: {:.3f} +- {:.3f}\".format(param_means[3],\n param_sigmas[3]))\nprint(\"Argumen of periastron: {:+4d} +- {:d} [deg]\".format(int(param_means[4]*180/np.pi),\n 
int(param_sigmas[4]*180/np.pi)))\nprint(\"Periastron passage time: {:d} +- {:d} [HJD-2450000]\".format(int(param_means[1]),\n int(param_sigmas[1])))\nprint(\"Period-change rate: {:+.3f} +- {:.3f} [d/Myr] \".format(param_means[7]*365.2422*2e6*period,\n param_sigmas[7]*365.2422*2e6*period))\nprint(\"RV semi-amplitude: {:5.2f} +- {:.2f} [km/s]\".format(K[0], K[1]))\nprint(\"Mass function: {:.5f} +- {:.5f} [M_Sun]\".format(K[2], K[3]))", "Orbital period: 2803 +- 3 [d]\nProjected semi-major axis: 2.492 +- 0.010 [AU]\nEccentricity: 0.136 +- 0.008\nArgumen of periastron: -76 +- 3 [deg]\nPeriastron passage time: 6538 +- 24 [HJD-2450000]\nPeriod-change rate: -0.002 +- 0.005 [d/Myr] \nRV semi-amplitude: 9.76 +- 0.04 [km/s]\nMass function: 0.26290 +- 0.00334 [M_Sun]\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d023ccfec1814303c75ac783f19ff43185d83197
153,928
ipynb
Jupyter Notebook
data/ferch_fullyears.ipynb
carocamargo/ohw20-proj-pyxpcm
301a36564167e22ab644f51ad1872c02bdcbbbb4
[ "BSD-3-Clause" ]
null
null
null
data/ferch_fullyears.ipynb
carocamargo/ohw20-proj-pyxpcm
301a36564167e22ab644f51ad1872c02bdcbbbb4
[ "BSD-3-Clause" ]
null
null
null
data/ferch_fullyears.ipynb
carocamargo/ohw20-proj-pyxpcm
301a36564167e22ab644f51ad1872c02bdcbbbb4
[ "BSD-3-Clause" ]
null
null
null
168.227322
7,537
0.684684
[ [ [ "# load libraries\nimport xarray as xr\nimport numpy as np\n\nfrom argopy import DataFetcher as ArgoDataFetcher\n\nfrom datetime import datetime, timedelta\nimport pandas as pd\n\n# User defined functions:\ndef get_argo_region_data(llon,rlon,llat,ulat,depthmin,depthmax,time_in,time_f):\n \n \"\"\"Function to get argo data for a given lat,lon box (using Argopy), \n and return a 2D array collection of vertical profile for the given region\n \n Parameters\n ----------\n llon : int\n left longitude\n rlon : int\n right longtidue\n ulat : int\n upper latitude\n llat : int\n lower latitude\n time_in : str/datetime object\n the start time of desired range, formatted Y-m-d\n time_f : str/datetime object\n the end time of desired range, formatted Y-m-d\n \n \n Returns\n ---------\n xarray\n The result is a xarray of the vertical profile for the given range and region.\n \n \n \"\"\"\n\n ds_points = ArgoDataFetcher(src='erddap').region([llon,rlon, llat,ulat, depthmin, depthmax,time_in,time_f]).to_xarray()\n ds_profiles = ds_points.argo.point2profile()\n return ds_profiles\n\ndef spliced_argo_region_data(llon,rlon,llat,ulat,depthmin,depthmax,time_start,time_end):\n \"\"\"Function that gets the argo data for given latitude and longitude bounding box\n (using Argopy), and given start and end time range to return a 2D array collection of vertical\n profile for the given region and time frame\n \n Parameters\n ----------\n llon : int\n left longitude\n rlon : int\n right longtidue\n ulat : int\n upper latitude\n llat : int\n lower latitude\n time_in : str/datetime object\n the start time of desired range, formatted Y-m-d\n time_f : str/datetime object\n the end time of desired range, formatted Y-m-d\n \n \n Returns\n ---------\n xarray\n The result is a xarray of the vertical profile for the given range and region.\n \n \n \"\"\"\n\n \n #step\n max_dt = timedelta(days = 10)\n \n if isinstance(time_start, str):\n time_start = datetime.strptime(time_start,\"%Y-%m-%d\")\n if isinstance(time_end, str):\n time_end = datetime.strptime(time_end,\"%Y-%m-%d\")\n \n if time_end - time_start <= max_dt:\n ds = get_argo_region_data(llon,rlon,llat,ulat,depthmin,depthmax,time_start,time_end)\n return ds\n else:\n early_end = time_start+max_dt\n ds = get_argo_region_data(llon,rlon,llat,ulat,depthmin,depthmax,time_start,early_end)\n print(\"Retrived data from \" + str(time_start) + \" to \" + str(early_end) + \", retreived \" + str(len(ds.N_PROF)) + \" profiles\")\n ds2 = spliced_argo_region_data(llon,rlon,llat,ulat,depthmin,depthmax, early_end,time_end)\n return xr.concat([ds,ds2],dim='N_PROF')\n", "_____no_output_____" ], [ "llon=-90;rlon=0\nulat=70;llat=0 \ndepthmin=0;depthmax=1400\ntime_start='2014-01-01'\ntime_end='2020-01-01'\nds=spliced_argo_region_data(llon,rlon,llat,ulat,depthmin,depthmax,time_start,time_end)", "Retrived data from 2014-01-01 00:00:00 to 2014-01-11 00:00:00, retreived 525 profiles\nRetrived data from 2014-01-11 00:00:00 to 2014-01-21 00:00:00, retreived 532 profiles\nRetrived data from 2014-01-21 00:00:00 to 2014-01-31 00:00:00, retreived 517 profiles\nRetrived data from 2014-01-31 00:00:00 to 2014-02-10 00:00:00, retreived 533 profiles\nRetrived data from 2014-02-10 00:00:00 to 2014-02-20 00:00:00, retreived 552 profiles\nRetrived data from 2014-02-20 00:00:00 to 2014-03-02 00:00:00, retreived 537 profiles\nRetrived data from 2014-03-02 00:00:00 to 2014-03-12 00:00:00, retreived 544 profiles\nRetrived data from 2014-03-12 00:00:00 to 2014-03-22 00:00:00, retreived 545 profiles\nRetrived 
data from 2014-03-22 00:00:00 to 2014-04-01 00:00:00, retreived 548 profiles\nRetrived data from 2014-04-01 00:00:00 to 2014-04-11 00:00:00, retreived 534 profiles\nRetrived data from 2014-04-11 00:00:00 to 2014-04-21 00:00:00, retreived 538 profiles\nRetrived data from 2014-04-21 00:00:00 to 2014-05-01 00:00:00, retreived 559 profiles\nRetrived data from 2014-05-01 00:00:00 to 2014-05-11 00:00:00, retreived 546 profiles\nRetrived data from 2014-05-11 00:00:00 to 2014-05-21 00:00:00, retreived 565 profiles\nRetrived data from 2014-05-21 00:00:00 to 2014-05-31 00:00:00, retreived 566 profiles\nRetrived data from 2014-05-31 00:00:00 to 2014-06-10 00:00:00, retreived 576 profiles\nRetrived data from 2014-06-10 00:00:00 to 2014-06-20 00:00:00, retreived 572 profiles\nRetrived data from 2014-06-20 00:00:00 to 2014-06-30 00:00:00, retreived 582 profiles\nRetrived data from 2014-06-30 00:00:00 to 2014-07-10 00:00:00, retreived 581 profiles\nRetrived data from 2014-07-10 00:00:00 to 2014-07-20 00:00:00, retreived 619 profiles\nRetrived data from 2014-07-20 00:00:00 to 2014-07-30 00:00:00, retreived 585 profiles\nRetrived data from 2014-07-30 00:00:00 to 2014-08-09 00:00:00, retreived 564 profiles\nRetrived data from 2014-08-09 00:00:00 to 2014-08-19 00:00:00, retreived 577 profiles\nRetrived data from 2014-08-19 00:00:00 to 2014-08-29 00:00:00, retreived 564 profiles\nRetrived data from 2014-08-29 00:00:00 to 2014-09-08 00:00:00, retreived 572 profiles\nRetrived data from 2014-09-08 00:00:00 to 2014-09-18 00:00:00, retreived 589 profiles\nRetrived data from 2014-09-18 00:00:00 to 2014-09-28 00:00:00, retreived 613 profiles\nRetrived data from 2014-09-28 00:00:00 to 2014-10-08 00:00:00, retreived 590 profiles\nRetrived data from 2014-10-08 00:00:00 to 2014-10-18 00:00:00, retreived 599 profiles\n" ], [ "ds", "_____no_output_____" ], [ "ds.to_netcdf(str('/home/jovyan/ohw20-proj-pyxpcm/data/2014-Jan_to_oct.nc'))", "_____no_output_____" ], [ "llon=-90;rlon=0\nulat=70;llat=0 \ndepthmin=0;depthmax=1400\ntime_start='2014-11-01'\ntime_end='2020-01-01'\nds=spliced_argo_region_data(llon,rlon,llat,ulat,depthmin,depthmax,time_start,time_end)", "Retrived data from 2014-11-01 00:00:00 to 2014-11-11 00:00:00, retreived 622 profiles\nRetrived data from 2014-11-11 00:00:00 to 2014-11-21 00:00:00, retreived 585 profiles\nRetrived data from 2014-11-21 00:00:00 to 2014-12-01 00:00:00, retreived 604 profiles\nRetrived data from 2014-12-01 00:00:00 to 2014-12-11 00:00:00, retreived 582 profiles\nRetrived data from 2014-12-11 00:00:00 to 2014-12-21 00:00:00, retreived 575 profiles\nRetrived data from 2014-12-21 00:00:00 to 2014-12-31 00:00:00, retreived 573 profiles\nRetrived data from 2014-12-31 00:00:00 to 2015-01-10 00:00:00, retreived 572 profiles\nRetrived data from 2015-01-10 00:00:00 to 2015-01-20 00:00:00, retreived 568 profiles\nRetrived data from 2015-01-20 00:00:00 to 2015-01-30 00:00:00, retreived 569 profiles\nRetrived data from 2015-01-30 00:00:00 to 2015-02-09 00:00:00, retreived 581 profiles\nRetrived data from 2015-02-09 00:00:00 to 2015-02-19 00:00:00, retreived 582 profiles\nRetrived data from 2015-02-19 00:00:00 to 2015-03-01 00:00:00, retreived 574 profiles\nRetrived data from 2015-03-01 00:00:00 to 2015-03-11 00:00:00, retreived 560 profiles\nRetrived data from 2015-03-11 00:00:00 to 2015-03-21 00:00:00, retreived 552 profiles\nRetrived data from 2015-03-21 00:00:00 to 2015-03-31 00:00:00, retreived 569 profiles\nRetrived data from 2015-03-31 00:00:00 to 2015-04-10 00:00:00, retreived 570 
profiles\nRetrived data from 2015-04-10 00:00:00 to 2015-04-20 00:00:00, retreived 580 profiles\nRetrived data from 2015-04-20 00:00:00 to 2015-04-30 00:00:00, retreived 586 profiles\nRetrived data from 2015-04-30 00:00:00 to 2015-05-10 00:00:00, retreived 620 profiles\nRetrived data from 2015-05-10 00:00:00 to 2015-05-20 00:00:00, retreived 636 profiles\nRetrived data from 2015-05-20 00:00:00 to 2015-05-30 00:00:00, retreived 632 profiles\nRetrived data from 2015-05-30 00:00:00 to 2015-06-09 00:00:00, retreived 614 profiles\nRetrived data from 2015-06-09 00:00:00 to 2015-06-19 00:00:00, retreived 657 profiles\nRetrived data from 2015-06-19 00:00:00 to 2015-06-29 00:00:00, retreived 632 profiles\nRetrived data from 2015-06-29 00:00:00 to 2015-07-09 00:00:00, retreived 635 profiles\nRetrived data from 2015-07-09 00:00:00 to 2015-07-19 00:00:00, retreived 637 profiles\nRetrived data from 2015-07-19 00:00:00 to 2015-07-29 00:00:00, retreived 632 profiles\nRetrived data from 2015-07-29 00:00:00 to 2015-08-08 00:00:00, retreived 618 profiles\nRetrived data from 2015-08-08 00:00:00 to 2015-08-18 00:00:00, retreived 619 profiles\nRetrived data from 2015-08-18 00:00:00 to 2015-08-28 00:00:00, retreived 610 profiles\nRetrived data from 2015-08-28 00:00:00 to 2015-09-07 00:00:00, retreived 611 profiles\nRetrived data from 2015-09-07 00:00:00 to 2015-09-17 00:00:00, retreived 616 profiles\nRetrived data from 2015-09-17 00:00:00 to 2015-09-27 00:00:00, retreived 574 profiles\nRetrived data from 2015-09-27 00:00:00 to 2015-10-07 00:00:00, retreived 606 profiles\nRetrived data from 2015-10-07 00:00:00 to 2015-10-17 00:00:00, retreived 624 profiles\nRetrived data from 2015-10-17 00:00:00 to 2015-10-27 00:00:00, retreived 603 profiles\nRetrived data from 2015-10-27 00:00:00 to 2015-11-06 00:00:00, retreived 651 profiles\nRetrived data from 2015-11-06 00:00:00 to 2015-11-16 00:00:00, retreived 709 profiles\nRetrived data from 2015-11-16 00:00:00 to 2015-11-26 00:00:00, retreived 818 profiles\nRetrived data from 2015-11-26 00:00:00 to 2015-12-06 00:00:00, retreived 702 profiles\nRetrived data from 2015-12-06 00:00:00 to 2015-12-16 00:00:00, retreived 639 profiles\nRetrived data from 2015-12-16 00:00:00 to 2015-12-26 00:00:00, retreived 593 profiles\nRetrived data from 2015-12-26 00:00:00 to 2016-01-05 00:00:00, retreived 621 profiles\nRetrived data from 2016-01-05 00:00:00 to 2016-01-15 00:00:00, retreived 616 profiles\nRetrived data from 2016-01-15 00:00:00 to 2016-01-25 00:00:00, retreived 609 profiles\nRetrived data from 2016-01-25 00:00:00 to 2016-02-04 00:00:00, retreived 610 profiles\nRetrived data from 2016-02-04 00:00:00 to 2016-02-14 00:00:00, retreived 631 profiles\nRetrived data from 2016-02-14 00:00:00 to 2016-02-24 00:00:00, retreived 597 profiles\nRetrived data from 2016-02-24 00:00:00 to 2016-03-05 00:00:00, retreived 599 profiles\nRetrived data from 2016-03-05 00:00:00 to 2016-03-15 00:00:00, retreived 625 profiles\nRetrived data from 2016-03-15 00:00:00 to 2016-03-25 00:00:00, retreived 630 profiles\nRetrived data from 2016-03-25 00:00:00 to 2016-04-04 00:00:00, retreived 623 profiles\nRetrived data from 2016-04-04 00:00:00 to 2016-04-14 00:00:00, retreived 618 profiles\nRetrived data from 2016-04-14 00:00:00 to 2016-04-24 00:00:00, retreived 619 profiles\nRetrived data from 2016-04-24 00:00:00 to 2016-05-04 00:00:00, retreived 615 profiles\nRetrived data from 2016-05-04 00:00:00 to 2016-05-14 00:00:00, retreived 648 profiles\nRetrived data from 2016-05-14 00:00:00 to 2016-05-24 00:00:00, 
retreived 732 profiles\nRetrived data from 2016-05-24 00:00:00 to 2016-06-03 00:00:00, retreived 786 profiles\nRetrived data from 2016-06-03 00:00:00 to 2016-06-13 00:00:00, retreived 792 profiles\nRetrived data from 2016-06-13 00:00:00 to 2016-06-23 00:00:00, retreived 734 profiles\nRetrived data from 2016-06-23 00:00:00 to 2016-07-03 00:00:00, retreived 746 profiles\nRetrived data from 2016-07-03 00:00:00 to 2016-07-13 00:00:00, retreived 755 profiles\nRetrived data from 2016-07-13 00:00:00 to 2016-07-23 00:00:00, retreived 777 profiles\nRetrived data from 2016-07-23 00:00:00 to 2016-08-02 00:00:00, retreived 734 profiles\nRetrived data from 2016-08-02 00:00:00 to 2016-08-12 00:00:00, retreived 726 profiles\nRetrived data from 2016-08-12 00:00:00 to 2016-08-22 00:00:00, retreived 702 profiles\nRetrived data from 2016-08-22 00:00:00 to 2016-09-01 00:00:00, retreived 680 profiles\nRetrived data from 2016-09-01 00:00:00 to 2016-09-11 00:00:00, retreived 694 profiles\nRetrived data from 2016-09-11 00:00:00 to 2016-09-21 00:00:00, retreived 700 profiles\nRetrived data from 2016-09-21 00:00:00 to 2016-10-01 00:00:00, retreived 683 profiles\nRetrived data from 2016-10-01 00:00:00 to 2016-10-11 00:00:00, retreived 618 profiles\nRetrived data from 2016-10-11 00:00:00 to 2016-10-21 00:00:00, retreived 612 profiles\nRetrived data from 2016-10-21 00:00:00 to 2016-10-31 00:00:00, retreived 622 profiles\nRetrived data from 2016-10-31 00:00:00 to 2016-11-10 00:00:00, retreived 607 profiles\nRetrived data from 2016-11-10 00:00:00 to 2016-11-20 00:00:00, retreived 618 profiles\nRetrived data from 2016-11-20 00:00:00 to 2016-11-30 00:00:00, retreived 612 profiles\nRetrived data from 2016-11-30 00:00:00 to 2016-12-10 00:00:00, retreived 614 profiles\nRetrived data from 2016-12-10 00:00:00 to 2016-12-20 00:00:00, retreived 612 profiles\nRetrived data from 2016-12-20 00:00:00 to 2016-12-30 00:00:00, retreived 608 profiles\nRetrived data from 2016-12-30 00:00:00 to 2017-01-09 00:00:00, retreived 592 profiles\nRetrived data from 2017-01-09 00:00:00 to 2017-01-19 00:00:00, retreived 582 profiles\nRetrived data from 2017-01-19 00:00:00 to 2017-01-29 00:00:00, retreived 577 profiles\nRetrived data from 2017-01-29 00:00:00 to 2017-02-08 00:00:00, retreived 583 profiles\nRetrived data from 2017-02-08 00:00:00 to 2017-02-18 00:00:00, retreived 564 profiles\nRetrived data from 2017-02-18 00:00:00 to 2017-02-28 00:00:00, retreived 566 profiles\nRetrived data from 2017-02-28 00:00:00 to 2017-03-10 00:00:00, retreived 562 profiles\nRetrived data from 2017-03-10 00:00:00 to 2017-03-20 00:00:00, retreived 583 profiles\nRetrived data from 2017-03-20 00:00:00 to 2017-03-30 00:00:00, retreived 629 profiles\nRetrived data from 2017-03-30 00:00:00 to 2017-04-09 00:00:00, retreived 609 profiles\nRetrived data from 2017-04-09 00:00:00 to 2017-04-19 00:00:00, retreived 597 profiles\nRetrived data from 2017-04-19 00:00:00 to 2017-04-29 00:00:00, retreived 602 profiles\nRetrived data from 2017-04-29 00:00:00 to 2017-05-09 00:00:00, retreived 581 profiles\nRetrived data from 2017-05-09 00:00:00 to 2017-05-19 00:00:00, retreived 580 profiles\nRetrived data from 2017-05-19 00:00:00 to 2017-05-29 00:00:00, retreived 525 profiles\nRetrived data from 2017-05-29 00:00:00 to 2017-06-08 00:00:00, retreived 561 profiles\nRetrived data from 2017-06-08 00:00:00 to 2017-06-18 00:00:00, retreived 555 profiles\nRetrived data from 2017-06-18 00:00:00 to 2017-06-28 00:00:00, retreived 541 profiles\nRetrived data from 2017-06-28 00:00:00 to 
2017-07-08 00:00:00, retreived 535 profiles\nRetrived data from 2017-07-08 00:00:00 to 2017-07-18 00:00:00, retreived 540 profiles\nRetrived data from 2017-07-18 00:00:00 to 2017-07-28 00:00:00, retreived 577 profiles\nRetrived data from 2017-07-28 00:00:00 to 2017-08-07 00:00:00, retreived 570 profiles\nRetrived data from 2017-08-07 00:00:00 to 2017-08-17 00:00:00, retreived 578 profiles\nRetrived data from 2017-08-17 00:00:00 to 2017-08-27 00:00:00, retreived 574 profiles\nRetrived data from 2017-08-27 00:00:00 to 2017-09-06 00:00:00, retreived 578 profiles\nRetrived data from 2017-09-06 00:00:00 to 2017-09-16 00:00:00, retreived 636 profiles\nRetrived data from 2017-09-16 00:00:00 to 2017-09-26 00:00:00, retreived 627 profiles\nRetrived data from 2017-09-26 00:00:00 to 2017-10-06 00:00:00, retreived 597 profiles\nRetrived data from 2017-10-06 00:00:00 to 2017-10-16 00:00:00, retreived 624 profiles\nRetrived data from 2017-10-16 00:00:00 to 2017-10-26 00:00:00, retreived 634 profiles\nRetrived data from 2017-10-26 00:00:00 to 2017-11-05 00:00:00, retreived 627 profiles\nRetrived data from 2017-11-05 00:00:00 to 2017-11-15 00:00:00, retreived 596 profiles\nRetrived data from 2017-11-15 00:00:00 to 2017-11-25 00:00:00, retreived 607 profiles\nRetrived data from 2017-11-25 00:00:00 to 2017-12-05 00:00:00, retreived 597 profiles\nRetrived data from 2017-12-05 00:00:00 to 2017-12-15 00:00:00, retreived 591 profiles\nRetrived data from 2017-12-15 00:00:00 to 2017-12-25 00:00:00, retreived 573 profiles\nRetrived data from 2017-12-25 00:00:00 to 2018-01-04 00:00:00, retreived 587 profiles\nRetrived data from 2018-01-04 00:00:00 to 2018-01-14 00:00:00, retreived 579 profiles\nRetrived data from 2018-01-14 00:00:00 to 2018-01-24 00:00:00, retreived 576 profiles\nRetrived data from 2018-01-24 00:00:00 to 2018-02-03 00:00:00, retreived 567 profiles\nRetrived data from 2018-02-03 00:00:00 to 2018-02-13 00:00:00, retreived 568 profiles\nRetrived data from 2018-02-13 00:00:00 to 2018-02-23 00:00:00, retreived 575 profiles\nRetrived data from 2018-02-23 00:00:00 to 2018-03-05 00:00:00, retreived 573 profiles\nRetrived data from 2018-03-05 00:00:00 to 2018-03-15 00:00:00, retreived 561 profiles\nRetrived data from 2018-03-15 00:00:00 to 2018-03-25 00:00:00, retreived 590 profiles\nRetrived data from 2018-03-25 00:00:00 to 2018-04-04 00:00:00, retreived 630 profiles\nRetrived data from 2018-04-04 00:00:00 to 2018-04-14 00:00:00, retreived 602 profiles\nRetrived data from 2018-04-14 00:00:00 to 2018-04-24 00:00:00, retreived 573 profiles\nRetrived data from 2018-04-24 00:00:00 to 2018-05-04 00:00:00, retreived 574 profiles\nRetrived data from 2018-05-04 00:00:00 to 2018-05-14 00:00:00, retreived 594 profiles\nRetrived data from 2018-05-14 00:00:00 to 2018-05-24 00:00:00, retreived 579 profiles\nRetrived data from 2018-05-24 00:00:00 to 2018-06-03 00:00:00, retreived 585 profiles\nRetrived data from 2018-06-03 00:00:00 to 2018-06-13 00:00:00, retreived 646 profiles\nRetrived data from 2018-06-13 00:00:00 to 2018-06-23 00:00:00, retreived 642 profiles\nRetrived data from 2018-06-23 00:00:00 to 2018-07-03 00:00:00, retreived 598 profiles\nRetrived data from 2018-07-03 00:00:00 to 2018-07-13 00:00:00, retreived 603 profiles\nRetrived data from 2018-07-13 00:00:00 to 2018-07-23 00:00:00, retreived 582 profiles\nRetrived data from 2018-07-23 00:00:00 to 2018-08-02 00:00:00, retreived 610 profiles\nRetrived data from 2018-08-02 00:00:00 to 2018-08-12 00:00:00, retreived 613 profiles\nRetrived data from 2018-08-12 
00:00:00 to 2018-08-22 00:00:00, retreived 627 profiles\nRetrived data from 2018-08-22 00:00:00 to 2018-09-01 00:00:00, retreived 636 profiles\nRetrived data from 2018-09-01 00:00:00 to 2018-09-11 00:00:00, retreived 640 profiles\nRetrived data from 2018-09-11 00:00:00 to 2018-09-21 00:00:00, retreived 621 profiles\nRetrived data from 2018-09-21 00:00:00 to 2018-10-01 00:00:00, retreived 636 profiles\nRetrived data from 2018-10-01 00:00:00 to 2018-10-11 00:00:00, retreived 585 profiles\nRetrived data from 2018-10-11 00:00:00 to 2018-10-21 00:00:00, retreived 615 profiles\nRetrived data from 2018-10-21 00:00:00 to 2018-10-31 00:00:00, retreived 590 profiles\nRetrived data from 2018-10-31 00:00:00 to 2018-11-10 00:00:00, retreived 615 profiles\nRetrived data from 2018-11-10 00:00:00 to 2018-11-20 00:00:00, retreived 627 profiles\nRetrived data from 2018-11-20 00:00:00 to 2018-11-30 00:00:00, retreived 631 profiles\nRetrived data from 2018-11-30 00:00:00 to 2018-12-10 00:00:00, retreived 547 profiles\nRetrived data from 2018-12-10 00:00:00 to 2018-12-20 00:00:00, retreived 542 profiles\nRetrived data from 2018-12-20 00:00:00 to 2018-12-30 00:00:00, retreived 527 profiles\nRetrived data from 2018-12-30 00:00:00 to 2019-01-09 00:00:00, retreived 527 profiles\nRetrived data from 2019-01-09 00:00:00 to 2019-01-19 00:00:00, retreived 535 profiles\nRetrived data from 2019-01-19 00:00:00 to 2019-01-29 00:00:00, retreived 516 profiles\nRetrived data from 2019-01-29 00:00:00 to 2019-02-08 00:00:00, retreived 522 profiles\nRetrived data from 2019-02-08 00:00:00 to 2019-02-18 00:00:00, retreived 535 profiles\nRetrived data from 2019-02-18 00:00:00 to 2019-02-28 00:00:00, retreived 520 profiles\nRetrived data from 2019-02-28 00:00:00 to 2019-03-10 00:00:00, retreived 530 profiles\nRetrived data from 2019-03-10 00:00:00 to 2019-03-20 00:00:00, retreived 549 profiles\nRetrived data from 2019-03-20 00:00:00 to 2019-03-30 00:00:00, retreived 578 profiles\nRetrived data from 2019-03-30 00:00:00 to 2019-04-09 00:00:00, retreived 739 profiles\nRetrived data from 2019-04-09 00:00:00 to 2019-04-19 00:00:00, retreived 806 profiles\nRetrived data from 2019-04-19 00:00:00 to 2019-04-29 00:00:00, retreived 733 profiles\nRetrived data from 2019-04-29 00:00:00 to 2019-05-09 00:00:00, retreived 681 profiles\nRetrived data from 2019-05-09 00:00:00 to 2019-05-19 00:00:00, retreived 579 profiles\nRetrived data from 2019-05-19 00:00:00 to 2019-05-29 00:00:00, retreived 546 profiles\nRetrived data from 2019-05-29 00:00:00 to 2019-06-08 00:00:00, retreived 531 profiles\nRetrived data from 2019-06-08 00:00:00 to 2019-06-18 00:00:00, retreived 534 profiles\nRetrived data from 2019-06-18 00:00:00 to 2019-06-28 00:00:00, retreived 561 profiles\nRetrived data from 2019-06-28 00:00:00 to 2019-07-08 00:00:00, retreived 551 profiles\nRetrived data from 2019-07-08 00:00:00 to 2019-07-18 00:00:00, retreived 567 profiles\nRetrived data from 2019-07-18 00:00:00 to 2019-07-28 00:00:00, retreived 564 profiles\nRetrived data from 2019-07-28 00:00:00 to 2019-08-07 00:00:00, retreived 562 profiles\nRetrived data from 2019-08-07 00:00:00 to 2019-08-17 00:00:00, retreived 566 profiles\nRetrived data from 2019-08-17 00:00:00 to 2019-08-27 00:00:00, retreived 591 profiles\nRetrived data from 2019-08-27 00:00:00 to 2019-09-06 00:00:00, retreived 612 profiles\nRetrived data from 2019-09-06 00:00:00 to 2019-09-16 00:00:00, retreived 587 profiles\nRetrived data from 2019-09-16 00:00:00 to 2019-09-26 00:00:00, retreived 596 profiles\nRetrived data 
from 2019-09-26 00:00:00 to 2019-10-06 00:00:00, retreived 610 profiles\nRetrived data from 2019-10-06 00:00:00 to 2019-10-16 00:00:00, retreived 594 profiles\nRetrived data from 2019-10-16 00:00:00 to 2019-10-26 00:00:00, retreived 597 profiles\nRetrived data from 2019-10-26 00:00:00 to 2019-11-05 00:00:00, retreived 573 profiles\nRetrived data from 2019-11-05 00:00:00 to 2019-11-15 00:00:00, retreived 556 profiles\nRetrived data from 2019-11-15 00:00:00 to 2019-11-25 00:00:00, retreived 563 profiles\nRetrived data from 2019-11-25 00:00:00 to 2019-12-05 00:00:00, retreived 576 profiles\nRetrived data from 2019-12-05 00:00:00 to 2019-12-15 00:00:00, retreived 589 profiles\nRetrived data from 2019-12-15 00:00:00 to 2019-12-25 00:00:00, retreived 574 profiles\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
d023e55f92924c0e5581dc92bf545d8395ccefba
20,698
ipynb
Jupyter Notebook
notebooks/1-Using-ImageJ/Ops/stats/percentile.ipynb
sonjoonho/tutorials
37a59e3c66e66303f66523d26bbb38e4bd140eaf
[ "Unlicense" ]
121
2017-02-12T23:05:19.000Z
2022-03-03T01:18:46.000Z
notebooks/1-Using-ImageJ/Ops/stats/percentile.ipynb
seanwallawalla-forks/tutorials
5dce8be0f6f1a3ff679c66cb53797c3e8130a573
[ "Unlicense" ]
71
2017-02-10T17:21:54.000Z
2022-03-26T02:37:20.000Z
notebooks/1-Using-ImageJ/Ops/stats/percentile.ipynb
seanwallawalla-forks/tutorials
5dce8be0f6f1a3ff679c66cb53797c3e8130a573
[ "Unlicense" ]
90
2017-03-16T10:22:07.000Z
2022-01-26T14:16:00.000Z
121.752941
16,781
0.887719
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
d023f853b1ac57a0d1e3c003e84ee1c1a7d1770e
4,397
ipynb
Jupyter Notebook
notebooks/2020-05-15 tscan refactor.ipynb
danielsuo/toy_flood
471d3c4091d86d4a00fbf910937d4e60fdaf79a1
[ "MIT" ]
1
2020-04-30T07:42:12.000Z
2020-04-30T07:42:12.000Z
notebooks/2020-05-15 tscan refactor.ipynb
danielsuo/toy_flood
471d3c4091d86d4a00fbf910937d4e60fdaf79a1
[ "MIT" ]
3
2020-09-25T22:37:57.000Z
2022-02-09T23:38:23.000Z
notebooks/2020-05-15 tscan refactor.ipynb
danielsuo/skgaip
471d3c4091d86d4a00fbf910937d4e60fdaf79a1
[ "MIT" ]
null
null
null
28.185897
101
0.521719
[ [ [ "\"\"\"timecast top-level API\"\"\"\nfrom functools import partial\nfrom typing import Callable\nfrom typing import Tuple\nfrom typing import Union\n\nimport flax\nimport jax\nimport jax.numpy as jnp\nimport numpy as np\n\n\ndef _objective(x, y, loss_fn, model):\n \"\"\"Default objective function\"\"\"\n y_hat = model(x)\n return loss_fn(y, y_hat), y_hat\n\n\ndef tmap(\n X: Union[np.ndarray, Tuple[np.ndarray, ...]],\n Y: Union[np.ndarray, Tuple[np.ndarray, ...]],\n optimizer: flax.optim.base.Optimizer,\n loss_fn: Callable[[np.ndarray, np.ndarray], np.ndarray] = lambda true, pred: jnp.square(\n true - pred\n ).mean(),\n state: flax.nn.base.Collection = None,\n objective: Callable[\n [\n np.ndarray,\n np.ndarray,\n Callable[[np.ndarray, np.ndarray], np.ndarray],\n flax.nn.base.Model,\n ],\n Tuple[np.ndarray, np.ndarray],\n ] = None,\n):\n \"\"\"Take gradients steps performantly on one data item at a time\n Args:\n X: np.ndarray or tuple of np.ndarray of inputs\n Y: np.ndarray or tuple of np.ndarray of outputs\n optimizer: initialized optimizer\n loss_fn: loss function to compose where first arg is true value and\n second is pred\n state: state required by flax\n objective: function composing loss functions\n Returns:\n np.ndarray: result\n \"\"\"\n state = state or flax.nn.Collection()\n objective = objective or _objective\n\n def _tmap(optstate, xy):\n \"\"\"Helper function\"\"\"\n x, y = xy\n optimizer, state = optstate\n func = partial(objective, x, y, loss_fn)\n with flax.nn.stateful(state) as state:\n (loss, y_hat), grad = jax.value_and_grad(func, has_aux=True)(optimizer.target)\n return (optimizer.apply_gradient(grad), state), y_hat\n\n (optimizer, state), pred = jax.lax.scan(_tmap, (optimizer, state), (X, Y))\n return pred, optimizer, state", "_____no_output_____" ], [ "from timecast.learners import Linear\nfrom timecast.optim import GradientDescent", "_____no_output_____" ], [ "model_def = Linear.partial(features=1)\n_, params = model_def.init_by_shape(jax.random.PRNGKey(0), [(1, 10)])\nmodel = flax.nn.Model(model_def, params)\n\noptimizer_def = GradientDescent(learning_rate=1e-5)\noptimizer = optimizer_def.create(model)", "_____no_output_____" ], [ "X = np.random.rand(4, 10)\nY = np.random.rand(4)", "_____no_output_____" ], [ "pred, optimizer, state = tmap(X, Y, optimizer)", "_____no_output_____" ] ], [ [ "- Need to provide input/truth (X, Y)\n- Need to provide model, state (optimizer, state)\n- Need to provide update\n- Need to provide objective", "_____no_output_____" ] ] ]
[ "code", "markdown" ]
[ [ "code", "code", "code", "code", "code" ], [ "markdown" ] ]
d023fc5838f23bf41a3a3bd01dae6b2e34e24e90
10,756
ipynb
Jupyter Notebook
Part 5 - Confidence Intervals and Analysis of Linear Regression Model.ipynb
yesman89/predicting-nba-games
4d4c59fe82e8556fcc84627cf5da8f33ef9b251c
[ "MIT" ]
null
null
null
Part 5 - Confidence Intervals and Analysis of Linear Regression Model.ipynb
yesman89/predicting-nba-games
4d4c59fe82e8556fcc84627cf5da8f33ef9b251c
[ "MIT" ]
null
null
null
Part 5 - Confidence Intervals and Analysis of Linear Regression Model.ipynb
yesman89/predicting-nba-games
4d4c59fe82e8556fcc84627cf5da8f33ef9b251c
[ "MIT" ]
null
null
null
40.284644
321
0.409167
[ [ [ "library(tidyverse)\n# Read in the csv file\ndf <- read.csv(\"dfCR.csv\")", "_____no_output_____" ], [ "head(df)", "_____no_output_____" ], [ "# Change column names\ndf$TPP <- df$X3P.\ndf$FGP <- df$FG.\ndf$FTP <- df$FT.", "_____no_output_____" ], [ "# Remove unnessecary columns\ndrop_cols <- c(\"X3P.\", \"FG.\", \"FT.\")\ndf <- df %>% select(-one_of(drop_cols))", "_____no_output_____" ], [ "# Split data into a training, validation, and testing sets\nset.seed(1234)\n# Codes splits data and training set has 70% of the data\nind <- sort(sample(nrow(df), nrow(df)*.7, replace = F))\ntrain <- df[ind,]\nval_test <- df[-ind,]", "_____no_output_____" ], [ "# Use best model obtained in part 3\nfit <- lm(PTS ~ BLK + MP + PF + STL + TOV + TRB + TPP + FGP + FTP, data=train)", "_____no_output_____" ], [ "summary(fit)", "_____no_output_____" ], [ "confint(fit, level=0.95)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d023fe55ff0f0e6b1ae8a6c6e25c6deadb570b32
21,368
ipynb
Jupyter Notebook
notebooks/mog-eigval-dist.ipynb
LMescheder/TheNumericsOfGANs
68d915fc01608e7f585af853a2aabeacbfa2d53f
[ "MIT" ]
46
2017-11-02T02:52:22.000Z
2021-12-18T17:41:23.000Z
notebooks/mog-eigval-dist.ipynb
MJfadeaway/TheNumericsOfGANs
68d915fc01608e7f585af853a2aabeacbfa2d53f
[ "MIT" ]
2
2017-10-31T14:35:30.000Z
2021-11-18T03:30:29.000Z
notebooks/mog-eigval-dist.ipynb
MJfadeaway/TheNumericsOfGANs
68d915fc01608e7f585af853a2aabeacbfa2d53f
[ "MIT" ]
13
2017-10-17T06:51:30.000Z
2020-03-05T03:43:23.000Z
23.481319
131
0.496443
[ [ [ "# Consensus Optimization", "_____no_output_____" ], [ "This notebook contains the code for the toy experiment in the paper [The Numerics of GANs](https://arxiv.org/abs/1705.10461).", "_____no_output_____" ] ], [ [ "%load_ext autoreload\n%autoreload 2\nimport tensorflow as tf\nfrom tensorflow.contrib import slim\nimport numpy as np\nimport scipy as sp\nfrom scipy import stats\nfrom matplotlib import pyplot as plt\nimport sys, os\nfrom tqdm import tqdm_notebook\ntf.reset_default_graph()\n", "_____no_output_____" ], [ "def kde(mu, tau, bbox=[-5, 5, -5, 5], save_file=\"\", xlabel=\"\", ylabel=\"\", cmap='Blues'):\n values = np.vstack([mu, tau])\n kernel = sp.stats.gaussian_kde(values)\n\n fig, ax = plt.subplots()\n ax.axis(bbox)\n ax.set_aspect(abs(bbox[1]-bbox[0])/abs(bbox[3]-bbox[2]))\n ax.set_xlabel(xlabel)\n ax.set_ylabel(ylabel)\n plt.tick_params(\n axis='x', # changes apply to the x-axis\n which='both', # both major and minor ticks are affected\n bottom='off', # ticks along the bottom edge are off\n top='off', # ticks along the top edge are off\n labelbottom='off') # labels along the bottom edge are off\n plt.tick_params(\n axis='y', # changes apply to the x-axis\n which='both', # both major and minor ticks are affected\n left='off', # ticks along the bottom edge are off\n right='off', # ticks along the top edge are off\n labelleft='off') # labels along the bottom edge are off\n \n xx, yy = np.mgrid[bbox[0]:bbox[1]:300j, bbox[2]:bbox[3]:300j]\n positions = np.vstack([xx.ravel(), yy.ravel()])\n f = np.reshape(kernel(positions).T, xx.shape)\n cfset = ax.contourf(xx, yy, f, cmap=cmap)\n\n if save_file != \"\":\n plt.savefig(save_file, bbox_inches='tight')\n plt.close(fig)\n else:\n plt.show()\n \n \ndef complex_scatter(points, bbox=None, save_file=\"\", xlabel=\"real part\", ylabel=\"imaginary part\", cmap='Blues'):\n fig, ax = plt.subplots()\n\n if bbox is not None:\n ax.axis(bbox)\n\n ax.set_xlabel(xlabel)\n ax.set_ylabel(ylabel)\n\n xx = [p.real for p in points]\n yy = [p.imag for p in points]\n \n plt.plot(xx, yy, 'X')\n plt.grid()\n\n if save_file != \"\":\n plt.savefig(save_file, bbox_inches='tight')\n plt.close(fig)\n else:\n plt.show()", "_____no_output_____" ], [ "# Parameters\nlearning_rate = 1e-4\nreg_param = 10.\nbatch_size = 512\nz_dim = 16\nsigma = 0.01\nmethod = 'conopt'\ndivergence = 'standard'\noutdir = os.path.join('gifs', method)\nniter = 50000\nn_save = 500\nbbox = [-1.6, 1.6, -1.6, 1.6]\ndo_eigen = True", "_____no_output_____" ], [ "# Target distribution\nmus = np.vstack([np.cos(2*np.pi*k/8), np.sin(2*np.pi*k/8)] for k in range(batch_size))\nx_real = mus + sigma*tf.random_normal([batch_size, 2])", "_____no_output_____" ], [ "# Model\ndef generator_func(z):\n net = slim.fully_connected(z, 16)\n net = slim.fully_connected(net, 16)\n net = slim.fully_connected(net, 16)\n net = slim.fully_connected(net, 16)\n x = slim.fully_connected(net, 2, activation_fn=None)\n return x\n \n\ndef discriminator_func(x):\n # Network\n net = slim.fully_connected(x, 16)\n net = slim.fully_connected(net, 16)\n net = slim.fully_connected(net, 16)\n net = slim.fully_connected(net, 16)\n logits = slim.fully_connected(net, 1, activation_fn=None)\n out = tf.squeeze(logits, -1)\n\n return out\n\ngenerator = tf.make_template('generator', generator_func)\ndiscriminator = tf.make_template('discriminator', discriminator_func)\n", "_____no_output_____" ], [ "z = tf.random_normal([batch_size, z_dim])\nx_fake = generator(z)\nd_out_real = discriminator(x_real)\nd_out_fake = discriminator(x_fake)\n\n# 
Loss\nif divergence == 'standard':\n d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(\n logits=d_out_real, labels=tf.ones_like(d_out_real)\n ))\n d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(\n logits=d_out_fake, labels=tf.zeros_like(d_out_fake)\n ))\n d_loss = d_loss_real + d_loss_fake\n\n g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(\n logits=d_out_fake, labels=tf.ones_like(d_out_fake)\n ))\nelif divergence == 'JS':\n d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(\n logits=d_out_real, labels=tf.ones_like(d_out_real)\n ))\n d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(\n logits=d_out_fake, labels=tf.zeros_like(d_out_fake)\n ))\n d_loss = d_loss_real + d_loss_fake\n\n g_loss = -d_loss\nelif divergence == 'indicator':\n d_loss = tf.reduce_mean(d_out_real - d_out_fake)\n g_loss = -d_loss \nelse:\n raise NotImplementedError", "_____no_output_____" ], [ "g_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='generator')\nd_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='discriminator')\noptimizer = tf.train.RMSPropOptimizer(learning_rate, use_locking=True)\n# optimizer = tf.train.GradientDescentOptimizer(learning_rate, use_locking=True)\n\n# Compute gradients\nd_grads = tf.gradients(d_loss, d_vars)\ng_grads = tf.gradients(g_loss, g_vars)\n# Merge variable and gradient lists\nvariables = d_vars + g_vars\ngrads = d_grads + g_grads\n\n \nif method == 'simga':\n apply_vec = list(zip(grads, variables))\nelif method == 'conopt':\n # Reguliarizer\n reg = 0.5 * sum(\n tf.reduce_sum(tf.square(g)) for g in grads\n )\n # Jacobian times gradiant\n Jgrads = tf.gradients(reg, variables)\n \n apply_vec = [\n (g + reg_param * Jg, v)\n for (g, Jg, v) in zip(grads, Jgrads, variables) if Jg is not None\n ]\n \nelse:\n raise NotImplementedError\n\nwith tf.control_dependencies([g for (g, v) in apply_vec]):\n train_op = optimizer.apply_gradients(apply_vec)", "_____no_output_____" ], [ "if do_eigen:\n jacobian_rows = []\n g_grads = tf.gradients(g_loss, g_vars)\n g_grads = [-g for g in g_grads]\n d_grads = tf.gradients(d_loss, d_vars)\n d_grads = [-g for g in d_grads]\n\n for g in tqdm_notebook(g_grads + d_grads):\n g = tf.reshape(g, [-1])\n len_g = int(g.get_shape()[0])\n for i in tqdm_notebook(range(len_g)):\n g_row = tf.gradients(g[i], g_vars)\n d_row = tf.gradients(g[i], d_vars)\n jacobian_rows.append(g_row + d_row)", "_____no_output_____" ], [ "def get_J(J_rows):\n J_rows_linear = [np.concatenate([g.flatten() for g in row]) for row in J_rows]\n J = np.array(J_rows_linear)\n return J\n\ndef process_J(J, save_file, bbox=None):\n eig, eigv = np.linalg.eig(J)\n eig_real = np.array([p.real for p in eig])\n complex_scatter(eig, save_file=save_file, bbox=bbox)\n\n \ndef process_J_conopt(J, reg, save_file, bbox=None):\n J2 = J - reg * np.dot(J.T, J)\n eig, eigv = np.linalg.eig(J2)\n eig_real = np.array([p.real for p in eig])\n complex_scatter(eig, save_file=save_file, bbox=bbox)\n", "_____no_output_____" ], [ "sess = tf.InteractiveSession()\nsess.run(tf.global_variables_initializer())\n", "_____no_output_____" ], [ "# Real distribution\nx_out = np.concatenate([sess.run(x_real) for i in range(5)], axis=0)\nkde(x_out[:, 0], x_out[:, 1], bbox=bbox, cmap='Reds', save_file='gt.png')", "_____no_output_____" ], [ "if not os.path.exists(outdir):\n os.makedirs(outdir)\n \neigrawdir = os.path.join(outdir, 'eigs_raw')\nif not os.path.exists(eigrawdir):\n os.makedirs(eigrawdir)\n \neigdir = 
os.path.join(outdir, 'eigs')\nif not os.path.exists(eigdir):\n os.makedirs(eigdir)\n\n \neigdir_conopt = os.path.join(outdir, 'eigs_conopt')\nif not os.path.exists(eigdir_conopt):\n os.makedirs(eigdir_conopt)\n \nztest = [np.random.randn(batch_size, z_dim) for i in range(5)]\nprogress = tqdm_notebook(range(niter))\n\nif do_eigen:\n J_rows = sess.run(jacobian_rows)\n J = get_J(J_rows)\n\nfor i in progress:\n sess.run(train_op)\n d_loss_out, g_loss_out = sess.run([d_loss, g_loss])\n \n if do_eigen and i % 500 == 0:\n J[:, :] = 0.\n for k in range(10):\n J_rows = sess.run(jacobian_rows)\n J += get_J(J_rows)/10.\n with open(os.path.join(eigrawdir, 'J_%d.npz' % i), 'wb') as f:\n np.save(f, J)\n\n progress.set_description('d_loss = %.4f, g_loss =%.4f' % (d_loss_out, g_loss_out))\n if i % n_save == 0:\n x_out = np.concatenate([sess.run(x_fake, feed_dict={z: zt}) for zt in ztest], axis=0)\n kde(x_out[:, 0], x_out[:, 1], bbox=bbox, save_file=os.path.join(outdir,'%d.png' % i))", "\n" ], [ "import re\nimport glob\nimport matplotlib\nmatplotlib.rcParams.update({'font.size': 16})\n\npattern = r'J_(?P<it>0).npz'\n\nbbox = [-3.5, 0.75, -1.2, 1.2]\n\n\neigrawdir = os.path.join(outdir, 'eigs_raw')\nif not os.path.exists(eigrawdir):\n os.makedirs(eigrawdir)\n \neigdir = os.path.join(outdir, 'eigs')\nif not os.path.exists(eigdir):\n os.makedirs(eigdir)\n\n \neigdir_conopt = os.path.join(outdir, 'eigs_conopt')\nif not os.path.exists(eigdir_conopt):\n os.makedirs(eigdir_conopt)\n \nout_files = glob.glob(os.path.join(eigrawdir, '*.npz'))\nmatches = [re.fullmatch(pattern, os.path.basename(s)) for s in out_files]\nmatches = [m for m in matches if m is not None]\n\nfor m in tqdm_notebook(matches):\n it = int(m.group('it'))\n J = np.load(os.path.join(eigrawdir, m.group()))\n process_J(J, save_file=os.path.join(eigdir, '%d.png' % it), bbox=bbox)\n process_J_conopt(J, reg=reg_param, save_file=os.path.join(eigdir_conopt, '%d.png' % it), bbox=bbox)", "\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d0240247c03a25455cc36262a2d7cdb33e3b5360
32,782
ipynb
Jupyter Notebook
TensorFlowIntro/.ipynb_checkpoints/TensorFlowIntroduction-checkpoint.ipynb
dschmoeller/03TrafficSignClassifierCNN
d5d7b638d94adb4d0156d353519598d6da276dfd
[ "MIT" ]
null
null
null
TensorFlowIntro/.ipynb_checkpoints/TensorFlowIntroduction-checkpoint.ipynb
dschmoeller/03TrafficSignClassifierCNN
d5d7b638d94adb4d0156d353519598d6da276dfd
[ "MIT" ]
null
null
null
TensorFlowIntro/.ipynb_checkpoints/TensorFlowIntroduction-checkpoint.ipynb
dschmoeller/03TrafficSignClassifierCNN
d5d7b638d94adb4d0156d353519598d6da276dfd
[ "MIT" ]
null
null
null
32.880642
307
0.556006
[ [ [ "# Getting Started with Tensorflow", "_____no_output_____" ] ], [ [ "import tensorflow as tf\n\n# Create TensorFlow object called tensor\nhello_constant = tf.constant('Hello World!')\n\nwith tf.Session() as sess:\n # Run the tf.constant operation in the session\n output = sess.run(hello_constant)\n print(output); \n", "_____no_output_____" ], [ "A = tf.constant(1234)\nB = tf.constant([123, 456, 789])\nC = tf.constant([ [123, 145, 789], [222, 333, 444] ])\nprint(A)\n\n# A \"TensorFlow Session\", as shown above, is an environment for running a graph. \n# The session is in charge of allocating the operations to GPU(s) and/or CPU(s), including remote machines. \n# Let’s see how you use it.\n\nwith tf.Session() as sess:\n output = sess.run(A)\n print(output)", "_____no_output_____" ], [ "# Sadly you can’t just set x to your dataset and put it in TensorFlow, \n# because over time you'll want your TensorFlow model to take in different datasets with different parameters. \n# You need tf.placeholder()!\n# tf.placeholder() returns a tensor that gets its value from data passed to the tf.session.run() function, \n# allowing you to set the input right before the session runs.\n# Use the feed_dict parameter in tf.session.run() to set the placeholder tensor. \n# The above example shows the tensor x being set to the string \"Hello, world\". \n# It's also possible to set more than one tensor using feed_dict as shown below.\n\nx = tf.placeholder(tf.string)\nwith tf.Session() as sess: \n output = sess.run(x, feed_dict={x: \"Hello World\"})\n \nx = tf.placeholder(tf.string)\ny = tf.placeholder(tf.int32)\nz = tf.placeholder(tf.float32)\nwith tf.Session() as sess:\n output = sess.run(x, feed_dict={x: 'Test String', y: 123, z: 45.67})", "_____no_output_____" ], [ "# Applying math\ntf.multiply()\ntf.subtract()\ntf.add()\n\n# Sometimes the inputs have to the casted in that regard\ntf.cast(tf.const(1), tf.float64)\n\n# constants and placeholder are not mutable!!!\n# --> variables: tf.Variable()\n# --> needs to be initialized by tf.global_variables_initializer()\n# --> good practice is to randomly initialze weights: tf_truncated_normal()\n# Example for classification: \nweights = tf.Variable(tf.truncated_normal((n_features, n_labels)))\n\n# Zero method to initialize any variable with zeros (e.g the bias terms)\nbias = tf.Variable(tf.zeros(n_labels))\n\n# Multiplication for matrices: tf.matmul()\n\n# Softmax function \ntf.nn.softmax()\n\n# Arbitrary dimension placeholders\n# Features and Labels (e.g. 
for Neural Networks)\nfeatures = tf.placeholder(tf.float32, [None, n_input])\nlabels = tf.placeholder(tf.float32, [None, n_classes])\n\n# Relu function (Activation function) \ntf.nn.relu()\n\n# Sticking hidden layers together\n# Hidden Layer with ReLU activation function\nhidden_layer = tf.add(tf.matmul(features, hidden_weights), hidden_biases)\nhidden_layer = tf.nn.relu(hidden_layer)\noutput = tf.add(tf.matmul(hidden_layer, output_weights), output_biases)\n\n# Variables have to be initialized as well in order to use them in the session\ntf.global_variables_initializer()", "_____no_output_____" ] ], [ [ "# Build a Neural Network with Tensorflow", "_____no_output_____" ] ], [ [ "# Coding example for building a neural network with tensorflow\n# Quiz Solution\nimport tensorflow as tf\n\noutput = None\nhidden_layer_weights = [\n [0.1, 0.2, 0.4],\n [0.4, 0.6, 0.6],\n [0.5, 0.9, 0.1],\n [0.8, 0.2, 0.8]]\nout_weights = [\n [0.1, 0.6],\n [0.2, 0.1],\n [0.7, 0.9]]\n\n# Weights and biases\nweights = [\n tf.Variable(hidden_layer_weights),\n tf.Variable(out_weights)]\nbiases = [\n tf.Variable(tf.zeros(3)),\n tf.Variable(tf.zeros(2))]\n\n# Input\nfeatures = tf.Variable([[1.0, 2.0, 3.0, 4.0], [-1.0, -2.0, -3.0, -4.0], [11.0, 12.0, 13.0, 14.0]])\n\n# TODO: Create Model\nhidden_layer = tf.add(tf.matmul(features, weights[0]), biases[0])\nhidden_layer = tf.nn.relu(hidden_layer)\nlogits = tf.add(tf.matmul(hidden_layer, weights[1]), biases[1])\n\n# TODO: save and print session results on variable output\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n output = sess.run(logits)\n print(output)\n\n", "_____no_output_____" ] ], [ [ "# Deep Neural Networks in Tensorflow", "_____no_output_____" ] ], [ [ "# For stacking muliple layers --> Deep NN \n# Store layers weight & bias\nweights = {\n 'hidden_layer': tf.Variable(tf.random_normal([n_input, n_hidden_layer])),\n 'out': tf.Variable(tf.random_normal([n_hidden_layer, n_classes]))\n}\nbiases = {\n 'hidden_layer': tf.Variable(tf.random_normal([n_hidden_layer])),\n 'out': tf.Variable(tf.random_normal([n_classes]))\n}", "_____no_output_____" ], [ "# Example for input an image\n\n#The MNIST data is made up of 28px by 28px images with a single channel. 
\n# The tf.reshape() function above reshapes the 28px by 28px matrices in x into row vectors of 784px.\n\n# tf Graph input\nx = tf.placeholder(\"float\", [None, 28, 28, 1])\ny = tf.placeholder(\"float\", [None, n_classes])\n\nx_flat = tf.reshape(x, [-1, n_input])\n\n", "_____no_output_____" ], [ "# Builidng the model\n# Hidden layer with RELU activation\nlayer_1 = tf.add(tf.matmul(x_flat, weights['hidden_layer']),\\\n biases['hidden_layer'])\nlayer_1 = tf.nn.relu(layer_1)\n# Output layer with linear activation\nlogits = tf.add(tf.matmul(layer_1, weights['out']), biases['out'])", "_____no_output_____" ], [ "# Define the optimizer and the cost function\n# Define loss and optimizer\ncost = tf.reduce_mean(\\\n tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)\\\n .minimize(cost)", "_____no_output_____" ], [ "# How to run the actual session in TF\n# Initializing the variables\ninit = tf.global_variables_initializer()\n\n\n# Launch the graph\nwith tf.Session() as sess:\n sess.run(init)\n # Training cycle\n for epoch in range(training_epochs):\n total_batch = int(mnist.train.num_examples/batch_size)\n # Loop over all batches\n for i in range(total_batch):\n batch_x, batch_y = mnist.train.next_batch(batch_size)\n # Run optimization op (backprop) and cost op (to get loss value)\n sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})", "_____no_output_____" ] ], [ [ "# Saving Variables and trained Models and load them back\nYou save the particular **session** in a file ", "_____no_output_____" ] ], [ [ "import tensorflow as tf\n\n# The file path to save the data\nsave_file = './model.ckpt'\n\n# Two Tensor Variables: weights and bias\nweights = tf.Variable(tf.truncated_normal([2, 3]))\nbias = tf.Variable(tf.truncated_normal([3]))\n\n# Class used to save and/or restore Tensor Variables\nsaver = tf.train.Saver()\n\nwith tf.Session() as sess:\n # Initialize all the Variables\n sess.run(tf.global_variables_initializer())\n\n # Show the values of weights and bias\n print('Weights:')\n print(sess.run(weights))\n print('Bias:')\n print(sess.run(bias))\n\n # Save the model\n saver.save(sess, save_file)", "_____no_output_____" ], [ "# Loading the variables back\n\n# Remove the previous weights and bias\ntf.reset_default_graph()\n\n# Two Variables: weights and bias\nweights = tf.Variable(tf.truncated_normal([2, 3]))\nbias = tf.Variable(tf.truncated_normal([3]))\n\n# Class used to save and/or restore Tensor Variables\nsaver = tf.train.Saver()\n\nwith tf.Session() as sess:\n # Load the weights and bias\n saver.restore(sess, save_file)\n\n # Show the values of weights and bias\n print('Weight:')\n print(sess.run(weights))\n print('Bias:')\n print(sess.run(bias))", "_____no_output_____" ] ], [ [ "### ... same works for models. 
Just train a NN like shown above and save the session afterwards", "_____no_output_____" ], [ "# Dropout for regularization in Tensorflow", "_____no_output_____" ] ], [ [ "# In tensorflow, dropout is just another \"layer\" in the model\n#During training, a good starting value for keep_prob is 0.5.\n#During testing, use a keep_prob value of 1.0 to keep all units and maximize the power of the model.\n\nkeep_prob = tf.placeholder(tf.float32) # probability to keep units\n\nhidden_layer = tf.add(tf.matmul(features, weights[0]), biases[0])\nhidden_layer = tf.nn.relu(hidden_layer)\nhidden_layer = tf.nn.dropout(hidden_layer, keep_prob)\n\nlogits = tf.add(tf.matmul(hidden_layer, weights[1]), biases[1])", "_____no_output_____" ] ], [ [ "# Convolutinal Neural Network (CNN)", "_____no_output_____" ] ], [ [ "# Note the output shape of conv will be [1, 16, 16, 20]. \n# It's 4D to account for batch size, but more importantly, it's not [1, 14, 14, 20]. \n# This is because the padding algorithm TensorFlow uses is not exactly the same as the one above. \n# An alternative algorithm is to switch padding from 'SAME' to 'VALID'", "_____no_output_____" ], [ "input = tf.placeholder(tf.float32, (None, 32, 32, 3))\nfilter_weights = tf.Variable(tf.truncated_normal((8, 8, 3, 20))) # (height, width, input_depth, output_depth)\nfilter_bias = tf.Variable(tf.zeros(20))\nstrides = [1, 2, 2, 1] # (batch, height, width, depth)\npadding = 'SAME'\nconv = tf.nn.conv2d(input, filter_weights, strides, padding) + filter_bias", "_____no_output_____" ] ], [ [ "## Example code for constructing a CNN", "_____no_output_____" ] ], [ [ "# Load data set \n# Batch, scale and one-hot-encode it\n# Set Parameters\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\".\", one_hot=True, reshape=False)\n\nimport tensorflow as tf\n\n# Parameters\nlearning_rate = 0.00001\nepochs = 10\nbatch_size = 128\n\n# Number of samples to calculate validation and accuracy\n# Decrease this if you're running out of memory to calculate accuracy\ntest_valid_size = 256\n\n# Network Parameters\nn_classes = 10 # MNIST total classes (0-9 digits)\ndropout = 0.75 # Dropout, probability to keep units", "_____no_output_____" ], [ "# Define and store layers and biases\n# Store layers weight & bias\nweights = {\n 'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),\n 'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),\n 'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),\n 'out': tf.Variable(tf.random_normal([1024, n_classes]))}\n\nbiases = {\n 'bc1': tf.Variable(tf.random_normal([32])),\n 'bc2': tf.Variable(tf.random_normal([64])),\n 'bd1': tf.Variable(tf.random_normal([1024])),\n 'out': tf.Variable(tf.random_normal([n_classes]))}", "_____no_output_____" ], [ "# Apply Convolution (i.e. create a convolution layer)\n# The tf.nn.conv2d() function computes the convolution against weight W\ndef conv2d(x, W, b, strides=1):\n x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')\n x = tf.nn.bias_add(x, b)\n return tf.nn.relu(x)\n\n#In TensorFlow, strides is an array of 4 elements; the first element in this array indicates the stride for batch \n#and last element indicates stride for features. \n#It's good practice to remove the batches or features you want to skip from the data set rather than use a stride to skip them. \n#You can always set the first and last element to 1 in strides in order to use all batches and features.\n\n#The middle two elements are the strides for height and width respectively. 
\n#I've mentioned stride as one number because you usually have a square stride where height = width. \n#When someone says they are using a stride of 3, they usually mean tf.nn.conv2d(x, W, strides=[1, 3, 3, 1])", "_____no_output_____" ], [ "# Max Pooling\ndef maxpool2d(x, k=2):\n return tf.nn.max_pool(\n x,\n ksize=[1, k, k, 1],\n strides=[1, k, k, 1],\n padding='SAME')\n\n# The tf.nn.max_pool() function does exactly what you would expect, \n# it performs max pooling with the ksize parameter as the size of the filter.\n", "_____no_output_____" ], [ "# Sticking the model together\n\ndef conv_net(x, weights, biases, dropout):\n # Layer 1 - 28*28*1 to 14*14*32\n conv1 = conv2d(x, weights['wc1'], biases['bc1'])\n conv1 = maxpool2d(conv1, k=2)\n\n # Layer 2 - 14*14*32 to 7*7*64\n conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])\n conv2 = maxpool2d(conv2, k=2)\n\n # Fully connected layer - 7*7*64 to 1024\n fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]]) # The reshape step is to flatten the filter layers \n fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])\n fc1 = tf.nn.relu(fc1)\n fc1 = tf.nn.dropout(fc1, dropout)\n\n # Output Layer - class prediction - 1024 to 10\n out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])\n return out\n", "_____no_output_____" ], [ "# Run the session in tensorflow\n\n# tf Graph input\nx = tf.placeholder(tf.float32, [None, 28, 28, 1])\ny = tf.placeholder(tf.float32, [None, n_classes])\nkeep_prob = tf.placeholder(tf.float32)\n\n# Model\nlogits = conv_net(x, weights, biases, keep_prob)\n\n# Define loss and optimizer\ncost = tf.reduce_mean(\\\n tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)\\\n .minimize(cost)\n\n# Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))\n\n# Initializing the variables\ninit = tf. 
global_variables_initializer()\n\n# Launch the graph\nwith tf.Session() as sess:\n sess.run(init)\n\n for epoch in range(epochs):\n for batch in range(mnist.train.num_examples//batch_size):\n batch_x, batch_y = mnist.train.next_batch(batch_size)\n sess.run(optimizer, feed_dict={\n x: batch_x,\n y: batch_y,\n keep_prob: dropout})\n\n # Calculate batch loss and accuracy\n loss = sess.run(cost, feed_dict={\n x: batch_x,\n y: batch_y,\n keep_prob: 1.})\n valid_acc = sess.run(accuracy, feed_dict={\n x: mnist.validation.images[:test_valid_size],\n y: mnist.validation.labels[:test_valid_size],\n keep_prob: 1.})\n\n print('Epoch {:>2}, Batch {:>3} -'\n 'Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(\n epoch + 1,\n batch + 1,\n loss,\n valid_acc))\n\n # Calculate Test Accuracy\n test_acc = sess.run(accuracy, feed_dict={\n x: mnist.test.images[:test_valid_size],\n y: mnist.test.labels[:test_valid_size],\n keep_prob: 1.})\n print('Testing Accuracy: {}'.format(test_acc))", "_____no_output_____" ] ], [ [ "# LeNet Architecture", "_____no_output_____" ], [ "## Load Data\n\nLoad the MNIST data, which comes pre-loaded with TensorFlow.\n\nYou do not need to modify this section.", "_____no_output_____" ] ], [ [ "from tensorflow.examples.tutorials.mnist import input_data\n\nmnist = input_data.read_data_sets(\"MNIST_data/\", reshape=False)\nX_train, y_train = mnist.train.images, mnist.train.labels\nX_validation, y_validation = mnist.validation.images, mnist.validation.labels\nX_test, y_test = mnist.test.images, mnist.test.labels\n\nassert(len(X_train) == len(y_train))\nassert(len(X_validation) == len(y_validation))\nassert(len(X_test) == len(y_test))\n\nprint()\nprint(\"Image Shape: {}\".format(X_train[0].shape))\nprint()\nprint(\"Training Set: {} samples\".format(len(X_train)))\nprint(\"Validation Set: {} samples\".format(len(X_validation)))\nprint(\"Test Set: {} samples\".format(len(X_test)))", "_____no_output_____" ] ], [ [ "## Split up data into training, validation and test set\n", "_____no_output_____" ] ], [ [ "import numpy as np\n\n# Pad images with 0s\nX_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')\nX_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')\nX_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')\n \nprint(\"Updated Image Shape: {}\".format(X_train[0].shape))", "_____no_output_____" ] ], [ [ "## Visualize Data\n\nView a sample from the dataset.\n\nYou do not need to modify this section.", "_____no_output_____" ] ], [ [ "import random\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nindex = random.randint(0, len(X_train))\nimage = X_train[index].squeeze()\n\nplt.figure(figsize=(1,1))\nplt.imshow(image, cmap=\"gray\")\nprint(y_train[index])", "_____no_output_____" ] ], [ [ "## Preprocess Data\n\nShuffle the training data.\n\nYou do not need to modify this section.", "_____no_output_____" ] ], [ [ "from sklearn.utils import shuffle\n\nX_train, y_train = shuffle(X_train, y_train)", "_____no_output_____" ] ], [ [ "## Setup TensorFlow\nThe `EPOCH` and `BATCH_SIZE` values affect the training speed and model accuracy.\n", "_____no_output_____" ] ], [ [ "import tensorflow as tf\n\nEPOCHS = 10\nBATCH_SIZE = 128", "_____no_output_____" ] ], [ [ "### Input\nThe LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. 
Since MNIST images are grayscale, C is 1 in this case.\n\n### Architecture\n**Layer 1: Convolutional.** The output shape should be 28x28x6.\n\n**Activation.** Your choice of activation function.\n\n**Pooling.** The output shape should be 14x14x6.\n\n**Layer 2: Convolutional.** The output shape should be 10x10x16.\n\n**Activation.** Your choice of activation function.\n\n**Pooling.** The output shape should be 5x5x16.\n\n**Flatten.** Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do is by using `tf.contrib.layers.flatten`, which is already imported for you.\n\n**Layer 3: Fully Connected.** This should have 120 outputs.\n\n**Activation.** Your choice of activation function.\n\n**Layer 4: Fully Connected.** This should have 84 outputs.\n\n**Activation.** Your choice of activation function.\n\n**Layer 5: Fully Connected (Logits).** This should have 10 outputs.\n\n### Output\nReturn the result of the 2nd fully connected layer.", "_____no_output_____" ] ], [ [ "from tensorflow.contrib.layers import flatten\n\ndef LeNet(x): \n # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer\n mu = 0\n sigma = 0.1\n \n weights = {\n 'wc1': tf.Variable(tf.random_normal([5, 5, 1, 6])),\n 'wc2': tf.Variable(tf.random_normal([5, 5, 6, 16])),\n 'wd1': tf.Variable(tf.random_normal([400, 120])),\n 'wd2': tf.Variable(tf.random_normal([120, 84])),\n 'out': tf.Variable(tf.random_normal([84, 10]))}\n\n biases = {\n 'bc1': tf.Variable(tf.random_normal([6])),\n 'bc2': tf.Variable(tf.random_normal([16])),\n 'bd1': tf.Variable(tf.random_normal([120])),\n 'bd2': tf.Variable(tf.random_normal([84])),\n 'out': tf.Variable(tf.random_normal([10]))}\n \n \n # TODO: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.\n layer1 = tf.nn.conv2d(x, weights['wc1'], strides=[1, 1, 1, 1], padding=\"VALID\")\n layer1 = tf.nn.bias_add(layer1, biases['bc1'])\n # TODO: Activation.\n layer1 = tf.nn.relu(layer1)\n # TODO: Pooling. Input = 28x28x6. Output = 14x14x6.\n layer1 = tf.nn.max_pool(layer1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding=\"SAME\")\n \n # TODO: Layer 2: Convolutional. Output = 10x10x16.\n layer2 = tf.nn.conv2d(layer1, weights['wc2'], strides=[1, 1, 1, 1], padding=\"VALID\")\n layer2 = tf.nn.bias_add(layer2, biases['bc2'])\n # TODO: Activation.\n layer2 = tf.nn.relu(layer2)\n # TODO: Pooling. Input = 10x10x16. Output = 5x5x16.\n layer2 = tf.nn.max_pool(layer2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding=\"SAME\")\n \n # TODO: Flatten. Input = 5x5x16. Output = 400.\n flattenedLayer2 = tf.contrib.layers.flatten(layer2)\n \n # TODO: Layer 3: Fully Connected. Input = 400. Output = 120.\n layer3 = tf.add(tf.matmul(flattenedLayer2, weights['wd1']), biases['bd1'])\n # TODO: Activation.\n layer3 = tf.nn.relu(layer3)\n \n # TODO: Layer 4: Fully Connected. Input = 120. Output = 84.\n layer4 = tf.add(tf.matmul(layer3, weights['wd2']), biases['bd2'])\n # TODO: Activation.\n layer4 = tf.nn.relu(layer4)\n \n # TODO: Layer 5: Fully Connected. Input = 84. 
Output = 10.\n logits = tf.add(tf.matmul(layer4, weights['out']), biases['out'])\n \n return logits", "_____no_output_____" ] ], [ [ "## Features and Labels\nTrain LeNet to classify [MNIST](http://yann.lecun.com/exdb/mnist/) data.\n\n`x` is a placeholder for a batch of input images.\n`y` is a placeholder for a batch of output labels.\n", "_____no_output_____" ] ], [ [ "x = tf.placeholder(tf.float32, (None, 32, 32, 1))\ny = tf.placeholder(tf.int32, (None))\none_hot_y = tf.one_hot(y, 10)", "_____no_output_____" ] ], [ [ "## Training Pipeline\nCreate a training pipeline that uses the model to classify MNIST data.\n", "_____no_output_____" ] ], [ [ "rate = 0.001\n\nlogits = LeNet(x)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)\nloss_operation = tf.reduce_mean(cross_entropy)\noptimizer = tf.train.AdamOptimizer(learning_rate = rate)\ntraining_operation = optimizer.minimize(loss_operation)", "_____no_output_____" ] ], [ [ "## Model Evaluation\nEvaluate how well the loss and accuracy of the model for a given dataset.\n", "_____no_output_____" ] ], [ [ "correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))\naccuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\nsaver = tf.train.Saver()\n\ndef evaluate(X_data, y_data):\n num_examples = len(X_data)\n total_accuracy = 0\n sess = tf.get_default_session()\n for offset in range(0, num_examples, BATCH_SIZE):\n batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]\n accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})\n total_accuracy += (accuracy * len(batch_x))\n return total_accuracy / num_examples", "_____no_output_____" ] ], [ [ "## Train the Model\nRun the training data through the training pipeline to train the model.\n\nBefore each epoch, shuffle the training set.\n\nAfter each epoch, measure the loss and accuracy of the validation set.\n\nSave the model after training.", "_____no_output_____" ] ], [ [ "with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n num_examples = len(X_train)\n \n print(\"Training...\")\n print()\n for i in range(EPOCHS):\n X_train, y_train = shuffle(X_train, y_train)\n for offset in range(0, num_examples, BATCH_SIZE):\n end = offset + BATCH_SIZE\n batch_x, batch_y = X_train[offset:end], y_train[offset:end]\n sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})\n \n validation_accuracy = evaluate(X_validation, y_validation)\n print(\"EPOCH {} ...\".format(i+1))\n print(\"Validation Accuracy = {:.3f}\".format(validation_accuracy))\n print()\n \n saver.save(sess, './lenet')\n print(\"Model saved\")", "_____no_output_____" ] ], [ [ "## Evaluate the Model (on the test set)\nOnce you are completely satisfied with your model, evaluate the performance of the model on the test set.\n\nBe sure to only do this once!\n\nIf you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.", "_____no_output_____" ] ], [ [ "with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('.'))\n\n test_accuracy = evaluate(X_test, y_test)\n print(\"Test Accuracy = {:.3f}\".format(test_accuracy))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
d0240fdced14f4f68fd4fbcf17819cac2f847eae
9,915
ipynb
Jupyter Notebook
ipynbs/reshape_demo.ipynb
zbytes/fsqs-tips-tricks-notes
cb56832646f83f94cfec553d314e8fce8ed73b94
[ "Apache-2.0" ]
2
2021-09-11T02:10:52.000Z
2021-09-11T16:24:01.000Z
ipynbs/reshape_demo.ipynb
zbytes/fsqs-tips-tricks-notes
cb56832646f83f94cfec553d314e8fce8ed73b94
[ "Apache-2.0" ]
null
null
null
ipynbs/reshape_demo.ipynb
zbytes/fsqs-tips-tricks-notes
cb56832646f83f94cfec553d314e8fce8ed73b94
[ "Apache-2.0" ]
null
null
null
39.035433
1,118
0.493898
[ [ [ "<a href=\"https://colab.research.google.com/github/bhuwanupadhyay/codes/blob/main/ipynbs/reshape_demo.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "pip install pydicom", "Collecting pydicom\n Downloading pydicom-2.2.2-py3-none-any.whl (2.0 MB)\n\u001b[K |████████████████████████████████| 2.0 MB 25.5 MB/s \n\u001b[?25hInstalling collected packages: pydicom\nSuccessfully installed pydicom-2.2.2\n" ], [ "# Import tensorflow\nimport logging\n\nimport tensorflow as tf\nimport keras.backend as K\n\n# Helper libraries\nimport math\nimport numpy as np\nimport pandas as pd\nimport pydicom\nimport os\nimport sys\nimport time\n\n# Imports for dataset manipulation\nfrom sklearn.model_selection import train_test_split\nfrom keras.preprocessing.image import ImageDataGenerator\n\n# Improve progress bar display\nimport tqdm\nimport tqdm.auto\n\ntqdm.tqdm = tqdm.auto.tqdm\n\n#tf.enable_eager_execution() #comment this out if causing errors\nlogger = tf.get_logger()\nlogger.setLevel(logging.DEBUG)\n\n\n### SET MODEL CONFIGURATIONS ###\n# Data Loading\nCSV_PATH = 'label_data/CCC_clean.csv'\nIMAGE_BASE_PATH = './data/'\ntest_size_percent = 0.15 # percent of total data reserved for testing\n\nprint(IMAGE_BASE_PATH)\n\n# Data Augmentation\nmirror_im = False\n\n# Loss\nlambda_coord = 5\nepsilon = 0.00001\n\n# Learning\nstep_size = 0.00001\nBATCH_SIZE = 5\nnum_epochs = 1\n\n# Saving\nshape_path = 'trained_model/model_shape.json'\nweight_path = 'trained_model/model_weights.h5'\n\n# TensorBoard\ntb_graph = False\ntb_update_freq = 'batch'\n\n### GET THE DATASET AND PREPROCESS IT ###\n\nprint(\"Loading and processing data\\n\")\n\ndata_frame = pd.read_csv(CSV_PATH)\n\n\"\"\"\nConstruct numpy ndarrays from the loaded csv to use as training\nand testing datasets.\n\"\"\"\n# zip all points for each image label together into a tuple\npoints = zip(data_frame['start_x'], data_frame['start_y'],\n data_frame['end_x'], data_frame['end_y'])\nimg_paths = data_frame['imgPath']\n\ndef path_to_image(path):\n \"\"\"\n Load a matrix of pixel values from the DICOM image stored at the\n input path.\n\n @param path - string, relative path (from IMAGE_BASE_PATH) to\n a DICOM file\n @return image - numpy ndarray (int), 2D matrix of pixel\n values of the image loaded from path\n \"\"\"\n # load image from path as numpy array\n image = pydicom.dcmread(os.path.join(IMAGE_BASE_PATH, path)).pixel_array\n return image\n\n\n# normalize dicom image pixel values to 0-1 range\ndef normalize_image(img):\n \"\"\"\n Normalize the pixel values in img to be withing the range\n of 0 to 1.\n\n @param img - numpy ndarray, 2D matrix of pixel values\n @return img - numpy ndarray (float), 2D matrix of pixel values, every\n element is valued between 0 and 1 (inclusive)\n \"\"\"\n img = img.astype(np.float32)\n img += abs(np.amin(img)) # account for negatives\n img /= np.amax(img)\n return img\n\n\n# normalize the ground truth bounding box labels wrt image dimensions\ndef normalize_points(points):\n \"\"\"\n Normalize values in points to be within the range of 0 to 1.\n\n @param points - 1x4 tuple, elements valued in the range of 0\n 512 (inclusive). 
This is known from the nature\n of the dataset used in this program\n @return - 1x4 numpy ndarray (float), elements valued in range\n 0 to 1 (inclusive)\n \"\"\"\n imDims = 512.0 # each image in our dataset is 512x512\n points = list(points)\n for i in range(len(points)):\n points[i] /= imDims\n return np.array(points).astype(np.float32)\n\n\n\"\"\"\nConvert the numpy array of paths to the DICOM images to pixel\nmatrices that have been normalized to a 0-1 range.\nAlso normalize the bounding box labels to make it easier for\nthe model to predict on them.\n\"\"\"\n\n# apply preprocessing functions\npoints = map(normalize_points, points)\nimgs = map(path_to_image, img_paths)\nimgs = map(normalize_image, imgs)\n\nprint(list(imgs))\n\n# reshape input image data to 4D shape (as expected by the model)\n# and cast all data to np arrays (just in case)\nimgs = np.array(imgs)\npoints = np.array(points)\nimgs = imgs.reshape((-1, 512, 512, 1))", "./data/\nLoading and processing data\n\n[array([[0., 0., 0., ..., 0., 0., 0.],\n [0., 0., 0., ..., 0., 0., 0.],\n [0., 0., 0., ..., 0., 0., 0.],\n ...,\n [0., 0., 0., ..., 0., 0., 0.],\n [0., 0., 0., ..., 0., 0., 0.],\n [0., 0., 0., ..., 0., 0., 0.]], dtype=float32)]\n" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ] ]
d0241044e5207c049561f415325a261625de66ba
152,074
ipynb
Jupyter Notebook
.ipynb_checkpoints/DSP-checkpoint.ipynb
Valentine-Efagene/Jupyter-Notebooks
91a1d98354a270d214316eba21e4a435b3e17f5d
[ "MIT" ]
null
null
null
.ipynb_checkpoints/DSP-checkpoint.ipynb
Valentine-Efagene/Jupyter-Notebooks
91a1d98354a270d214316eba21e4a435b3e17f5d
[ "MIT" ]
null
null
null
.ipynb_checkpoints/DSP-checkpoint.ipynb
Valentine-Efagene/Jupyter-Notebooks
91a1d98354a270d214316eba21e4a435b3e17f5d
[ "MIT" ]
null
null
null
92.221953
17,864
0.841893
[ [ [ "## 20 Sept 2019", "_____no_output_____" ], [ "<strong>RULES</strong><br>\n<strong>Date:</strong> Level 2 heading ## <br>\n<strong>Example Heading:</strong> Level 3 heading ###<br>\n<strong>Method Heading:</strong> Level 4 heading ####", "_____no_output_____" ], [ "### References", "_____no_output_____" ], [ "1. [Forester_W._Isen;_J._Moura]_DSP_for_MATLAB_and_La Volume II(z-lib.org)\n2. H. K. Dass, Advanced Engineering Mathematics\n3. [Forester_W._Isen;_J._Moura]_DSP_for_MATLAB_and_La Volume I(z-lib.org)\n4. [John_G_Proakis;_Dimitris_G_Manolakis]_Digital_Sig(z-lib.org)", "_____no_output_____" ], [ "### Imports", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom sympy import oo\nimport math\nimport sympy as sp\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom IPython.display import display\nfrom IPython.display import display_latex\nfrom sympy import latex\nimport math\nfrom scipy import signal\nfrom datetime import datetime", "_____no_output_____" ] ], [ [ "### Setup", "_____no_output_____" ] ], [ [ "sp.init_printing(use_latex = True)\n \nz, f, i = sp.symbols('z f i')\nx, k = sp.symbols('x k')", "_____no_output_____" ] ], [ [ "### Methods", "_____no_output_____" ] ], [ [ "# Usage: display_equation('u_x', x)\ndef display_equation(idx, symObj):\n if(isinstance(idx, str)):\n eqn = '\\\\[' + idx + ' = ' + latex(symObj) + '\\\\]'\n display_latex(eqn, raw=True)\n else:\n eqn = '\\\\[' + latex(idx) + ' = ' + latex(symObj) + '\\\\]'\n display_latex(eqn, raw=True)\n return", "_____no_output_____" ], [ "# Usage: display_full_latex('u_x')\ndef display_full_latex(idx):\n if(isinstance(idx, str)):\n eqn = '\\\\[' + idx + '\\\\]'\n display_latex(eqn, raw=True)\n else:\n eqn = '\\\\[' + latex(idx) + '\\\\]'\n display_latex(eqn, raw=True)\n return", "_____no_output_____" ], [ "# Usage: display_full_latex('u_x')\ndef display_full_latex(idx):\n if(isinstance(idx, str)):\n eqn = '\\\\[' + idx + '\\\\]'\n display_latex(eqn, raw=True)\n else:\n eqn = '\\\\[' + latex(idx) + '\\\\]'\n display_latex(eqn, raw=True)\n return", "_____no_output_____" ], [ "def ztrans(a, b):\n F = sp.summation(f/z**k, ( k, a, b ))\n return F", "_____no_output_____" ], [ "def display_ztrans(f, k, limits = (-4, 4)):\n F = sp.summation(f/z**k, ( k, -oo, oo ))\n display_equation('f(k)', f)\n display_equation('F(k)_{\\infty}', F)\n\n F = sp.summation(f/z**k, (k, limits[0], limits[1]))\n display_equation('F(k)_{'+ str(limits[0]) + ',' + str(limits[1]) + '}', F)\n return", "_____no_output_____" ], [ "def sum_of_GP(a, r):\n return sp.simplify(a/(1-r))", "_____no_output_____" ], [ "# Credit: https://www.dsprelated.com/showcode/244.php\n \ndef zplane(b,a,filename=None):\n \"\"\"Plot the complex z-plane given a transfer function.\n \"\"\"\n\n # get a figure/plot\n ax = plt.subplot(111)\n\n # create the unit circle\n uc = patches.Circle((0,0), radius=1, fill=False,\n color='black', ls='dashed')\n ax.add_patch(uc)\n\n # The coefficients are less than 1, normalize the coeficients\n if np.max(b) > 1:\n kn = np.max(b)\n b = b/float(kn)\n else:\n kn = 1\n\n if np.max(a) > 1:\n kd = np.max(a)\n a = a/float(kd)\n else:\n kd = 1\n \n # Get the poles and zeros\n p = np.roots(a)\n z = np.roots(b)\n k = kn/float(kd)\n \n # Plot the zeros and set marker properties \n t1 = plt.plot(z.real, z.imag, 'go', ms=10)\n plt.setp( t1, markersize=10.0, markeredgewidth=1.0,\n markeredgecolor='k', markerfacecolor='g')\n\n # Plot the poles and set marker properties\n t2 = plt.plot(p.real, p.imag, 'rx', 
ms=10)\n plt.setp( t2, markersize=12.0, markeredgewidth=3.0,\n markeredgecolor='b', markerfacecolor='b')\n\n ax.spines['left'].set_position('center')\n ax.spines['bottom'].set_position('center')\n ax.spines['right'].set_visible(False)\n ax.spines['top'].set_visible(False)\n\n # set the ticks\n r = 1.5; plt.axis('scaled'); plt.axis([-r, r, -r, r])\n ticks = [-1, -.5, .5, 1]; plt.xticks(ticks); plt.yticks(ticks)\n\n if filename is None:\n plt.show()\n else:\n plt.savefig(filename)\n \n\n return z, p, k", "_____no_output_____" ] ], [ [ "### Z Transform", "_____no_output_____" ] ], [ [ "display_full_latex('X(z) = \\sum_{-\\infty}^{\\infty} x[n]z^{-n}')", "_____no_output_____" ] ], [ [ "### Tests", "_____no_output_____" ], [ "#### Convert Symbolic to Numeric", "_____no_output_____" ] ], [ [ "f = x**2\nf = sp.lambdify(x, f, 'numpy')\nf(2)", "_____no_output_____" ], [ "display_equation('f(x)', sp.summation(3**k, ( k, -oo, oo )))", "_____no_output_____" ], [ "display_equation('F(z)', sp.summation(3**k/z**k, ( k, -oo, oo )))", "_____no_output_____" ] ], [ [ "#### Partial Fractions", "_____no_output_____" ] ], [ [ "f = 1/(x**2 + x - 6)\ndisplay_equation('f(x)', f)\nf = sp.apart(f)\ndisplay_equation('f(x)_{canonical}', f)", "_____no_output_____" ] ], [ [ "#### Piecewise", "_____no_output_____" ] ], [ [ "f1 = 5**k\nf2 = 3**k\nf = sp.Piecewise((f1, k < 0), (f2, k >= 0))\ndisplay_equation('f(k)', f)", "_____no_output_____" ] ], [ [ "## 21 Sept 2019", "_____no_output_____" ], [ "#### Positive Time / Causal", "_____no_output_____" ] ], [ [ "f1 = k **2\nf2 = 3**k\nf = f1 * sp.Heaviside(k)\n# or\n#f = sp.Piecewise((0, k < 0), (f1, k >= 0))\ndisplay_equation('f(k)', f)\nsp.plot(f, (k, -10, 10))", "_____no_output_____" ] ], [ [ "#### Stem Plot", "_____no_output_____" ] ], [ [ "x = np.linspace(0.1, 2 * np.pi, 41)\ny = np.sin(x)\n\nplt.stem(x, y)\nplt.show()", "_____no_output_____" ] ], [ [ "#### zplane Plot", "_____no_output_____" ] ], [ [ "b = np.array([1, 1, 0, 0])\na = np.array([1, 1, 1])\nzplane(b,a)", "_____no_output_____" ] ], [ [ "### Filter", "_____no_output_____" ] ], [ [ "g = (1 + z**-2)/(1-1.2*z**-1+0.81*z**-2)\ndisplay_equation('F(z)', g)\nb = np.array([1,1])\na = np.array([1,-1.2,0.81])\nx = np.ones((1, 8))\n# Response\ny = signal.lfilter(b, a, x)\n# Reverse\nsignal.lfilter(a, b, y)", "_____no_output_____" ] ], [ [ "### [1] Example 2.2", "_____no_output_____" ] ], [ [ "radFreq = np.arange(0, 2*np.pi, 2*np.pi/499)\ng = np.exp(1j*radFreq)\nZxform= 1/(1-0.7*g**(-1))\n\nplt.plot(radFreq/np.pi,abs(Zxform))\nplt.title('Graph')\nplt.xlabel('Frequency, Units of π')\nplt.ylabel('H(x)')\nplt.grid(True)\nplt.show()", "_____no_output_____" ] ], [ [ "### [2] Chapter 19, Example 5", "_____no_output_____" ] ], [ [ "f = 3**(-k)\ndisplay_ztrans(f, k, (-4, 3))", "_____no_output_____" ] ], [ [ "### [2] Example 9", "_____no_output_____" ] ], [ [ "f1 = 5**k\nf2 = 3**k\nf = sp.Piecewise((f1, k < 0), (f2, k >= 0))\ndisplay_ztrans(f, k, (-3, 3))", "_____no_output_____" ], [ "p = sum_of_GP(z/5, z/5)\nq = sum_of_GP(1, 3/z)\ndisplay_equation('F(z)', sp.ratsimp(q + p))", "_____no_output_____" ] ], [ [ "## 28 Sept, 2019", "_____no_output_____" ], [ "### [3] Folding formula", "_____no_output_____" ], [ "fperceived = [ f - fsampling * NINT( f / fsampling ) ]", "_____no_output_____" ], [ "## 9 Oct, 2019", "_____no_output_____" ], [ "### [3] Section 4.3", "_____no_output_____" ], [ "### Equations", "_____no_output_____" ] ], [ [ "display_full_latex('F \\\\rightarrow analog')\ndisplay_full_latex('f \\\\rightarrow 
discrete')\ndisplay_full_latex('Nyquist frequency = F_s')\ndisplay_full_latex('Folding frequency = \\\\frac{F_s}{2}')\ndisplay_full_latex('F_{max} = \\\\frac{F_s}{2}')\ndisplay_full_latex('T = \\\\frac{1}{F_s}')\ndisplay_full_latex('f = \\\\frac{F}{F_s}')\ndisplay_full_latex('f_k = \\\\frac{k}{N}')\ndisplay_full_latex('F_k = F_0 + kF_s, k = \\\\pm 1, \\\\pm 2, ...')\ndisplay_full_latex('x_a(t) = Asin(2\\\\pi Ft + \\\\theta)')\ndisplay_full_latex('x(n) = Asin(\\\\frac{2\\\\pi nk}{N} + \\\\theta)')\ndisplay_full_latex('x(n) = Asin(2\\\\pi fn + \\\\theta)')\ndisplay_full_latex('x(n) = x_a (nT) = Acos(2\\\\pi \\\\frac{F_0 + kF_s}{F_s} n + \\\\theta)')\ndisplay_full_latex('t = nT')\ndisplay_full_latex('\\\\Omega = 2\\\\pi F')\ndisplay_full_latex('\\\\omega = 2\\\\pi f')\ndisplay_full_latex('\\\\omega = \\\\Omega T')\ndisplay_full_latex('x_q(n) = Q[x(n)]')\ndisplay_full_latex('e_q(n) = x_q(n) - x(n)')\ndisplay_full_latex('Interpolation function, g(t) = \\\\frac{sin2\\\\pi Bt}{2\\\\pi Bt}')\ndisplay_full_latex('x_a(t) = \\\\sum^\\\\infty _{n = - \\\\infty} x_a(\\\\frac{n}{F_s}).g(t - \\\\frac{n}{F_s})')\ndisplay_full_latex('\\\\Delta = \\\\frac{x_{max} - x_{min}}{L-1}, where L = Number of quantization levels')\ndisplay_full_latex('-\\\\frac{\\\\Delta}{2} \\\\leq e_q(n) \\\\leq \\\\frac{\\\\Delta}{2}')\ndisplay_full_latex('b \\\\geq log_2 L')\ndisplay_full_latex('SQNR = \\\\frac{3}{2}.2^{2b}')\ndisplay_full_latex('SQNR(dB) = 10log_{10}SQNR = 1.76 + 6.02b')", "_____no_output_____" ], [ "x = np.arange(0, 10, 1)\ny = np.power(0.9, x) * np.heaviside(np.power(0.9, x), 1)\n\ndisplay_full_latex('x_a(t) = 0.9^t')\ndisplay_full_latex('x(n) = 0.9^n')\n\nplt.stem(x, y)\nplt.plot(x, y, 'g-')\nplt.xticks(np.arange(0, 10, 1))\nplt.yticks(np.arange(0, 1.2, 0.1))\nplt.xlabel('n')\nplt.ylabel('x(n)')\nplt.grid(True)\nplt.show()", "_____no_output_____" ] ], [ [ "## 14 Oct, 2019", "_____no_output_____" ] ], [ [ "n = sp.symbols('n')\n\nx = np.arange(0, 10, 1)\ny = x * np.heaviside(x, 1)\n\nf = sp.Piecewise((0, n < 0), (n, n >= 0))\ndisplay_equation('u_r(n)', f)\n\nplt.stem(x, y)\nplt.plot(x, y, 'g-')\nplt.xticks(np.arange(0, 10, 1))\nplt.yticks(np.arange(0, 10, 1))\nplt.xlabel('n')\nplt.ylabel('x(n)')\nplt.grid(True)\nplt.show()", "_____no_output_____" ], [ "display_full_latex('E = \\\\sum^\\\\infty _{n = -\\\\infty} x|(n)|^2')\ndisplay_full_latex('P = \\\\lim_{N \\\\rightarrow \\\\infty} \\\\frac{1}{2N + 1} \\\\sum^ N _{n = -N} x|(n)|^2')", "_____no_output_____" ] ], [ [ "## 16 Oct, 2019", "_____no_output_____" ], [ "#### General form of the input-output relationships", "_____no_output_____" ] ], [ [ "display_full_latex('y(n) = -\\\\sum^N _{k = 1}a_k y(n-k) + \\\\sum^M _{k = 0}b_k x(n-k)')", "_____no_output_____" ] ], [ [ "### [4] Example 3.2", "_____no_output_____" ] ], [ [ "h = np.array([1, 2, 1, -1])\nx = np.array([1, 2, 3, 1])\ny = np.convolve(h, x, mode='full')\n#y = signal.convolve(h, x, mode='full', method='auto')\nprint(y)\n\nfig, (ax_orig, ax_h, ax_x) = plt.subplots(3, 1, sharex=True)\nax_orig.plot(h)\nax_orig.set_title('Impulse Response')\nax_orig.margins(0, 0.1)\nax_h.plot(x)\nax_h.set_title('Input Signal')\nax_h.margins(0, 0.1)\nax_x.plot(y)\nax_x.set_title('Output')\nax_x.margins(0, 0.1)\nfig.tight_layout()\nfig.show()", "[ 1 4 8 8 3 -2 -1]\n" ] ], [ [ "## 17 Oct, 2019", "_____no_output_____" ], [ "### Sum of an AP with common ratio r and first term a, starting from the zeroth term", "_____no_output_____" ] ], [ [ "a, r = sp.symbols('a r')\n\ns = sp.summation(a*r**k, ( k, 0, n 
))\ndisplay_equation('S_n', s)", "_____no_output_____" ] ], [ [ "### Sum of positive powers of a", "_____no_output_____" ] ], [ [ "a = sp.symbols('a')\n\ns = sp.summation(a**k, ( k, 0, n ))\ndisplay_equation('S_n', s)", "_____no_output_____" ] ], [ [ "### [3] 4.12.3 Single Pole IIR", "_____no_output_____" ] ], [ [ "SR = 24\nb = 1\np = 0.8\ny = np.zeros((1, SR)).ravel()\nx = np.zeros((1, SR + 1)).ravel()\nx[0] = 1\ny[0] = b * x[0]\n\nfor n in range(1, SR):\n y[n] = b * x[n] + p * y[n - 1]\n \nplt.stem(y)", "_____no_output_____" ] ], [ [ "### Copying the method above for [4] 4.1 Averaging", "_____no_output_____" ] ], [ [ "x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])\ny[0] = b * x[0]\n\nfor n in range(1, len(x)):\n y[n] = (n/(n + 1)) * y[n - 1] + (1/(n + 1)) * x[n]\n \nprint(y[n], '\\n')", "5.5 \n\n" ] ], [ [ "### My Recursive Averaging Implementation", "_____no_output_____" ] ], [ [ "def avg(x, n):\n if (n < 0):\n return 0\n else:\n return (n/(n + 1)) * avg(x, n - 1) + (1/(n + 1)) * x[n]", "_____no_output_____" ], [ "x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])\naverage = avg(x, len(x) - 1)\nprint(average)", "5.5\n" ] ], [ [ "### Performance Comparism", "_____no_output_____" ] ], [ [ "from timeit import timeit\n\ncode_rec = '''\nimport numpy as np\n\ndef avg(x, n):\n if (n < 0):\n return 0\n else:\n return (n/(n + 1)) * avg(x, n - 1) + (1/(n + 1)) * x[n]\n \nx = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])\naverage = avg(x, len(x) - 1)\n'''\n\ncode_py = '''\nimport numpy as np\n\nx = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])\naverage = sum(x, len(x) - 1) / len(x)\n'''\n\ncode_loop = '''\nimport numpy as np\n\nx = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])\nsum = 0\n\nfor i in x:\n sum += i\n \naverage = sum/len(x)\n'''\n\nrunning_time_rec = timeit(code_rec, number = 100) / 100\nrunning_time_py = timeit(code_py, number = 100) / 100\nrunning_time_loop = timeit(code_loop, number = 100) / 100\n\nprint(\"Running time using my recursive average function: \\n\",running_time_rec, '\\n')\nprint(\"Running time using python sum function: \\n\",running_time_py)\nprint(\"Running time using loop python function: \\n\",running_time_loop)", "Running time using my recursive average function: \n 9.264100000000001e-05 \n\nRunning time using python sum function: \n 4.1410000000000005e-05\nRunning time using loop python function: \n 7.479999999999987e-06\n" ] ], [ [ "### [4] Example 4.1", "_____no_output_____" ] ], [ [ "def rec_sqrt(x, n):\n if (n == -1):\n return 1\n else:\n return (1/2) * (rec_sqrt(x, n - 1) + (x[n]/rec_sqrt(x, n - 1)))", "_____no_output_____" ], [ "A = 2\nx = np.ones((1, 5)).ravel() * A\nprint(rec_sqrt(x, len(x) - 1))", "1.414213562373095\n" ], [ "b = np.array([1, 1, 1, 1, 1])\na = np.array([1, 0, 0])\nzplane(b,a)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
d024137b8886b0019ea90b7ee9f066d653bd2551
12,163
ipynb
Jupyter Notebook
modules.ipynb
LoicGrobol/python-im
28cac9392d09be29fd9234b0e21466e5408a9252
[ "MIT" ]
null
null
null
modules.ipynb
LoicGrobol/python-im
28cac9392d09be29fd9234b0e21466e5408a9252
[ "MIT" ]
1
2018-10-08T10:52:48.000Z
2018-10-08T10:52:48.000Z
modules.ipynb
LoicGrobol/python-im
28cac9392d09be29fd9234b0e21466e5408a9252
[ "MIT" ]
7
2018-09-19T06:47:18.000Z
2018-12-12T12:03:29.000Z
21.451499
237
0.541972
[ [ [ "# langages de script – Python\n\n## Modules et packages\n\n### M1 Ingénierie Multilingue – INaLCO\n\nclement.plancq@ens.fr", "_____no_output_____" ], [ "Les modules et les packages permettent d'ajouter des fonctionnalités à Python\n\nUn module est un fichier (```.py```) qui contient des fonctions et/ou des classes. \n<small>Et de la documentation bien sûr</small>\n\nUn package est un répertoire contenant des modules et des sous-répertoires.\n\nC'est aussi simple que ça. Évidemment en rentrant dans le détail c'est un peu plus compliqué.", "_____no_output_____" ], [ "## Un module", "_____no_output_____" ] ], [ [ "%%file operations.py\n\n# -*- coding: utf-8 -*-\n\n\"\"\"\nModule pour le cours sur les modules\nOpérations arithmétiques\n\"\"\"\n\ndef addition(a, b):\n \"\"\" Ben une addition quoi : a + b \"\"\"\n return a + b\n\ndef soustraction(a, b):\n \"\"\" Une soustraction : a - b \"\"\"\n return a - b", "_____no_output_____" ] ], [ [ "Pour l'utiliser on peut :\n* l'importer par son nom", "_____no_output_____" ] ], [ [ "import operations\noperations.addition(2, 4)", "_____no_output_____" ] ], [ [ "* l'importer et modifier son nom", "_____no_output_____" ] ], [ [ "import operations as op\nop.addition(2, 4)", "_____no_output_____" ] ], [ [ "* importer une partie du module", "_____no_output_____" ] ], [ [ "from operations import addition\naddition(2, 4)", "_____no_output_____" ] ], [ [ "* importer l'intégralité du module", "_____no_output_____" ] ], [ [ "from operations import *\naddition(2, 4)\nsoustraction(4, 2)", "_____no_output_____" ] ], [ [ "En réalité seules les fonctions et/ou les classes ne commençant pas par '_' sont importées.", "_____no_output_____" ], [ "L'utilisation de `import *` n'est pas recommandée. Parce que, comme vous le savez « *explicit is better than implicit* ». Et en ajoutant les fonctions dans l'espace de nommage du script vous pouvez écraser des fonctions existantes.", "_____no_output_____" ], [ "Ajoutez une fonction `print` à votre module pour voir (attention un module n'est chargé qu'une fois, vous devrez relancer le kernel ou passer par la console).", "_____no_output_____" ], [ "Autre définition d'un module : c'est un objet de type ``module``.", "_____no_output_____" ] ], [ [ "import operations\ntype(operations)", "_____no_output_____" ] ], [ [ "``import`` ajoute des attributs au module", "_____no_output_____" ] ], [ [ "import operations\n\nprint(f\"name : {operations.__name__}\")\nprint(f\"file : {operations.__file__}\")\nprint(f\"doc : {operations.__doc__}\")", "_____no_output_____" ] ], [ [ "## Un package", "_____no_output_____" ] ], [ [ "! 
tree operations_pack", "_____no_output_____" ] ], [ [ "Un package python peut contenir des modules, des répertoires et sous-répertoires, et bien souvent du non-python : de la doc html, des données pour les tests, etc…", "_____no_output_____" ], [ "Le répertoire principal et les répertoires contenant des modules python doivent contenir un fichier `__init__.py`", "_____no_output_____" ], [ "`__init__.py` peut être vide, contenir du code d'initialisation ou contenir la variable `__all__`", "_____no_output_____" ] ], [ [ "import operations_pack.simple\noperations_pack.simple.addition(2, 4)", "_____no_output_____" ], [ "from operations_pack import simple\nsimple.soustraction(4, 2)", "_____no_output_____" ] ], [ [ "``__all__`` dans ``__init__.py`` définit quels seront les modules qui seront importés avec ``import *``\n", "_____no_output_____" ] ], [ [ "from operations_pack.avance import *\nmulti.multiplication(2,4)", "_____no_output_____" ] ], [ [ "## Où sont les modules et les packages ?", "_____no_output_____" ], [ "Pour que ``import`` fonctionne il faut que les modules soient dans le PATH.", "_____no_output_____" ] ], [ [ "import sys\nsys.path", "_____no_output_____" ] ], [ [ "``sys.path`` est une liste, vous pouvez la modifier", "_____no_output_____" ] ], [ [ "sys.path.append(\"[...]\") # le chemin vers le dossier operations_pack\nsys.path", "_____no_output_____" ] ], [ [ "## Installer des modules et des packages", "_____no_output_____" ], [ "Dans les distributions Python récentes `pip` est installé, tant mieux.", "_____no_output_____" ], [ "Avec `pip` vous pouvez :\n* installer un module `pip install module` ou `pip install --user module`\n`pip` va trouver le module sur Pypi et l'installer au bon endroit s'il existe. Il installera les dépendances aussi.\n* désinstaller un module `pip uninstall module`\n* mettre à jour `pip install module --upgrade`\n* downgrader dans une version particulière `pip install module=0.9 --upgrade`\n* sauvegarder votre environnement de dév, la liste de vos modules `pip freeze > requirements.txt`\nCe qui vous permettra de le réinstaller sur une autre machine `pip install -r requirements.txt`", "_____no_output_____" ], [ "## S'en sortir avec les versions", "_____no_output_____" ], [ "Python évolue au fil des versions, les packages aussi. Ça peut poser des problèmes quand vous voulez partager votre code ou même quand vous voulez utiliser un code qui a besoin d'une version particulière.\n\nIl existe un outil pour isoler les environnement de développement : ``virtualenv`` \n``virtualenv /path/mon_projet`` ou ``python3 -m venv /path/mon_projet`` va créer un dossier avec plein de trucs dedans, y compris un interpréteur python. \nVous pouvez spécifier la version de python avec ``virtualenv /path/mon_projet -p /usr/bin/python3.6``\n", "_____no_output_____" ], [ "Pour activer l'environnement : ``source /path/mon_projet/bin/activate`` (``/path/mon_projet/Scripts/activate.bat`` sous Windows (je crois)) \nPour en sortir : ``deactivate``", "_____no_output_____" ], [ "Quand vous travaillez dans un venv les modules que vous installerez avec pip seront isolés dans le venv et pas ailleurs. \nSi vous utilisez ``python`` ce sera la version de l'interpréteur du venv et les modules du venv. \nAvec cet outil on doit installer à chaque fois les modules désirés mais au moins on ne s'embrouille pas. 
Et vous pouvez communiquer un fichier ``requirements.txt`` à un collègue qui pourra reproduire le venv sur sa machine.", "_____no_output_____" ], [ "Il existe aussi ``pipenv``, un outil plus récent qui combine ``pip`` et ``virtualenv``.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
d0241fc0158d3364a5ee185ab6b2d84ed8068f06
24,190
ipynb
Jupyter Notebook
notebooks/LS333_DSPT6_Model_Demo.ipynb
DrewRust/DSPT6-Twitoff
c444c14441832051e767ab4d2c8c439cc56f0406
[ "MIT" ]
null
null
null
notebooks/LS333_DSPT6_Model_Demo.ipynb
DrewRust/DSPT6-Twitoff
c444c14441832051e767ab4d2c8c439cc56f0406
[ "MIT" ]
null
null
null
notebooks/LS333_DSPT6_Model_Demo.ipynb
DrewRust/DSPT6-Twitoff
c444c14441832051e767ab4d2c8c439cc56f0406
[ "MIT" ]
6
2020-08-09T10:36:47.000Z
2021-05-08T06:20:16.000Z
55.228311
12,108
0.721042
[ [ [ "### DSPT6 - Adding Data Science to a Web Application\n\nThe purpose of this notebook is to demonstrate:\n- Simple online analysis of data from a user of the Twitoff app or an API\n- Train a more complicated offline model, and serialize the results for online use", "_____no_output_____" ] ], [ [ "import sqlite3\nimport pickle\nimport pandas as pd", "_____no_output_____" ], [ "# Connect to sqlite database\nconn = sqlite3.connect('../twitoff/twitoff.sqlite')", "_____no_output_____" ], [ "def get_data(query, conn):\n '''Function to get data from SQLite DB'''\n \n cursor = conn.cursor()\n result = cursor.execute(query).fetchall()\n\n # Get columns from cursor object\n columns = list(map(lambda x: x[0], cursor.description))\n\n # Assign to DataFrame\n df = pd.DataFrame(data=result, columns=columns)\n return df", "_____no_output_____" ], [ "query = '''\nSELECT \n tweet.id,\n tweet.text,\n tweet.embedding,\n user.username\nFROM tweet\nJOIN user ON tweet.user_id = user.id;\n'''\n\ndf = get_data(query, conn)\ndf['embedding_decoded'] = df.embedding.apply(lambda x: pickle.loads(x[2:]))\nprint(df.shape)\ndf.head()", "(14163, 5)\n" ], [ "df.usernameme.value_counts()", "_____no_output_____" ], [ "import numpy as np\n\nuser1_embeddings = df.embedding_decoded[df.username=='elonmusk']\nuser2_embeddings = df.embedding_decoded[df.username=='nasa']\nembeddings = pd.concat([user1_embeddings, user2_embeddings])\n\nembeddings_df = pd.DataFrame(embeddings.tolist(),\n columns=[f'dim{i}' for i in range(768)])\nlabels = np.concatenate([np.ones(len(user1_embeddings)), \n np.zeros(len(user2_embeddings))])\nprint(embeddings_df.shape, labels.shape)", "(2089, 768) (2089,)\n" ], [ "from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(\n embeddings_df, labels, test_size=0.25, random_state=42)\n\nprint(X_train.shape, X_test.shape)", "(1566, 768) (523, 768)\n" ], [ "from sklearn.linear_model import LogisticRegression\n\nlr = LogisticRegression(max_iter=1000)\nlr.fit(X_train, y_train)", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\nfrom sklearn.metrics import classification_report, plot_confusion_matrix\n\ny_pred = lr.predict(X_test)\nprint(classification_report(y_test, y_pred))\n\nplot_confusion_matrix(lr, X_test, y_test, cmap='Blues')\nplt.title('LogReg Confusion Matrix');", " precision recall f1-score support\n\n 0.0 1.00 1.00 1.00 416\n 1.0 0.99 0.98 0.99 107\n\n accuracy 0.99 523\n macro avg 0.99 0.99 0.99 523\nweighted avg 0.99 0.99 0.99 523\n\n" ], [ "pickle.dump(lr, open(\"../models/logreg.pkl\", \"wb\"))", "_____no_output_____" ], [ "lr_unpickled = pickle.load(open(\"../models/logreg.pkl\", \"rb\"))\nlr_unpickled", "_____no_output_____" ], [ "import basilica\n\nBASILICA_KEY=''\nBASILICA = basilica.Connection(BASILICA_KEY)", "_____no_output_____" ], [ "example_embedding = BASILICA.embed_sentence('The MARS rover just reported new and interesting data!', model='twitter')", "_____no_output_____" ], [ "lr_unpickled.predict([example_embedding])[0]", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d0244ba52c853d20441d4f19f11f10f0abcaf65e
25,937
ipynb
Jupyter Notebook
project/Untitled.ipynb
HenryTingle/aae497-f19
29df124e31fcc1a1a8462211b1390e5a74f40cd9
[ "BSD-3-Clause" ]
null
null
null
project/Untitled.ipynb
HenryTingle/aae497-f19
29df124e31fcc1a1a8462211b1390e5a74f40cd9
[ "BSD-3-Clause" ]
null
null
null
project/Untitled.ipynb
HenryTingle/aae497-f19
29df124e31fcc1a1a8462211b1390e5a74f40cd9
[ "BSD-3-Clause" ]
null
null
null
97.507519
18,032
0.770945
[ [ [ "import pandas\nimport matplotlib.pyplot as plt\nimport glob\nimport numpy as np", "_____no_output_____" ], [ "input_data = pandas.read_csv('data/aileron_servo_and_transmitter_inputs.csv',\n delimiter=',', nrows=11)\ninput_data", "_____no_output_____" ], [ "header = ['unknown', 'pitot', 'wind', 'aoa', 'drag', 'lift', 'pitch', 'roll', 'f0', 'f1', 'f2', 'f3']\n\ndata = []\nfor filename in glob.glob('data/*.txt'):\n data_i = pandas.read_csv(filename, delimiter='\\t', header=None, names=header)\n data.append(data_i)", "_____no_output_____" ], [ "plt.figure()\nfor datai in data:\n rho = 1.225\n area = 0.0154838\n mean_wind = datai.wind.mean()\n if mean_wind < 3:\n continue\n q = 0.5*rho*datai.wind**2\n ail = (input_data['Aileron Delection Servo signal'] - 1500)/500\n C_roll = (datai.roll - datai.roll[0])/(q*area)\n \n poly = np.polynomial.Polynomial.fit(ail, C_roll, deg=1)\n y_fit = poly(ail)\n plt.plot(ail,\n C_roll, '.', label=str(np.round(mean_wind)) + ' m/s');\n plt.plot(ail, y_fit, '--')\n print(mean_wind, poly)\n #plt.plot(datai.roll, label=str(np.round(datai.wind.mean())) + ' m/s');\nplt.legend()\nplt.grid()", "4.967630542504545 poly([0.01194488 0.07273465])\n10.021725518372728 poly([-0.0101331 0.07488419])\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
d024b963de0a4d62a395fed2b7c90ea35cf91aca
46,349
ipynb
Jupyter Notebook
experiments/main_simulations/plot_bivariate_identifiability.ipynb
rflperry/sparse_shift
7c0d68be21d56f706d1251b914d305786a4c9726
[ "MIT" ]
2
2022-01-31T14:12:54.000Z
2022-02-01T18:17:24.000Z
experiments/main_simulations/plot_bivariate_identifiability.ipynb
rflperry/sparse_shift
7c0d68be21d56f706d1251b914d305786a4c9726
[ "MIT" ]
null
null
null
experiments/main_simulations/plot_bivariate_identifiability.ipynb
rflperry/sparse_shift
7c0d68be21d56f706d1251b914d305786a4c9726
[ "MIT" ]
null
null
null
62.633784
21,344
0.623616
[ [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd", "_____no_output_____" ], [ "EXPERIMENT = 'bivariate_power'\nTAG = ''\ndf = pd.read_csv(f'./results/{EXPERIMENT}_results{TAG}.csv', sep=', ', engine='python')", "_____no_output_____" ], [ "plot_df = df\n\nx_var_rename_dict = {\n 'sample_size': '# Samples',\n 'Number of environments': '# Environments',\n 'Fraction of shifting mechanisms': 'Shift fraction',\n 'dag_density': 'Edge density',\n 'n_variables': '# Variables',\n}\n\nplot_df = df.rename(\n x_var_rename_dict, axis=1\n ).rename(\n {'Method': 'Test', 'Soft': 'Score'}, axis=1\n ).replace(\n {\n 'er': 'Erdos-Renyi',\n 'ba': 'Hub',\n 'PC (pool all)': 'Full PC (oracle)',\n 'Full PC (KCI)': r'Pooled PC (KCI) [25]',\n 'Min changes (oracle)': 'MSS (oracle)',\n 'Min changes (KCI)': 'MSS (KCI)',\n 'Min changes (GAM)': 'MSS (GAM)',\n 'Min changes (Linear)': 'MSS (Linear)',\n 'Min changes (FisherZ)': 'MSS (FisherZ)',\n 'MC': r'MC [11]',\n False: 'Hard',\n True: 'Soft',\n }\n)\n\nplot_df = plot_df.loc[\n (~plot_df['Test'].isin(['Full PC (oracle)', 'MSS (oracle)'])) &\n (plot_df['# Environments'] == 2) &\n (plot_df['Score'] == 'Hard')\n]\n\nplot_df = plot_df.replace({\n '[[];[0]]': 'P(X1)',\n '[[];[1]]': 'P(X2|X1)',\n '[[];[]]': 'Neither',\n '[[];[0;1]]': 'Both',\n})", "_____no_output_____" ], [ "plot_df['Test'].unique()", "_____no_output_____" ], [ "intv_targets = ['P(X1)', 'P(X2|X1)', 'Neither', 'Both']\nax_var = 'intervention_targets'\n\nfor targets in intv_targets:\n display(plot_df[plot_df[ax_var] == targets].groupby('Test').mean().reset_index().head(3))", "_____no_output_____" ], [ "sns.set_context('paper')\nfig, axes = plt.subplots(1, 4, sharey=True, sharex=True, figsize=(7.5, 2.5))\n\nintv_targets = ['P(X1)', 'P(X2|X1)', 'Neither', 'Both']\nax_var = 'intervention_targets'\nx_var = 'Precision' # 'False orientation rate' # \ny_var = 'Recall' # 'True orientation rate'# \nhue = 'Test'\n\nfor targets, ax in zip(intv_targets, axes.flatten()):\n mean_df = plot_df[plot_df[ax_var] == targets].groupby('Test').mean().reset_index()\n std_df = plot_df[plot_df[ax_var] == targets].groupby('Test')[['Precision', 'Recall']].std().reset_index()\n std_df.rename(\n {'Precision': 'Precision std', 'Recall': 'Recall std'}, axis=1\n )\n \n g = sns.scatterplot(\n data=plot_df[plot_df[ax_var] == targets].groupby('Test').mean().reset_index(),\n x=x_var,\n y=y_var,\n hue=hue,\n ax=ax,\n # markers=['d', 'P', 's'],\n palette=[\n sns.color_palette(\"tab10\")[i]\n for i in [2, 3, 4, 5, 7, 6] # 3, 4, 5, \n ],\n hue_order=[\n 'MSS (KCI)',\n 'MSS (GAM)',\n 'MSS (FisherZ)',\n 'MSS (Linear)',\n 'Pooled PC (KCI) [25]',\n 'MC [11]',\n ],\n legend='full',\n # alpha=1,\n s=100\n )\n # ax.axvline(0.05, ls=':', c='grey')\n ax.set_title(f'Shift in {targets}')\n \nplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\nfor ax in axes[:-1]:\n ax.get_legend().remove()\n# ax.set_ylim([0, 1])\n# ax.set_xlim([0, 1])\nplt.tight_layout()\nplt.savefig('./figures/bivariate_power_plots.pdf')\nplt.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
d024c356228288bec14fac92e4851446e6776f66
19,198
ipynb
Jupyter Notebook
Batteries Included.ipynb
ThePoetCoder/Odds-and-Ends
fb287bfcf9eda3d4a06c44f83a6bddb4ef09c61f
[ "MIT" ]
null
null
null
Batteries Included.ipynb
ThePoetCoder/Odds-and-Ends
fb287bfcf9eda3d4a06c44f83a6bddb4ef09c61f
[ "MIT" ]
null
null
null
Batteries Included.ipynb
ThePoetCoder/Odds-and-Ends
fb287bfcf9eda3d4a06c44f83a6bddb4ef09c61f
[ "MIT" ]
null
null
null
30.766026
942
0.435879
[ [ [ "### Python's Dir Function ###\ndef attributes_and_methods(inp):\n print(\"The Attributes and Methods of a {} are:\".format(type(inp)))\n print(dir(inp))\n\n# Change x to be any different type\n# to get different results for that data type\nx = 'abc'\n\nattributes_and_methods(x)", "The Attributes and Methods of a <class 'str'> are:\n['__add__', '__class__', '__contains__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getnewargs__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', '__lt__', '__mod__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmod__', '__rmul__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'capitalize', 'casefold', 'center', 'count', 'encode', 'endswith', 'expandtabs', 'find', 'format', 'format_map', 'index', 'isalnum', 'isalpha', 'isascii', 'isdecimal', 'isdigit', 'isidentifier', 'islower', 'isnumeric', 'isprintable', 'isspace', 'istitle', 'isupper', 'join', 'ljust', 'lower', 'lstrip', 'maketrans', 'partition', 'replace', 'rfind', 'rindex', 'rjust', 'rpartition', 'rsplit', 'rstrip', 'split', 'splitlines', 'startswith', 'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill']\n" ], [ "### Contextmanager Example ###\n# one way to write code that can use Python's \"with\" statement\n# print statements have been added to show the order of operations\nfrom contextlib import contextmanager\n\n# add the decorator @contextmanager to a function of your own\n@contextmanager\ndef managed_function():\n print(\"2.) Inside function\")\n # put a \"try: except: else: finally\" block inside your function\n\n try:\n print(\"3.) Inside 'try' block\")\n # yield whatever it is you'd like to work with within your \"with\" statement block\n var = \"abc\"\n print(\"4.) Leaving function via 'yield'\")\n yield var\n # by using the 'yield' command this function is technically a 'generator'\n print(\"8.) Next line after yield is run\")\n\n except:\n print(\"Inside except block\")\n # put any code here you want to run in the event there is an error in the 'try' block\n pass\n\n else:\n print(\"9.) Inside else block\")\n # put any code here you want to run in the event there is NOT an error in the 'try' block\n pass\n finally:\n print(\"10.) Inside finally block\")\n # put the clean up code. The reason you would want to use a contextmanager is to have something opened and closed for you just by using the 'with' statement in the rest of your code. That which goes in the 'finally' block will run no matter what happens in the 'try' block\n del var\n\nprint(\"1.) Starting now\")\nwith managed_function() as mf:\n print(\"5.) Outside function\")\n print(\"6.)\", mf)\n print(\"7.) with block finished with no errors, going back into the function now at the line after 'yield'\")", "1.) Starting now\n2.) Inside function\n3.) Inside 'try' block\n4.) Leaving function via 'yield'\n5.) Outside function\n6.) abc\n7.) with block finished with no errors, going back into the function now at the line after 'yield'\n8.) Next line after yield is run\n9.) Inside else block\n10.) 
Inside finally block\n" ], [ "### Comprehension With Functions and Classes ###\ndef multiply_by_2(a):\n return a * 2\n\nclass Simple(object):\n def __init__(self, my_string):\n self.my_string = str(my_string)\n\n def result(self):\n return f'Hi, {self.my_string}!'\n\nlist_comp1 = [multiply_by_2(x) for x in range(5)]\nprint(list_comp1)\n\nname_list = ['Bill', 'Joe', 'Steve']\nlist_comp2 = [Simple(name).result() for name in name_list]\nprint(list_comp2)", "[0, 2, 4, 6, 8]\n['Hi, Bill!', 'Hi, Joe!', 'Hi, Steve!']\n" ], [ "### Extending Builtin Types ###\nfirst = {'a': 1, 'b': 2, 'c': 3}\nsecond = {'d': 4, 'e': 5, 'f': 6}\n\ntry:\n result = first + second\n print(result)\nexcept TypeError:\n print(\"Can't add dicts the normal way\\n\")\n\nprint('But you can inherit from builtin dict in order to extend it\\n')\n\n\nclass my_dict(dict):\n def __init__(self, d):\n print('\\tCreating new object now...')\n self.d = d\n pass\n\n def __add__(self, other):\n print('\\tAdding now...')\n result = {}\n for entry in self.d:\n result[entry] = self.d[entry]\n for entry in other.d:\n result[entry] = other.d[entry]\n return result\n\n\nprint('Instantiate new objects:')\nfirst = my_dict(first)\nsecond = my_dict(second)\n\nprint('\\nCalling new dunder operator method...')\nresult = first + second\nprint(result)", "Can't add dicts the normal way\n\nBut you can inherit from builtin dict in order to extend it\n\nInstantiate new objects:\n\tCreating new object now...\n\tCreating new object now...\n\nCalling new dunder operator method...\n\tAdding now...\n{'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6}\n" ], [ "### Fuzzy Lookup ###\n# cutoff defaults to 0.6 matches\nfrom difflib import get_close_matches\n\ninput_list = [\n 'Happy',\n 'Sad',\n 'Angry',\n 'Elated',\n 'Upset'\n]\n\ncheck_for = 'Happiness'\nresult = get_close_matches(check_for,\n input_list)\nprint(result, \"# Can't find it?\")\n\ncheck_for = 'Happiness'\nresult = get_close_matches(check_for,\n input_list,\n cutoff=0.4)\nprint(result, \"# Lower the cutoff\")\n\ncheck_for = 'Sadness'\nresult = get_close_matches(check_for,\n input_list)\nprint()\nprint(result)\n\ncheck_for = 'Anger'\nresult = get_close_matches(check_for,\n input_list)\nprint(result)\n\ncheck_for = 'Elation'\nresult = get_close_matches(check_for,\n input_list)\nprint(result)\nprint()\n\ncheck_for = 'Setup'\nresult = get_close_matches(check_for,\n input_list)\nprint(result, \"# Can't find it?\")\n\ncheck_for = 'Setup'\nresult = get_close_matches(check_for,\n input_list,\n cutoff=0.4)\nprint(result, \"# Lower the cutoff\")", "[] # Can't find it?\n['Happy'] # Lower the cutoff\n\n['Sad']\n['Angry']\n['Elated']\n\n[] # Can't find it?\n['Upset'] # Lower the cutoff\n" ], [ "### Logging Example ###\nimport logging\n\nlogging.basicConfig(\n filename=\"program.log\",\n level=logging.DEBUG\n)\nlogging.warning(\"testing 1213\")\nlogging.debug(\"debug line here\")\n\ndef function(a,b):\n logging.info(f\"{a}-{b}\")\n\nfunction(3,4)\nwith open(\"program.log\") as p:\n lines = p.readlines()\n\nprint(lines)", "['WARNING:root:testing 1213\\n', 'DEBUG:root:debug line here\\n', 'INFO:root:3-4\\n']\n" ], [ "### Mix And Match ###\n\"\"\"\nThis is meant to serve as a quick\nexample of some different ways provided\nin the standard library to mix and match\nyour data\n\"\"\"\n\nfrom itertools import combinations, permutations, product\n\nmy_list = [\"a\", \"b\", \"c\", \"d\"]\nprint(\"Input list =\", my_list)\nprint()\n\n# Combinations\nprint(\"itertools.combinations with Length = 2\")\nfor combo in 
combinations(my_list, 2):\n print(combo)\nprint()\nprint(\"itertools.combinations - Length = 3\")\nfor combo in combinations(my_list, 3):\n print(combo)\nprint()\n\n# Permutations\nprint(\"itertools.permutations with Length = 2\")\nfor perm in permutations(my_list, 2):\n print(perm)\nprint()\nprint(\"itertools.permutations with Length = 3\")\nfor perm in permutations(my_list, 3):\n print(perm)\nprint()\n\n# Product\nprint(\"itertools.product with `repeat` = 2\")\nfor prod in product(my_list, repeat=2):\n print(prod)\nprint()\nprint(\"itertools.product with `repeat` = 3\")\nfor prod in product(my_list, repeat=3):\n print(prod)\nprint()\n\nprint(\n \"\"\"\nBottom line:\n combinations -> all variations where order doesn't matter\n permutations -> all variations where order matters\n product -> all variations where order matters AND you can replace each list item once for every additional item desired in the final iterables\n\"\"\"\n)", "Input list = ['a', 'b', 'c', 'd']\n\nitertools.combinations with Length = 2\n('a', 'b')\n('a', 'c')\n('a', 'd')\n('b', 'c')\n('b', 'd')\n('c', 'd')\n\nitertools.combinations - Length = 3\n('a', 'b', 'c')\n('a', 'b', 'd')\n('a', 'c', 'd')\n('b', 'c', 'd')\n\nitertools.permutations with Length = 2\n('a', 'b')\n('a', 'c')\n('a', 'd')\n('b', 'a')\n('b', 'c')\n('b', 'd')\n('c', 'a')\n('c', 'b')\n('c', 'd')\n('d', 'a')\n('d', 'b')\n('d', 'c')\n\nitertools.permutations with Length = 3\n('a', 'b', 'c')\n('a', 'b', 'd')\n('a', 'c', 'b')\n('a', 'c', 'd')\n('a', 'd', 'b')\n('a', 'd', 'c')\n('b', 'a', 'c')\n('b', 'a', 'd')\n('b', 'c', 'a')\n('b', 'c', 'd')\n('b', 'd', 'a')\n('b', 'd', 'c')\n('c', 'a', 'b')\n('c', 'a', 'd')\n('c', 'b', 'a')\n('c', 'b', 'd')\n('c', 'd', 'a')\n('c', 'd', 'b')\n('d', 'a', 'b')\n('d', 'a', 'c')\n('d', 'b', 'a')\n('d', 'b', 'c')\n('d', 'c', 'a')\n('d', 'c', 'b')\n\nitertools.product with `repeat` = 2\n('a', 'a')\n('a', 'b')\n('a', 'c')\n('a', 'd')\n('b', 'a')\n('b', 'b')\n('b', 'c')\n('b', 'd')\n('c', 'a')\n('c', 'b')\n('c', 'c')\n('c', 'd')\n('d', 'a')\n('d', 'b')\n('d', 'c')\n('d', 'd')\n\nitertools.product with `repeat` = 3\n('a', 'a', 'a')\n('a', 'a', 'b')\n('a', 'a', 'c')\n('a', 'a', 'd')\n('a', 'b', 'a')\n('a', 'b', 'b')\n('a', 'b', 'c')\n('a', 'b', 'd')\n('a', 'c', 'a')\n('a', 'c', 'b')\n('a', 'c', 'c')\n('a', 'c', 'd')\n('a', 'd', 'a')\n('a', 'd', 'b')\n('a', 'd', 'c')\n('a', 'd', 'd')\n('b', 'a', 'a')\n('b', 'a', 'b')\n('b', 'a', 'c')\n('b', 'a', 'd')\n('b', 'b', 'a')\n('b', 'b', 'b')\n('b', 'b', 'c')\n('b', 'b', 'd')\n('b', 'c', 'a')\n('b', 'c', 'b')\n('b', 'c', 'c')\n('b', 'c', 'd')\n('b', 'd', 'a')\n('b', 'd', 'b')\n('b', 'd', 'c')\n('b', 'd', 'd')\n('c', 'a', 'a')\n('c', 'a', 'b')\n('c', 'a', 'c')\n('c', 'a', 'd')\n('c', 'b', 'a')\n('c', 'b', 'b')\n('c', 'b', 'c')\n('c', 'b', 'd')\n('c', 'c', 'a')\n('c', 'c', 'b')\n('c', 'c', 'c')\n('c', 'c', 'd')\n('c', 'd', 'a')\n('c', 'd', 'b')\n('c', 'd', 'c')\n('c', 'd', 'd')\n('d', 'a', 'a')\n('d', 'a', 'b')\n('d', 'a', 'c')\n('d', 'a', 'd')\n('d', 'b', 'a')\n('d', 'b', 'b')\n('d', 'b', 'c')\n('d', 'b', 'd')\n('d', 'c', 'a')\n('d', 'c', 'b')\n('d', 'c', 'c')\n('d', 'c', 'd')\n('d', 'd', 'a')\n('d', 'd', 'b')\n('d', 'd', 'c')\n('d', 'd', 'd')\n\n\nBottom line:\n combinations -> all variations where order doesn't matter\n permutations -> all variations where order matters\n product -> all variations where order matters AND you can replace each list item once for every additional item desired in the final iterables\n\n" ], [ "### Simple Regex With Comments ###\nimport re\n\npattern = (\n \"^\" # 
at the start of the line\n \"[A-Z]+\" # find 1 or more capital letters\n \"-\" # followed by a dash\n \"[0-9]{2}\" # and 2 numbers\n)\n\nchecklist = [\n \"ERS-87\", # match\n \"DJHDJJ-55\", # match\n \"abbjd-44\", # no match(undercase)\n \"DFT-1\", # no match(not enough #s)\n]\n\nfor item in checklist:\n if re.match(pattern, item):\n print('\"{}\"'.format(item), \"Matched!\")\n else:\n print('\"{}\"'.format(item), \"Did not match...\")\n\n\n### Sorting Integers Stored As Strings ###\nmy_list = ['1', '5', '10', '15', '20']\n\n# A straight sort of the list will give different results than you might have expected\nsorted_list = sorted(my_list)\nprint(\"Without key:\\n\", sorted_list)\n\nprint()\n# Using the 'key' argument in sorted() allows you to specify that even though this is a list of strings technically, there are integers inside the strings and to sort as if they were integers, in numerical order\nother_sorted_list = sorted(my_list, key=int)\nprint(\"With key:\\n\", other_sorted_list)", "_____no_output_____" ], [ "### View The Bytecode You'Re Creating ###\nfrom dis import dis\n\ndef test():\n variable_1 = 1 + 1\n variable_2 = 5\n variable_3 = variable_1 + variable_2\n print(\"Answer here\", variable_3)\n\ndis(test)\nprint('----------------------')\ntest()", " 5 0 LOAD_CONST 1 (2)\n 2 STORE_FAST 0 (variable_1)\n\n 6 4 LOAD_CONST 2 (5)\n 6 STORE_FAST 1 (variable_2)\n\n 7 8 LOAD_FAST 0 (variable_1)\n 10 LOAD_FAST 1 (variable_2)\n 12 BINARY_ADD\n 14 STORE_FAST 2 (variable_3)\n\n 8 16 LOAD_GLOBAL 0 (print)\n 18 LOAD_CONST 3 ('Answer here')\n 20 LOAD_FAST 2 (variable_3)\n 22 CALL_FUNCTION 2\n 24 POP_TOP\n 26 LOAD_CONST 0 (None)\n 28 RETURN_VALUE\n----------------------\nAnswer here 7\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d024c6d5cc340a7aa889be2ccdc481fa8212c027
56,523
ipynb
Jupyter Notebook
IBM Cloud/WML/notebooks/regression/xgboost_scikit_wrapper/Watson OpenScale and Watson ML Engine Regression.ipynb
arsuryan/watson-openscale-samples
338e1e236d91baa10562ac6037eba91ca3e8a449
[ "Apache-2.0" ]
null
null
null
IBM Cloud/WML/notebooks/regression/xgboost_scikit_wrapper/Watson OpenScale and Watson ML Engine Regression.ipynb
arsuryan/watson-openscale-samples
338e1e236d91baa10562ac6037eba91ca3e8a449
[ "Apache-2.0" ]
null
null
null
IBM Cloud/WML/notebooks/regression/xgboost_scikit_wrapper/Watson OpenScale and Watson ML Engine Regression.ipynb
arsuryan/watson-openscale-samples
338e1e236d91baa10562ac6037eba91ca3e8a449
[ "Apache-2.0" ]
null
null
null
31.826014
579
0.592891
[ [ [ "<img src=\"https://github.com/pmservice/ai-openscale-tutorials/raw/master/notebooks/images/banner.png\" align=\"left\" alt=\"banner\">", "_____no_output_____" ], [ "# Working with Watson Machine Learning", "_____no_output_____" ], [ "This notebook should be run in a Watson Studio project, using **Default Python 3.7.x** runtime environment. **If you are viewing this in Watson Studio and do not see Python 3.7.x in the upper right corner of your screen, please update the runtime now.** It requires service credentials for the following Cloud services:\n * Watson OpenScale\n * Watson Machine Learning\n \nIf you have a paid Cloud account, you may also provision a **Databases for PostgreSQL** or **Db2 Warehouse** service to take full advantage of integration with Watson Studio and continuous learning services. If you choose not to provision this paid service, you can use the free internal PostgreSQL storage with OpenScale, but will not be able to configure continuous learning for your model.\n\nThe notebook will train, create and deploy a House Price regression model, configure OpenScale to monitor that deployment in the OpenScale Insights dashboard.", "_____no_output_____" ], [ "### Contents\n\n- [Setup](#setup)\n- [Model building and deployment](#model)\n- [OpenScale configuration](#openscale)\n- [Quality monitor and feedback logging](#quality)\n- [Fairness, drift monitoring and explanations](#fairness)", "_____no_output_____" ], [ "# Setup <a name=\"setup\"></a>", "_____no_output_____" ], [ "## Package installation", "_____no_output_____" ] ], [ [ "import warnings\nwarnings.filterwarnings('ignore')", "_____no_output_____" ], [ "!rm -rf /home/spark/shared/user-libs/python3.7*\n\n!pip install --upgrade pandas==1.2.3 --no-cache | tail -n 1\n!pip install --upgrade requests==2.23 --no-cache | tail -n 1\n!pip install --upgrade numpy==1.20.3 --user --no-cache | tail -n 1\n!pip install SciPy --no-cache | tail -n 1\n!pip install lime --no-cache | tail -n 1\n\n!pip install --upgrade ibm-watson-machine-learning --user | tail -n 1\n!pip install --upgrade ibm-watson-openscale --no-cache | tail -n 1\n\n!pip install --upgrade xgboost==1.3.3 --no-cache | tail -n 1", "_____no_output_____" ] ], [ [ "## Provision services and configure credentials", "_____no_output_____" ], [ "If you have not already, provision an instance of IBM Watson OpenScale using the [OpenScale link in the Cloud catalog](https://cloud.ibm.com/catalog/services/watson-openscale).", "_____no_output_____" ], [ "Your Cloud API key can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam#/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. 
Give your key a name and click **Create**, then copy the created key and paste it below.", "_____no_output_____" ], [ "**NOTE:** You can also get OpenScale `API_KEY` using IBM CLOUD CLI.\n\nHow to install IBM Cloud (bluemix) console: [instruction](https://console.bluemix.net/docs/cli/reference/ibmcloud/download_cli.html#install_use)\n\nHow to get api key using console:\n```\nbx login --sso\nbx iam api-key-create 'my_key'\n```", "_____no_output_____" ] ], [ [ "CLOUD_API_KEY = \"***\"\nIAM_URL=\"https://iam.ng.bluemix.net/oidc/token\"", "_____no_output_____" ] ], [ [ "If you have not already, provision an instance of IBM Watson OpenScale using the [OpenScale link in the Cloud catalog](https://cloud.ibm.com/catalog/services/watson-openscale).\n\nYour Cloud API key can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam#/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key, generate an IAM token using that key and paste it below.", "_____no_output_____" ], [ "### WML credentials example with API key", "_____no_output_____" ] ], [ [ "WML_CREDENTIALS = {\n \"url\": \"https://us-south.ml.cloud.ibm.com\",\n \"apikey\": CLOUD_API_KEY\n}", "_____no_output_____" ] ], [ [ "### WML credentials example using IAM_token \n\n**NOTE**: If IAM_TOKEN is used for authentication and you receive unauthorized/expired token error at any steps, please create a new token and reinitiate clients authentication.", "_____no_output_____" ] ], [ [ "# #uncomment this cell if want to use IAM_TOKEN\n# import requests\n# def generate_access_token():\n# headers={}\n# headers[\"Content-Type\"] = \"application/x-www-form-urlencoded\"\n# headers[\"Accept\"] = \"application/json\"\n# auth = HTTPBasicAuth(\"bx\", \"bx\")\n# data = {\n# \"grant_type\": \"urn:ibm:params:oauth:grant-type:apikey\",\n# \"apikey\": CLOUD_API_KEY\n# }\n# response = requests.post(IAM_URL, data=data, headers=headers, auth=auth)\n# json_data = response.json()\n# iam_access_token = json_data['access_token']\n# return iam_access_token", "_____no_output_____" ], [ "#uncomment this cell if want to use IAM_TOKEN\n# IAM_TOKEN = generate_access_token()\n# WML_CREDENTIALS = {\n# \"url\": \"https://us-south.ml.cloud.ibm.com\",\n# \"token\": IAM_TOKEN\n# }", "_____no_output_____" ] ], [ [ "### Cloud object storage details\n\nIn next cells, you will need to paste some credentials to Cloud Object Storage. If you haven't worked with COS yet please visit [getting started with COS tutorial](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-getting-started). \nYou can find `COS_API_KEY_ID` and `COS_RESOURCE_CRN` variables in **_Service Credentials_** in menu of your COS instance. Used COS Service Credentials must be created with _Role_ parameter set as Writer. Later training data file will be loaded to the bucket of your instance and used as training refecence in subsription. 
\n`COS_ENDPOINT` variable can be found in **_Endpoint_** field of the menu.", "_____no_output_____" ] ], [ [ "COS_API_KEY_ID = \"***\"\nCOS_RESOURCE_CRN = \"***\" # eg \"crn:v1:bluemix:public:cloud-object-storage:global:a/3bf0d9003abfb5d29761c3e97696b71c:d6f04d83-6c4f-4a62-a165-696756d63903::\"\nCOS_ENDPOINT = \"***\" # Current list avaiable at https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints", "_____no_output_____" ], [ "BUCKET_NAME = \"***\" \ntraining_data_file_name=\"house_price_regression.csv\"", "_____no_output_____" ] ], [ [ "This tutorial can use Databases for PostgreSQL, Db2 Warehouse, or a free internal verison of PostgreSQL to create a datamart for OpenScale.\n\nIf you have previously configured OpenScale, it will use your existing datamart, and not interfere with any models you are currently monitoring. Do not update the cell below.\n\nIf you do not have a paid Cloud account or would prefer not to provision this paid service, you may use the free internal PostgreSQL service with OpenScale. Do not update the cell below.\n\nTo provision a new instance of Db2 Warehouse, locate [Db2 Warehouse in the Cloud catalog](https://cloud.ibm.com/catalog/services/db2-warehouse), give your service a name, and click **Create**. Once your instance is created, click the **Service Credentials** link on the left side of the screen. Click the **New credential** button, give your credentials a name, and click **Add**. Your new credentials can be accessed by clicking the **View credentials** button. Copy and paste your Db2 Warehouse credentials into the cell below.\n\nTo provision a new instance of Databases for PostgreSQL, locate [Databases for PostgreSQL in the Cloud catalog](https://cloud.ibm.com/catalog/services/databases-for-postgresql), give your service a name, and click **Create**. Once your instance is created, click the **Service Credentials** link on the left side of the screen. Click the **New credential** button, give your credentials a name, and click **Add**. Your new credentials can be accessed by clicking the **View credentials** button. Copy and paste your Databases for PostgreSQL credentials into the cell below.", "_____no_output_____" ] ], [ [ "DB_CREDENTIALS = None\n#DB_CREDENTIALS= {\"hostname\":\"\",\"username\":\"\",\"password\":\"\",\"database\":\"\",\"port\":\"\",\"ssl\":True,\"sslmode\":\"\",\"certificate_base64\":\"\"}", "_____no_output_____" ], [ "KEEP_MY_INTERNAL_POSTGRES = True", "_____no_output_____" ] ], [ [ "## Run the notebook\n\nAt this point, the notebook is ready to run. 
You can either run the cells one at a time, or click the **Kernel** option above and select **Restart and Run All** to run all the cells.", "_____no_output_____" ], [ "# Model building and deployment <a name=\"model\"></a>", "_____no_output_____" ], [ "In this section you will learn how to train Spark MLLib model and next deploy it as web-service using Watson Machine Learning service.", "_____no_output_____" ], [ "## Load the training data from github", "_____no_output_____" ] ], [ [ "!rm house_price_regression.csv\n!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/house_price/house_price_regression.csv", "_____no_output_____" ], [ "import pandas as pd\nimport numpy as np\npd_data = pd.read_csv(\"house_price_regression.csv\")\npd_data.head()", "_____no_output_____" ] ], [ [ "## Explore data", "_____no_output_____" ], [ "## Save training data to Cloud Object Storage", "_____no_output_____" ] ], [ [ "import ibm_boto3\nfrom ibm_botocore.client import Config, ClientError\n\ncos_client = ibm_boto3.resource(\"s3\",\n ibm_api_key_id=COS_API_KEY_ID,\n ibm_service_instance_id=COS_RESOURCE_CRN,\n ibm_auth_endpoint=\"https://iam.bluemix.net/oidc/token\",\n config=Config(signature_version=\"oauth\"),\n endpoint_url=COS_ENDPOINT\n)", "_____no_output_____" ], [ "with open(training_data_file_name, \"rb\") as file_data:\n cos_client.Object(BUCKET_NAME, training_data_file_name).upload_fileobj(\n Fileobj=file_data\n )", "_____no_output_____" ] ], [ [ "## Create a model", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split\nfrom sklearn.impute import SimpleImputer\n\npd_data.dropna(axis=0, subset=['SalePrice'], inplace=True)\nlabel = pd_data.SalePrice\nfeature_data = pd_data.drop(['SalePrice'], axis=1).select_dtypes(exclude=['object'])\ntrain_X, test_X, train_y, test_y = train_test_split(feature_data.values, label.values, test_size=0.25)\n\nmy_imputer = SimpleImputer(missing_values=np.nan, strategy='mean')\ntrain_X = my_imputer.fit_transform(train_X)\ntest_X = my_imputer.transform(test_X)", "_____no_output_____" ], [ "from xgboost import XGBRegressor\nfrom sklearn.compose import ColumnTransformer\n\nmodel=XGBRegressor()\nmodel.fit(train_X, train_y, eval_metric=['error'], \n eval_set=[(test_X, test_y)], verbose=False)", "_____no_output_____" ], [ "# make predictions\npredictions = model.predict(test_X)\nfrom sklearn.metrics import mean_absolute_error\nprint(\"Mean Absolute Error : \" + str(mean_absolute_error(predictions, test_y)))", "_____no_output_____" ] ], [ [ "### wrap xgboost with scikit pipeline", "_____no_output_____" ] ], [ [ "from sklearn.pipeline import Pipeline\nxgb_model_imputer = SimpleImputer(missing_values=np.nan, strategy='mean')\npipeline = Pipeline(steps=[('Imputer', xgb_model_imputer), ('xgb', model)])", "_____no_output_____" ], [ "model_xgb=pipeline.fit(train_X, train_y)", "_____no_output_____" ], [ "# make predictions\npredictions = model_xgb.predict(test_X)\nfrom sklearn.metrics import mean_absolute_error\nprint(\"Mean Absolute Error : \" + str(mean_absolute_error(predictions, test_y)))", "_____no_output_____" ] ], [ [ "## Publish the model", "_____no_output_____" ] ], [ [ "import json\nfrom ibm_watson_machine_learning import APIClient\n\nwml_client = APIClient(WML_CREDENTIALS)\nwml_client.version", "_____no_output_____" ] ], [ [ "### Listing all the available spaces", "_____no_output_____" ] ], [ [ "wml_client.spaces.list(limit=10)", "_____no_output_____" ], [ "WML_SPACE_ID='***' # use space id 
here\nwml_client.set.default_space(WML_SPACE_ID)", "_____no_output_____" ] ], [ [ "### Remove existing model and deployment", "_____no_output_____" ] ], [ [ "MODEL_NAME=\"house_price_xgbregression\"\nDEPLOYMENT_NAME=\"house_price_xgbregression_deployment\"", "_____no_output_____" ], [ "deployments_list = wml_client.deployments.get_details()\nfor deployment in deployments_list[\"resources\"]:\n model_id = deployment[\"entity\"][\"asset\"][\"id\"]\n deployment_id = deployment[\"metadata\"][\"id\"]\n if deployment[\"metadata\"][\"name\"] == DEPLOYMENT_NAME:\n print(\"Deleting deployment id\", deployment_id)\n wml_client.deployments.delete(deployment_id)\n print(\"Deleting model id\", model_id)\n wml_client.repository.delete(model_id)\nwml_client.repository.list_models()", "_____no_output_____" ], [ "training_data_references = [\n {\n \"id\": \"product line\",\n \"type\": \"s3\",\n \"connection\": {\n \"access_key_id\": COS_API_KEY_ID,\n \"endpoint_url\": COS_ENDPOINT,\n \"resource_instance_id\":COS_RESOURCE_CRN\n },\n \"location\": {\n \"bucket\": BUCKET_NAME,\n \"path\": training_data_file_name,\n }\n }\n ]", "_____no_output_____" ], [ "#Note if there is specification related exception or specification ID is None then use \"default_py3.8\" instead of \"default_py3.7_opence\"\nsoftware_spec_uid = wml_client.software_specifications.get_id_by_name(\"default_py3.7_opence\")\nprint(\"Software Specification ID: {}\".format(software_spec_uid))\nmodel_props = {\n wml_client._models.ConfigurationMetaNames.NAME:\"{}\".format(MODEL_NAME),\n wml_client._models.ConfigurationMetaNames.TYPE: \"scikit-learn_0.23\",\n wml_client._models.ConfigurationMetaNames.SOFTWARE_SPEC_UID: software_spec_uid,\n wml_client._models.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: training_data_references,\n wml_client._models.ConfigurationMetaNames.LABEL_FIELD: \"SalePrice\",\n }", "_____no_output_____" ], [ "print(\"Storing model ...\")\npublished_model_details = wml_client.repository.store_model(\n model=model_xgb, \n meta_props=model_props,\n training_data=feature_data, \n training_target=label\n)\n\nmodel_uid = wml_client.repository.get_model_uid(published_model_details)\nprint(\"Done\")\nprint(\"Model ID: {}\".format(model_uid))", "_____no_output_____" ] ], [ [ "## Deploy the model", "_____no_output_____" ], [ "The next section of the notebook deploys the model as a RESTful web service in Watson Machine Learning. 
The deployed model will have a scoring URL you can use to send data to the model for predictions.", "_____no_output_____" ] ], [ [ "deployment_details = wml_client.deployments.create(\n model_uid, \n meta_props={\n wml_client.deployments.ConfigurationMetaNames.NAME: \"{}\".format(DEPLOYMENT_NAME),\n wml_client.deployments.ConfigurationMetaNames.ONLINE: {}\n }\n)\nscoring_url = wml_client.deployments.get_scoring_href(deployment_details)\ndeployment_uid=wml_client.deployments.get_uid(deployment_details)\n\nprint(\"Scoring URL:\" + scoring_url)\nprint(\"Model id: {}\".format(model_uid))\nprint(\"Deployment id: {}\".format(deployment_uid))", "_____no_output_____" ] ], [ [ "## Sample scoring", "_____no_output_____" ] ], [ [ "fields = feature_data.columns.tolist()\nvalues = [\n test_X[0].tolist()\n ]\n\nscoring_payload = {\"input_data\": [{\"fields\": fields, \"values\": values}]}\nscoring_payload", "_____no_output_____" ], [ "scoring_response = wml_client.deployments.score(deployment_uid, scoring_payload)\nscoring_response", "_____no_output_____" ] ], [ [ "# Configure OpenScale <a name=\"openscale\"></a>", "_____no_output_____" ], [ "The notebook will now import the necessary libraries and set up a Python OpenScale client.", "_____no_output_____" ] ], [ [ "from ibm_cloud_sdk_core.authenticators import IAMAuthenticator,BearerTokenAuthenticator\n\nfrom ibm_watson_openscale import *\nfrom ibm_watson_openscale.supporting_classes.enums import *\nfrom ibm_watson_openscale.supporting_classes import *\n\n\nauthenticator = IAMAuthenticator(apikey=CLOUD_API_KEY)\nwos_client = APIClient(authenticator=authenticator)\nwos_client.version", "_____no_output_____" ] ], [ [ "## Create schema and datamart", "_____no_output_____" ], [ "### Set up datamart", "_____no_output_____" ], [ "Watson OpenScale uses a database to store payload logs and calculated metrics. If database credentials were **not** supplied above, the notebook will use the free, internal lite database. If database credentials were supplied, the datamart will be created there **unless** there is an existing datamart **and** the **KEEP_MY_INTERNAL_POSTGRES** variable is set to **True**. 
If an OpenScale datamart exists in Db2 or PostgreSQL, the existing datamart will be used and no data will be overwritten.\n\nPrior instances of the House price model will be removed from OpenScale monitoring.", "_____no_output_____" ] ], [ [ "wos_client.data_marts.show()", "_____no_output_____" ], [ "data_marts = wos_client.data_marts.list().result.data_marts\nif len(data_marts) == 0:\n if DB_CREDENTIALS is not None:\n if SCHEMA_NAME is None: \n print(\"Please specify the SCHEMA_NAME and rerun the cell\")\n\n print('Setting up external datamart')\n added_data_mart_result = wos_client.data_marts.add(\n background_mode=False,\n name=\"WOS Data Mart\",\n description=\"Data Mart created by WOS tutorial notebook\",\n database_configuration=DatabaseConfigurationRequest(\n database_type=DatabaseType.POSTGRESQL,\n credentials=PrimaryStorageCredentialsLong(\n hostname=DB_CREDENTIALS['hostname'],\n username=DB_CREDENTIALS['username'],\n password=DB_CREDENTIALS['password'],\n db=DB_CREDENTIALS['database'],\n port=DB_CREDENTIALS['port'],\n ssl=True,\n sslmode=DB_CREDENTIALS['sslmode'],\n certificate_base64=DB_CREDENTIALS['certificate_base64']\n ),\n location=LocationSchemaName(\n schema_name= SCHEMA_NAME\n )\n )\n ).result\n else:\n print('Setting up internal datamart')\n added_data_mart_result = wos_client.data_marts.add(\n background_mode=False,\n name=\"WOS Data Mart\",\n description=\"Data Mart created by WOS tutorial notebook\", \n internal_database = True).result\n \n data_mart_id = added_data_mart_result.metadata.id\n \nelse:\n data_mart_id=data_marts[0].metadata.id\n print('Using existing datamart {}'.format(data_mart_id))\n ", "_____no_output_____" ] ], [ [ "### Remove existing service provider connected with used WML instance. ", "_____no_output_____" ], [ "Multiple service providers for the same engine instance are avaiable in Watson OpenScale. To avoid multiple service providers of used WML instance in the tutorial notebook the following code deletes existing service provder(s) and then adds new one. ", "_____no_output_____" ] ], [ [ "SERVICE_PROVIDER_NAME = \"xgboost_WML V2\"\nSERVICE_PROVIDER_DESCRIPTION = \"Added by tutorial WOS notebook.\"", "_____no_output_____" ], [ "service_providers = wos_client.service_providers.list().result.service_providers\nfor service_provider in service_providers:\n service_instance_name = service_provider.entity.name\n if service_instance_name == SERVICE_PROVIDER_NAME:\n service_provider_id = service_provider.metadata.id\n wos_client.service_providers.delete(service_provider_id)\n print(\"Deleted existing service_provider for WML instance: {}\".format(service_provider_id))", "_____no_output_____" ] ], [ [ "## Add service provider", "_____no_output_____" ], [ "Watson OpenScale needs to be bound to the Watson Machine Learning instance to capture payload data into and out of the model.", "_____no_output_____" ], [ "**Note:** You can bind more than one engine instance if needed by calling `wos_client.service_providers.add` method. 
Next, you can refer to particular service provider using `service_provider_id`.", "_____no_output_____" ] ], [ [ "added_service_provider_result = wos_client.service_providers.add(\n name=SERVICE_PROVIDER_NAME,\n description=SERVICE_PROVIDER_DESCRIPTION,\n service_type=ServiceTypes.WATSON_MACHINE_LEARNING,\n deployment_space_id = WML_SPACE_ID,\n operational_space_id = \"production\",\n credentials=WMLCredentialsCloud(\n apikey=CLOUD_API_KEY, ## use `apikey=IAM_TOKEN` if using IAM_TOKEN to initiate client\n url=WML_CREDENTIALS[\"url\"],\n instance_id=None\n ),\n background_mode=False\n ).result\nservice_provider_id = added_service_provider_result.metadata.id", "_____no_output_____" ], [ "wos_client.service_providers.show()", "_____no_output_____" ], [ "asset_deployment_details = wos_client.service_providers.list_assets(data_mart_id=data_mart_id, service_provider_id=service_provider_id,deployment_id=deployment_uid, deployment_space_id = WML_SPACE_ID).result['resources'][0]\nasset_deployment_details", "_____no_output_____" ], [ "model_asset_details_from_deployment=wos_client.service_providers.get_deployment_asset(data_mart_id=data_mart_id,service_provider_id=service_provider_id,deployment_id=deployment_uid,deployment_space_id=WML_SPACE_ID)\nmodel_asset_details_from_deployment", "_____no_output_____" ] ], [ [ "## Subscriptions", "_____no_output_____" ], [ "### Remove existing House price model subscriptions", "_____no_output_____" ], [ "This code removes previous subscriptions to the House price model to refresh the monitors with the new model and new data.", "_____no_output_____" ] ], [ [ "wos_client.subscriptions.show()", "_____no_output_____" ] ], [ [ "This code removes previous subscriptions to the House price model to refresh the monitors with the new model and new data.", "_____no_output_____" ] ], [ [ "subscriptions = wos_client.subscriptions.list().result.subscriptions\nfor subscription in subscriptions:\n sub_model_id = subscription.entity.asset.asset_id\n if sub_model_id == model_uid:\n wos_client.subscriptions.delete(subscription.metadata.id)\n print('Deleted existing subscription for model', sub_model_id)", "_____no_output_____" ] ], [ [ "This code creates the model subscription in OpenScale using the Python client API. Note that we need to provide the model unique identifier, and some information about the model itself.", "_____no_output_____" ], [ "### This code creates the model subscription in OpenScale using the Python client API. 
Note that we need to provide the model unique identifier, and some information about the model itself.", "_____no_output_____" ] ], [ [ "feature_cols=feature_data.columns.tolist()\n#categorical_cols=X.select_dtypes(include=['object']).columns", "_____no_output_____" ], [ "from ibm_watson_openscale.base_classes.watson_open_scale_v2 import ScoringEndpointRequest", "_____no_output_____" ], [ "subscription_details = wos_client.subscriptions.add(\n data_mart_id=data_mart_id,\n service_provider_id=service_provider_id,\n asset=Asset(\n asset_id=model_asset_details_from_deployment[\"entity\"][\"asset\"][\"asset_id\"],\n name=model_asset_details_from_deployment[\"entity\"][\"asset\"][\"name\"],\n url=model_asset_details_from_deployment[\"entity\"][\"asset\"][\"url\"],\n asset_type=AssetTypes.MODEL,\n input_data_type=InputDataType.STRUCTURED,\n problem_type=ProblemType.REGRESSION\n ),\n deployment=AssetDeploymentRequest(\n deployment_id=asset_deployment_details['metadata']['guid'],\n name=asset_deployment_details['entity']['name'],\n deployment_type= DeploymentTypes.ONLINE,\n url=asset_deployment_details['metadata']['url'],\n scoring_endpoint=ScoringEndpointRequest(url=scoring_url) # scoring model without shadow deployment\n ),\n asset_properties=AssetPropertiesRequest(\n label_column='SalePrice',\n prediction_field='prediction',\n feature_fields = feature_cols,\n #categorical_fields = categorical_cols,\n training_data_reference=TrainingDataReference(type='cos',\n location=COSTrainingDataReferenceLocation(bucket = BUCKET_NAME,\n file_name = training_data_file_name),\n connection=COSTrainingDataReferenceConnection.from_dict({\n \"resource_instance_id\": COS_RESOURCE_CRN,\n \"url\": COS_ENDPOINT,\n \"api_key\": COS_API_KEY_ID,\n \"iam_url\": IAM_URL}))\n ),background_mode = False\n ).result\nsubscription_id = subscription_details.metadata.id\nsubscription_id", "_____no_output_____" ], [ "import time\n\ntime.sleep(5)\npayload_data_set_id = None\npayload_data_set_id = wos_client.data_sets.list(type=DataSetTypes.PAYLOAD_LOGGING, \n target_target_id=subscription_id, \n target_target_type=TargetTypes.SUBSCRIPTION).result.data_sets[0].metadata.id\nif payload_data_set_id is None:\n print(\"Payload data set not found. 
Please check subscription status.\")\nelse:\n print(\"Payload data set id: \", payload_data_set_id)", "_____no_output_____" ], [ "wos_client.data_sets.show()", "_____no_output_____" ] ], [ [ "Get subscription list", "_____no_output_____" ] ], [ [ "wos_client.subscriptions.show()", "_____no_output_____" ] ], [ [ "### Score the model so we can configure monitors", "_____no_output_____" ] ], [ [ "import random\n\n\nfields = feature_data.columns.tolist()\nvalues = random.sample(test_X.tolist(), 2)\n \nscoring_payload = {\"input_data\": [{\"fields\": fields, \"values\": values}]}\npredictions = wml_client.deployments.score(deployment_uid, scoring_payload)\n\nprint(\"Single record scoring result:\", \"\\n fields:\", predictions[\"predictions\"][0][\"fields\"], \"\\n values: \", predictions[\"predictions\"][0][\"values\"][0])", "_____no_output_____" ] ], [ [ "## Check if WML payload logging worked else manually store payload records", "_____no_output_____" ] ], [ [ "import uuid\nfrom ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord\ntime.sleep(5)\npl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)\nprint(\"Number of records in the payload logging table: {}\".format(pl_records_count))\nif pl_records_count == 0:\n print(\"Payload logging did not happen, performing explicit payload logging.\")\n wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord(\n scoring_id=str(uuid.uuid4()),\n request=scoring_payload,\n response={\"fields\": predictions['predictions'][0]['fields'], \"values\":predictions['predictions'][0]['values']},\n response_time=460\n )],background_mode=False)\n time.sleep(5)\n pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)\n print(\"Number of records in the payload logging table: {}\".format(pl_records_count))", "_____no_output_____" ], [ "wos_client.data_sets.show_records(payload_data_set_id)", "_____no_output_____" ] ], [ [ "# Quality monitoring and feedback logging <a name=\"quality\"></a>", "_____no_output_____" ], [ "## Enable quality monitoring", "_____no_output_____" ] ], [ [ "import time\n\ntime.sleep(10)\ntarget = Target(\n target_type=TargetTypes.SUBSCRIPTION,\n target_id=subscription_id\n)\nparameters = {\n \"min_feedback_data_size\": 50\n}\nquality_monitor_details = wos_client.monitor_instances.create(\n data_mart_id=data_mart_id,\n background_mode=False,\n monitor_definition_id=wos_client.monitor_definitions.MONITORS.QUALITY.ID,\n target=target,\n parameters=parameters\n).result", "_____no_output_____" ], [ "quality_monitor_instance_id = quality_monitor_details.metadata.id\nquality_monitor_instance_id", "_____no_output_____" ] ], [ [ "## Feedback logging", "_____no_output_____" ], [ "The code below downloads and stores enough feedback data to meet the minimum threshold so that OpenScale can calculate a new accuracy measurement. It then kicks off the accuracy monitor. The monitors run hourly, or can be initiated via the Python API, the REST API, or the graphical user interface.", "_____no_output_____" ], [ "### Get feedback logging dataset ID", "_____no_output_____" ] ], [ [ "feedback_dataset_id = None\nfeedback_dataset = wos_client.data_sets.list(type=DataSetTypes.FEEDBACK, \n target_target_id=subscription_id, \n target_target_type=TargetTypes.SUBSCRIPTION).result\nprint(feedback_dataset)\nfeedback_dataset_id = feedback_dataset.data_sets[0].metadata.id\nif feedback_dataset_id is None:\n print(\"Feedback data set not found. 
Please check quality monitor status.\")", "_____no_output_____" ], [ "!rm custom_feedback_50_regression.json\n!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/house_price/custom_feedback_50_regression.json", "_____no_output_____" ], [ "with open ('custom_feedback_50_regression.json')as file:\n feedback_data=json.load(file)", "_____no_output_____" ], [ "wos_client.data_sets.store_records(feedback_dataset_id, request_body=feedback_data, background_mode=False)", "_____no_output_____" ], [ "wos_client.data_sets.get_records_count(data_set_id=feedback_dataset_id)", "_____no_output_____" ], [ "run_details = wos_client.monitor_instances.run(monitor_instance_id=quality_monitor_instance_id, background_mode=False).result", "_____no_output_____" ], [ "wos_client.monitor_instances.show_metrics(monitor_instance_id=quality_monitor_instance_id)", "_____no_output_____" ] ], [ [ "# Fairness, drift monitoring and explanations <a name=\"fairness\"></a>", "_____no_output_____" ], [ "### Fairness configuration\n\nThe code below configures fairness monitoring for our model. It turns on monitoring for one features, MSSubClass. In each case, we must specify:\n\n * Which model feature to monitor\n * One or more **majority** groups, which are values of that feature that we expect to receive a higher percentage of favorable outcomes\n * One or more **minority** groups, which are values of that feature that we expect to receive a higher percentage of unfavorable outcomes\n * The threshold at which we would like OpenScale to display an alert if the fairness measurement falls below (in this case, 80%)\n\nAdditionally, we must specify which outcomes from the model are favourable outcomes, and which are unfavourable. We must also provide the number of records OpenScale will use to calculate the fairness score. In this case, OpenScale's fairness monitor will run hourly, but will not calculate a new fairness rating until at least 50 records have been added. 
Finally, to calculate fairness, OpenScale must perform some calculations on the training data, so we provide the dataframe containing the data.", "_____no_output_____" ] ], [ [ "wos_client.monitor_instances.show()", "_____no_output_____" ], [ "#wos_client.monitor_instances.delete(drift_monitor_instance_id,background_mode=False)", "_____no_output_____" ], [ "target = Target(\n target_type=TargetTypes.SUBSCRIPTION,\n target_id=subscription_id\n\n)\nparameters = {\n \"features\": [\n {\n \"feature\": \"MSSubClass\",\n \"majority\": [[50,70]],\n \"threshold\": 0.8,\n \"minority\": [[80,100]]\n }\n ],\n \"favourable_class\": [[200000,500000]],\n \"unfavourable_class\": [[35000,100000]],\n \"min_records\": 50\n}\n\nfairness_monitor_details = wos_client.monitor_instances.create(\n data_mart_id=data_mart_id,\n background_mode=False,\n monitor_definition_id=wos_client.monitor_definitions.MONITORS.FAIRNESS.ID,\n target=target,\n parameters=parameters).result\nfairness_monitor_instance_id =fairness_monitor_details.metadata.id\nfairness_monitor_instance_id", "_____no_output_____" ] ], [ [ "### Drift configuration", "_____no_output_____" ], [ "#### Note: you can choose to enable/disable (True or False) model or data drift within config", "_____no_output_____" ] ], [ [ "monitor_instances = wos_client.monitor_instances.list().result.monitor_instances\nfor monitor_instance in monitor_instances:\n monitor_def_id=monitor_instance.entity.monitor_definition_id\n if monitor_def_id == \"drift\" and monitor_instance.entity.target.target_id == subscription_id:\n wos_client.monitor_instances.delete(monitor_instance.metadata.id)\n print('Deleted existing drift monitor instance with id: ', monitor_instance.metadata.id)", "_____no_output_____" ], [ "target = Target(\n target_type=TargetTypes.SUBSCRIPTION,\n target_id=subscription_id\n\n)\nparameters = {\n \"min_samples\": 50,\n \"drift_threshold\": 0.1,\n \"train_drift_model\": True,\n \"enable_model_drift\": True,\n \"enable_data_drift\": True\n}\n\ndrift_monitor_details = wos_client.monitor_instances.create(\n data_mart_id=data_mart_id,\n background_mode=False,\n monitor_definition_id=wos_client.monitor_definitions.MONITORS.DRIFT.ID,\n target=target,\n parameters=parameters\n).result\n\ndrift_monitor_instance_id = drift_monitor_details.metadata.id\ndrift_monitor_instance_id", "_____no_output_____" ] ], [ [ "## Score the model again now that monitoring is configured", "_____no_output_____" ], [ "This next section randomly selects 200 records from the data feed and sends those records to the model for predictions. 
This is enough to exceed the minimum threshold for records set in the previous section, which allows OpenScale to begin calculating fairness.", "_____no_output_____" ] ], [ [ "!wget https://raw.githubusercontent.com/IBM/watson-openscale-samples/main/IBM%20Cloud/WML/assets/data/house_price/custom_scoring_payloads_50_regression.json", "_____no_output_____" ], [ "with open('custom_scoring_payloads_50_regression.json', 'r') as scoring_file:\n scoring_data = json.load(scoring_file)", "_____no_output_____" ], [ "import random\n\nwith open('custom_scoring_payloads_50_regression.json', 'r') as scoring_file:\n scoring_data = json.load(scoring_file)\n\nfields = scoring_data[0]['request']['fields']\nvalues = scoring_data[0]['request']['values']\npayload_scoring = {\"input_data\": [{\"fields\": fields, \"values\": values}]}\n\nscoring_response = wml_client.deployments.score(deployment_uid, payload_scoring)\ntime.sleep(5)\n\npl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)\nif pl_records_count == 2:\n print(\"Payload logging did not happen, performing explicit payload logging.\")\n wos_client.data_sets.store_records(data_set_id=payload_data_set_id, request_body=[PayloadRecord(\n scoring_id=str(uuid.uuid4()),\n request=payload_scoring,\n response={\"fields\": scoring_response['predictions'][0]['fields'], \"values\":scoring_response['predictions'][0]['values']},\n response_time=460\n )])\n time.sleep(5)\n pl_records_count = wos_client.data_sets.get_records_count(payload_data_set_id)\n print(\"Number of records in the payload logging table: {}\".format(pl_records_count))", "_____no_output_____" ], [ "print('Number of records in payload table: ', wos_client.data_sets.get_records_count(data_set_id=payload_data_set_id))", "_____no_output_____" ] ], [ [ "## Run fairness monitor", "_____no_output_____" ], [ "Kick off a fairness monitor run on current data. The monitor runs hourly, but can be manually initiated using the Python client, the REST API, or the graphical user interface.", "_____no_output_____" ] ], [ [ "run_details = wos_client.monitor_instances.run(monitor_instance_id=fairness_monitor_instance_id, background_mode=False)", "_____no_output_____" ], [ "time.sleep(10)\n\nwos_client.monitor_instances.show_metrics(monitor_instance_id=fairness_monitor_instance_id)", "_____no_output_____" ] ], [ [ "## Run drift monitor\n\n\nKick off a drift monitor run on current data. 
The monitor runs every hour, but can be manually initiated using the Python client, the REST API.", "_____no_output_____" ] ], [ [ "drift_run_details = wos_client.monitor_instances.run(monitor_instance_id=drift_monitor_instance_id, background_mode=False)", "_____no_output_____" ], [ "time.sleep(5)\n\nwos_client.monitor_instances.show_metrics(monitor_instance_id=drift_monitor_instance_id)", "_____no_output_____" ] ], [ [ "## Configure Explainability", "_____no_output_____" ], [ "Finally, we provide OpenScale with the training data to enable and configure the explainability features.", "_____no_output_____" ] ], [ [ "target = Target(\n target_type=TargetTypes.SUBSCRIPTION,\n target_id=subscription_id\n)\nparameters = {\n \"enabled\": True\n}\nexplainability_details = wos_client.monitor_instances.create(\n data_mart_id=data_mart_id,\n background_mode=False,\n monitor_definition_id=wos_client.monitor_definitions.MONITORS.EXPLAINABILITY.ID,\n target=target,\n parameters=parameters\n).result\n\nexplainability_monitor_id = explainability_details.metadata.id", "_____no_output_____" ] ], [ [ "## Run explanation for sample record", "_____no_output_____" ] ], [ [ "pl_records_resp = wos_client.data_sets.get_list_of_records(data_set_id=payload_data_set_id, limit=1, offset=0).result\nscoring_ids = [pl_records_resp[\"records\"][0][\"entity\"][\"values\"][\"scoring_id\"]]\nprint(\"Running explanations on scoring IDs: {}\".format(scoring_ids))\nexplanation_types = [\"lime\", \"contrastive\"]\nresult = wos_client.monitor_instances.explanation_tasks(scoring_ids=scoring_ids, explanation_types=explanation_types).result\nprint(result)", "_____no_output_____" ], [ "explanation_task_id=result.to_dict()['metadata']['explanation_task_ids'][0]\nexplanation_task_id", "_____no_output_____" ], [ "wos_client.monitor_instances.get_explanation_tasks(explanation_task_id=explanation_task_id).result.to_dict()", "_____no_output_____" ] ], [ [ "## Additional data to help debugging", "_____no_output_____" ] ], [ [ "print('Datamart:', data_mart_id)\nprint('Model:', model_uid)\nprint('Deployment:', deployment_uid)", "_____no_output_____" ] ], [ [ "## Identify transactions for Explainability", "_____no_output_____" ], [ "Transaction IDs identified by the cells below can be copied and pasted into the Explainability tab of the OpenScale dashboard.", "_____no_output_____" ] ], [ [ "wos_client.data_sets.show_records(payload_data_set_id, limit=5)", "_____no_output_____" ] ], [ [ "## Congratulations!\n\nYou have finished the hands-on lab for IBM Watson OpenScale. You can now view the [OpenScale Dashboard](https://aiopenscale.cloud.ibm.com/). Click on the tile for the House Price Regression model to see fairness, accuracy, and performance monitors. Click on the timeseries graph to get detailed information on transactions during a specific time window.\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
d024cc41eebc3782f7c9c834bd33a23def6d3b81
50,794
ipynb
Jupyter Notebook
outlier_detection/training_outlier_detection.ipynb
felix-exel/kfserving-advanced
f75c5759c2ab1a5b0fba0ac0fda59f4e9062dfec
[ "MIT" ]
5
2021-02-11T07:36:45.000Z
2022-03-15T09:35:13.000Z
outlier_detection/training_outlier_detection.ipynb
felix-exel/kfserving-advanced
f75c5759c2ab1a5b0fba0ac0fda59f4e9062dfec
[ "MIT" ]
null
null
null
outlier_detection/training_outlier_detection.ipynb
felix-exel/kfserving-advanced
f75c5759c2ab1a5b0fba0ac0fda59f4e9062dfec
[ "MIT" ]
3
2021-03-02T11:35:39.000Z
2022-02-23T04:06:39.000Z
40.216944
337
0.493424
[ [ [ "# This Notebook uses a Session Event Dataset from E-Commerce Website (https://www.kaggle.com/mkechinov/ecommerce-behavior-data-from-multi-category-store and https://rees46.com/) to build an Outlier Detection based on an Autoencoder.", "_____no_output_____" ] ], [ [ "import mlflow\nimport numpy as np\nimport os\nimport shutil\nimport pandas as pd\nimport tensorflow as tf\nimport tensorflow.keras as keras\nimport tensorflow_hub as hub\nfrom itertools import product\n\n# enable gpu growth if gpu is available\ngpu_devices = tf.config.experimental.list_physical_devices('GPU')\nfor device in gpu_devices:\n tf.config.experimental.set_memory_growth(device, True)\n\n# tf.keras.mixed_precision.set_global_policy('mixed_float16')\n\ntf.config.optimizer.set_jit(True)\n\n%load_ext watermark\n%watermark -v -iv", "INFO:tensorflow:Mixed precision compatibility check (mixed_float16): OK\nYour GPU will likely run quickly with dtype policy mixed_float16 as it has compute capability of at least 7.0. Your GPU: GeForce RTX 2070 SUPER, compute capability 7.5\nnumpy 1.19.4\nmlflow 1.14.1\ntensorflow 2.4.0\ntensorflow_hub 0.11.0\npandas 1.1.5\ntensorflow.keras 2.4.0\nCPython 3.6.9\nIPython 7.16.1\n" ] ], [ [ "## Setting Registry and Tracking URI for MLflow", "_____no_output_____" ] ], [ [ "# Use this registry uri when mlflow is created by docker container with a mysql db backend\n#registry_uri = os.path.expandvars('mysql+pymysql://${MYSQL_USER}:${MYSQL_PASSWORD}@localhost:3306/${MYSQL_DATABASE}')\n\n# Use this registry uri when mlflow is running locally by the command:\n# \"mlflow server --backend-store-uri sqlite:///mlflow.db --default-artifact-root ./mlruns --host 0.0.0.0\"\nregistry_uri = 'sqlite:///mlflow.db'\n\ntracking_uri = 'http://localhost:5000'\n\nmlflow.tracking.set_registry_uri(registry_uri)\nmlflow.tracking.set_tracking_uri(tracking_uri)", "_____no_output_____" ] ], [ [ "# The Data is taken from https://www.kaggle.com/mkechinov/ecommerce-behavior-data-from-multi-category-store and https://rees46.com/\n## Each record/line in the file has the following fields:\n1. event_time: When did the event happened (UTC)\n2. event_type: Event type: one of [view, cart, remove_from_cart, purchase] \n3. product_id\n4. category_id\n5. category_code: Category meaningful name (if present)\n6. brand: Brand name in lower case (if present)\n7. price\n8. user_id: Permanent user ID\n9. 
user_session: User session ID", "_____no_output_____" ] ], [ [ "# Read first 500.000 Rows\nfor chunk in pd.read_table(\"2019-Dec.csv\",\n sep=\",\", header=0,\n infer_datetime_format=True, low_memory=False, chunksize=500000):\n # Filter out other event types than 'view'\n chunk = chunk[chunk['event_type'] == 'view']\n # Filter out missing 'category_code' rows\n chunk = chunk[chunk['category_code'].isna() == False]\n chunk.reset_index(drop=True, inplace=True)\n\n # Filter out all Sessions of length 1\n count_sessions = chunk.groupby('user_session').count()\n window_length = count_sessions.max()[0]\n unique_sessions = [count_sessions.index[i] for i in range(\n count_sessions.shape[0]) if count_sessions.iloc[i, 0] == 1]\n chunk = chunk[~chunk['user_session'].isin(unique_sessions)]\n chunk.reset_index(drop=True, inplace=True)\n\n # Text embedding based on https://tfhub.dev/google/nnlm-en-dim50/2\n last_category = []\n for i, el in enumerate(chunk['category_code']):\n last_category.append(el.split('.')[-1])\n chunk['Product'] = last_category\n embed = hub.load(\"https://tfhub.dev/google/nnlm-en-dim50/2\")\n embeddings = embed(chunk['Product'].tolist())\n for dim in range(embeddings.shape[1]):\n chunk['embedding_'+str(dim)] = embeddings[:, dim]\n\n # Standardization\n mean = chunk['price'].mean(axis=0)\n print('Mean:', mean)\n std = chunk['price'].std(axis=0)\n print('Std:', std)\n chunk['price_standardized'] = (chunk['price'] - mean) / std\n\n chunk.sort_values(by=['user_session', 'event_time'], inplace=True)\n chunk['price_standardized'] = chunk['price_standardized'].astype('float32')\n chunk['product_id'] = chunk['product_id'].astype('int32')\n chunk.reset_index(drop=True, inplace=True)\n\n print('Sessions:', pd.unique(chunk['user_session']).shape)\n print('Unique Products:', pd.unique(chunk['product_id']).shape)\n print('Unique category_code:', pd.unique(chunk['category_code']).shape)\n\n columns = ['embedding_'+str(i) for i in range(embeddings.shape[1])]\n columns.append('price_standardized')\n columns.append('user_session')\n columns.append('Product')\n columns.append('product_id')\n columns.append('category_code')\n\n df = chunk[columns]\n break\ndf", "Mean: 284.7710546866538\nStd: 349.46740231584886\nSessions: (61296,)\nUnique Products: (38515,)\nUnique category_code: (134,)\n" ] ], [ [ "## Delete Rows with equal or less than 6 Product Occurrences", "_____no_output_____" ] ], [ [ "count_product_id_mapped = df.groupby('product_id').count()\nproducts_to_delete = count_product_id_mapped.loc[count_product_id_mapped['embedding_0'] <= 6].index\nproducts_to_delete", "_____no_output_____" ] ], [ [ "## Slice Sessions from the Dataframe", "_____no_output_____" ] ], [ [ "list_sessions = []\nlist_last_clicked = []\nlist_last_clicked_temp = []\ncurrent_id = df.loc[0, 'user_session']\ncurrent_index = 0\n\ncolumns = ['embedding_'+str(i) for i in range(embeddings.shape[1])]\ncolumns.append('price_standardized')\ncolumns.insert(0, 'product_id')\n\nfor i in range(df.shape[0]):\n if df.loc[i, 'user_session'] != current_id:\n list_sessions.append(df.loc[current_index:i-2, columns])\n list_last_clicked.append(df.loc[i-1, 'product_id'])\n list_last_clicked_temp.append(df.loc[i-1, columns])\n current_id = df.loc[i, 'user_session']\n current_index = i", "_____no_output_____" ] ], [ [ "## Delete Sessions with Length larger than 30", "_____no_output_____" ] ], [ [ "print(len(list_sessions))\nlist_sessions_filtered = []\nlist_last_clicked_filtered = []\nlist_last_clicked_temp_filtered = []\n\nfor index, session in 
enumerate(list_sessions):\n if not (session.shape[0] > 30):\n if not (session['product_id'].isin(products_to_delete).any()):\n list_sessions_filtered.append(session)\n list_last_clicked_filtered.append(list_last_clicked[index])\n list_last_clicked_temp_filtered.append(list_last_clicked_temp[index])\n \nlen(list_sessions_filtered)", "61295\n" ] ], [ [ "## Slice Sessions if label and last product from session is the same\nExample:\n- From: session: [ 1506 1506 11410 11410 2826 2826], ground truth: 2826\n- To: session: [ 1506 1506 11410 11410], ground truth: 2826", "_____no_output_____" ] ], [ [ "print(\"Length before\", len(list_sessions_filtered))\nlist_sessions_processed = []\nlist_last_clicked_processed = []\nlist_session_processed_autoencoder = []\n\nfor i, session in enumerate(list_sessions_filtered):\n if session['product_id'].values[-1] == list_last_clicked_filtered[i]:\n mask = session['product_id'].values == list_last_clicked_filtered[i]\n if session[~mask].shape[0] > 0:\n list_sessions_processed.append(session[~mask])\n list_last_clicked_processed.append(list_last_clicked_filtered[i])\n list_session_processed_autoencoder.append(pd.concat([session[~mask], pd.DataFrame(list_last_clicked_temp_filtered[i]).T],\n ignore_index=True))\n else:\n list_sessions_processed.append(session)\n list_last_clicked_processed.append(list_last_clicked_filtered[i])\n list_session_processed_autoencoder.append(pd.concat([session, pd.DataFrame(list_last_clicked_temp_filtered[i]).T],\n ignore_index=True))\n\nprint(\"Length after\", len(list_sessions_processed))", "Length before 44551\nLength after 30941\n" ] ], [ [ "## Create Item IDs starting from value 1 for Embeddings and One Hot Layer", "_____no_output_____" ] ], [ [ "mapping = pd.read_csv('../ID_Mapping.csv')[['Item_ID', 'Mapped_ID']]\ndict_items = mapping.set_index('Item_ID').to_dict()['Mapped_ID']\n\nfor index, session in enumerate(list_session_processed_autoencoder):\n session['product_id'] = session['product_id'].map(dict_items)", "_____no_output_____" ], [ "# Pad all Sessions with 0. 
Embedding Layer and LSTM will use Masking to ignore zeros.\nlist_sessions_padded = []\nwindow_length = 31\n\nfor df in list_session_processed_autoencoder:\n np_array = df.values\n result = np.zeros((window_length, 1), dtype=np.float32)\n\n result[:np_array.shape[0],:1] = np_array[:,:1]\n list_sessions_padded.append(result)\n\n\n# Save the results, because the slicing can take some time\nnp.save('list_sessions_padded_autoencoder.npy', list_sessions_padded)\n\nsessions_padded = np.array(list_sessions_padded)\n\nn_output_features = int(sessions_padded.max())\nn_unique_input_ids = int(sessions_padded.max())\nwindow_length = sessions_padded.shape[1]\nn_input_features = sessions_padded.shape[2]\nprint(\"n_output_features\", n_output_features)\nprint(\"n_unique_input_ids\", n_unique_input_ids)\nprint(\"window_length\", window_length)\nprint(\"n_input_features\", n_input_features)", "n_output_features 9494\nn_unique_input_ids 9494\nwindow_length 31\nn_input_features 1\n" ] ], [ [ "# Training: Start here if the preprocessing was already executed", "_____no_output_____" ] ], [ [ "sessions_padded = np.load('list_sessions_padded_autoencoder.npy')\nprint(sessions_padded.shape)\nn_output_features = int(sessions_padded.max())\nn_unique_input_ids = int(sessions_padded.max())\nwindow_length = sessions_padded.shape[1]\nn_input_features = sessions_padded.shape[2]", "(30941, 31, 1)\n" ] ], [ [ "## Grid Search Hyperparameter\nDictionary with different hyperparameters to train on.\nMLflow will track those in a database.", "_____no_output_____" ] ], [ [ "grid_search_dic = {'hidden_layer_size': [300],\n 'batch_size': [32],\n 'embedding_dim': [200],\n 'window_length': [window_length],\n 'dropout_fc': [0.0], #0.2\n 'n_output_features': [n_output_features],\n 'n_input_features': [n_input_features]}\n\n# Cartesian product\ngrid_search_param = [dict(zip(grid_search_dic, v)) for v in product(*grid_search_dic.values())]\ngrid_search_param", "_____no_output_____" ] ], [ [ "### LSTM Autoencoder in functional API\n- Input: x rows (time steps) of Item IDs in a Session\n- Output: reconstructed Session", "_____no_output_____" ] ], [ [ "def build_autoencoder(window_length=50,\n units_lstm_layer=100,\n n_unique_input_ids=0,\n embedding_dim=200,\n n_input_features=1,\n n_output_features=3,\n dropout_rate=0.1):\n\n inputs = keras.layers.Input(\n shape=[window_length, n_input_features], dtype=np.float32)\n\n # Encoder\n # Embedding Layer\n embedding_layer = tf.keras.layers.Embedding(\n n_unique_input_ids+1, embedding_dim, input_length=window_length) # , mask_zero=True)\n embeddings = embedding_layer(inputs[:, :, 0])\n\n mask = inputs[:, :, 0] != 0\n\n # LSTM Layer 1\n lstm1_output, lstm1_state_h, lstm1_state_c = keras.layers.LSTM(units=units_lstm_layer, return_state=True,\n return_sequences=True)(embeddings, mask=mask)\n lstm1_state = [lstm1_state_h, lstm1_state_c]\n\n # Decoder\n # input: lstm1_state_c, lstm1_state_h\n decoder_state_c = lstm1_state_c\n decoder_state_h = lstm1_state_h\n decoder_outputs = tf.expand_dims(lstm1_state_h, 1)\n\n list_states = []\n decoder_layer = keras.layers.LSTM(\n units=units_lstm_layer, return_state=True, return_sequences=True, unroll=False)\n for i in range(window_length):\n decoder_outputs, decoder_state_h, decoder_state_c = decoder_layer(decoder_outputs,\n initial_state=[decoder_state_h,\n decoder_state_c])\n list_states.append(decoder_state_h)\n stacked = tf.stack(list_states, axis=1)\n\n fc_layer = tf.keras.layers.Dense(\n n_output_features+1, kernel_initializer='he_normal')\n\n 
fc_layer_output = tf.keras.layers.TimeDistributed(fc_layer)(\n stacked, mask=mask)\n\n mask_softmax = tf.tile(tf.expand_dims(mask, axis=2),\n [1, 1, n_output_features+1])\n\n softmax = tf.keras.layers.Softmax(axis=2, dtype=tf.float32)(\n fc_layer_output, mask=mask_softmax)\n\n model = keras.models.Model(inputs=[inputs],\n outputs=[softmax])\n return model", "_____no_output_____" ] ], [ [ "### Convert Numpy Array to tf.data.Dataset for better training performance\nThe function will return a zipped tf.data.Dataset with the following Shapes:\n- x: (batches, window_length)\n- y: (batches,)", "_____no_output_____" ] ], [ [ "def array_to_tf_data_api(train_data_x, train_data_y, batch_size=64, window_length=50,\n validate=False):\n \"\"\"Applies sliding window on the fly by using the TF Data API.\n Args:\n train_data_x: Input Data as Numpy Array, Shape (rows, n_features)\n batch_size: Batch Size.\n window_length: Window Length or Window Size.\n future_length: Number of time steps that will be predicted in the future.\n n_output_features: Number of features that will be predicted.\n validate: True if input data is a validation set and does not need to be shuffled\n shift: Shifts the Sliding Window by this Parameter.\n Returns:\n tf.data.Dataset\n \"\"\"\n\n X = tf.data.Dataset.from_tensor_slices(train_data_x)\n y = tf.data.Dataset.from_tensor_slices(train_data_y)\n\n if not validate:\n train_tf_data = tf.data.Dataset.zip((X, y)).cache() \\\n .shuffle(buffer_size=200000, reshuffle_each_iteration=True)\\\n .batch(batch_size).prefetch(1)\n return train_tf_data\n else:\n return tf.data.Dataset.zip((X, y)).batch(batch_size)\\\n .prefetch(1)", "_____no_output_____" ] ], [ [ "## Custom TF Callback to log Metrics by MLflow", "_____no_output_____" ] ], [ [ "class MlflowLogging(tf.keras.callbacks.Callback):\n def __init__(self, **kwargs):\n super().__init__() # handles base args (e.g., dtype)\n\n def on_epoch_end(self, epoch, logs=None):\n keys = list(logs.keys())\n for key in keys:\n mlflow.log_metric(str(key), logs.get(key), step=epoch)", "_____no_output_____" ], [ "class CustomCategoricalCrossentropy(keras.losses.Loss):\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n self.bce = tf.keras.losses.SparseCategoricalCrossentropy(\n from_logits=False, reduction='sum')\n\n @tf.function\n def call(self, y_true, y_pred):\n total = 0.0\n for i in tf.range(y_pred.shape[1]):\n loss = self.bce(y_true[:, i, 0], y_pred[:, i, :])\n total = total + loss\n return total\n\n def get_config(self):\n base_config = super().get_config()\n return {**base_config}\n\n def from_config(cls, config):\n return cls(**config)", "_____no_output_____" ], [ "class CategoricalAccuracy(keras.metrics.Metric):\n def __init__(self, name=\"categorical_accuracy\", **kwargs):\n super(CategoricalAccuracy, self).__init__(name=name, **kwargs)\n self.true = self.add_weight(name=\"true\", initializer=\"zeros\")\n self.count = self.add_weight(name=\"count\", initializer=\"zeros\")\n self.accuracy = self.add_weight(name=\"count\", initializer=\"zeros\")\n\n def update_state(self, y_true, y_pred, sample_weight=None):\n y_true = tf.cast(y_true, \"float32\")\n y_pred = tf.cast(y_pred, \"float32\")\n\n mask = y_true[:, :, 0] != 0\n argmax = tf.cast(tf.argmax(y_pred, axis=2), \"float32\")\n temp = argmax == y_true[:, :, 0]\n true = tf.reduce_sum(tf.cast(temp[mask], dtype=tf.float32))\n self.true.assign_add(true)\n self.count.assign_add(\n tf.cast(tf.shape(temp[mask])[0], dtype=\"float32\"))\n\n self.accuracy.assign(tf.math.divide(self.true, 
self.count))\n\n def result(self):\n return self.accuracy\n\n def reset_states(self):\n # The state of the metric will be reset at the start of each epoch.\n self.accuracy.assign(0.0)\n\n\nclass CategoricalSessionAccuracy(keras.metrics.Metric):\n def __init__(self, name=\"categorical_session_accuracy\", **kwargs):\n super(CategoricalSessionAccuracy, self).__init__(name=name, **kwargs)\n self.true = self.add_weight(name=\"true\", initializer=\"zeros\")\n self.count = self.add_weight(name=\"count\", initializer=\"zeros\")\n self.accuracy = self.add_weight(name=\"count\", initializer=\"zeros\")\n\n def update_state(self, y_true, y_pred, sample_weight=None):\n y_true = tf.cast(y_true, \"float32\")\n y_pred = tf.cast(y_pred, \"float32\")\n\n mask = y_true[:, :, 0] != 0\n argmax = tf.cast(tf.argmax(y_pred, axis=2), \"float32\")\n temp = argmax == y_true[:, :, 0]\n temp = tf.reduce_all(temp, axis=1)\n true = tf.reduce_sum(tf.cast(temp, dtype=tf.float32))\n self.true.assign_add(true)\n self.count.assign_add(tf.cast(tf.shape(temp)[0], dtype=\"float32\"))\n\n self.accuracy.assign(tf.math.divide(self.true, self.count))\n\n def result(self):\n return self.accuracy\n\n def reset_states(self):\n # The state of the metric will be reset at the start of each epoch.\n self.accuracy.assign(0.0)", "_____no_output_____" ] ], [ [ "# Training", "_____no_output_____" ] ], [ [ "with mlflow.start_run() as parent_run:\n for params in grid_search_param:\n batch_size = params['batch_size']\n window_length = params['window_length']\n embedding_dim = params['embedding_dim']\n dropout_fc = params['dropout_fc']\n hidden_layer_size = params['hidden_layer_size']\n n_output_features = params['n_output_features']\n n_input_features = params['n_input_features']\n\n with mlflow.start_run(nested=True) as child_run:\n # log parameter\n mlflow.log_param('batch_size', batch_size)\n mlflow.log_param('window_length', window_length)\n mlflow.log_param('hidden_layer_size', hidden_layer_size)\n mlflow.log_param('dropout_fc_layer', dropout_fc)\n mlflow.log_param('embedding_dim', embedding_dim)\n mlflow.log_param('n_output_features', n_output_features)\n mlflow.log_param('n_unique_input_ids', n_unique_input_ids)\n mlflow.log_param('n_input_features', n_input_features)\n\n model = build_autoencoder(window_length=window_length,\n n_output_features=n_output_features,\n n_unique_input_ids=n_unique_input_ids,\n n_input_features=n_input_features,\n embedding_dim=embedding_dim,\n units_lstm_layer=hidden_layer_size,\n dropout_rate=dropout_fc)\n\n data = array_to_tf_data_api(sessions_padded,\n sessions_padded,\n window_length=window_length,\n batch_size=batch_size)\n\n model.compile(loss=CustomCategoricalCrossentropy(),#tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False, reduction='sum'),\n optimizer=keras.optimizers.Nadam(learning_rate=1e-3),\n metrics=[CategoricalAccuracy(), CategoricalSessionAccuracy()])\n\n model.fit(data, shuffle=True, initial_epoch=0, epochs=20,\n callbacks=[MlflowLogging()])\n \n model.compile()\n model.save(\"./tmp\")\n model.save_weights('weights')\n\n mlflow.tensorflow.log_model(tf_saved_model_dir='./tmp',\n tf_meta_graph_tags='serve',\n tf_signature_def_key='serving_default',\n artifact_path='saved_model',\n registered_model_name='Session Based LSTM Recommender')\n\n shutil.rmtree(\"./tmp\")", "Epoch 1/20\n967/967 [==============================] - 95s 70ms/step - loss: 536.0073 - categorical_accuracy: 0.0037 - categorical_session_accuracy: 0.0000e+00\nEpoch 2/20\n967/967 [==============================] - 
65s 67ms/step - loss: 189.0837 - categorical_accuracy: 0.0097 - categorical_session_accuracy: 4.4663e-05\nEpoch 3/20\n967/967 [==============================] - 63s 66ms/step - loss: 174.3795 - categorical_accuracy: 0.0173 - categorical_session_accuracy: 3.9590e-04\nEpoch 4/20\n967/967 [==============================] - 63s 66ms/step - loss: 147.5806 - categorical_accuracy: 0.0341 - categorical_session_accuracy: 0.0023\nEpoch 5/20\n967/967 [==============================] - 63s 65ms/step - loss: 125.2717 - categorical_accuracy: 0.0621 - categorical_session_accuracy: 0.0089\nEpoch 6/20\n967/967 [==============================] - 63s 65ms/step - loss: 100.3309 - categorical_accuracy: 0.0977 - categorical_session_accuracy: 0.0234\nEpoch 7/20\n967/967 [==============================] - 62s 64ms/step - loss: 81.0303 - categorical_accuracy: 0.1379 - categorical_session_accuracy: 0.0440\nEpoch 8/20\n967/967 [==============================] - 62s 64ms/step - loss: 63.7936 - categorical_accuracy: 0.1798 - categorical_session_accuracy: 0.0692\nEpoch 9/20\n967/967 [==============================] - 61s 63ms/step - loss: 49.6684 - categorical_accuracy: 0.2223 - categorical_session_accuracy: 0.0991\nEpoch 10/20\n967/967 [==============================] - 62s 64ms/step - loss: 39.3160 - categorical_accuracy: 0.2642 - categorical_session_accuracy: 0.1321\nEpoch 11/20\n967/967 [==============================] - 63s 65ms/step - loss: 31.9699 - categorical_accuracy: 0.3041 - categorical_session_accuracy: 0.1679\nEpoch 12/20\n967/967 [==============================] - 62s 64ms/step - loss: 25.9715 - categorical_accuracy: 0.3419 - categorical_session_accuracy: 0.2051\nEpoch 13/20\n967/967 [==============================] - 63s 65ms/step - loss: 21.5118 - categorical_accuracy: 0.3767 - categorical_session_accuracy: 0.2418\nEpoch 14/20\n967/967 [==============================] - 63s 65ms/step - loss: 19.3681 - categorical_accuracy: 0.4084 - categorical_session_accuracy: 0.2759\nEpoch 15/20\n967/967 [==============================] - 60s 62ms/step - loss: 17.5992 - categorical_accuracy: 0.4371 - categorical_session_accuracy: 0.3075\nEpoch 16/20\n967/967 [==============================] - 59s 61ms/step - loss: 15.1798 - categorical_accuracy: 0.4632 - categorical_session_accuracy: 0.3369\nEpoch 17/20\n967/967 [==============================] - 59s 61ms/step - loss: 13.3361 - categorical_accuracy: 0.4872 - categorical_session_accuracy: 0.3641\nEpoch 18/20\n967/967 [==============================] - 59s 61ms/step - loss: 12.9639 - categorical_accuracy: 0.5090 - categorical_session_accuracy: 0.3889\nEpoch 19/20\n967/967 [==============================] - 59s 61ms/step - loss: 11.2216 - categorical_accuracy: 0.5290 - categorical_session_accuracy: 0.4118\nEpoch 20/20\n967/967 [==============================] - 59s 61ms/step - loss: 10.1313 - categorical_accuracy: 0.5476 - categorical_session_accuracy: 0.4333\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
d024d7301b88fc76d935836a20499b64fcd6a2c8
3,069
ipynb
Jupyter Notebook
midterm-commands.ipynb
jstevenr/Spring-2018
239bd404ea4e19f6fdd3c09036d175f21c70d7af
[ "Apache-2.0" ]
null
null
null
midterm-commands.ipynb
jstevenr/Spring-2018
239bd404ea4e19f6fdd3c09036d175f21c70d7af
[ "Apache-2.0" ]
null
null
null
midterm-commands.ipynb
jstevenr/Spring-2018
239bd404ea4e19f6fdd3c09036d175f21c70d7af
[ "Apache-2.0" ]
null
null
null
17.947368
63
0.424242
[ [ [ "import pandas as pd\nimport numpy as pd", "_____no_output_____" ], [ "data = pd.Series([0.25,0.5,0.75,1.0],index=[2,5,3,7])\ndata", "_____no_output_____" ], [ "df = pd.DataFrame([[1,2],[3,4],[5,6]],\n columns = ['foo','bar'],\n index = ['a','b','c'])\ndf", "_____no_output_____" ], [ "import numpy as np\nx = np.array([1,2,3,4,5])\nx <= 3", "_____no_output_____" ], [ "\nx = ['a', 'b']\ntype(x)", "_____no_output_____" ], [ "x = {4,1,2,3,3}\nprint(x)", "_____no_output_____" ], [ "df2 = df\n# df2.foo\n# df['foo']\n# df2.loc[:,'foo']\ndf2.iloc[:,'foo']\n", "_____no_output_____" ], [ "df.sum(0)", "_____no_output_____" ], [ "import numpy as np\ndf3 = pd.DataFrame([[1,2],[3,np.nan],[5,6]],\n columns = ['foo','bar'],\n index = ['a','b','c'])\ndf3\n\ndf3.bar.notnull()", "_____no_output_____" ], [ "pd.cut(df3, bins=3)", "_____no_output_____" ], [ "x = [1,2,3]\ny = [10,20,30]\n\n[i+j for i,j in zip(x,y)]\n# x+y\n# list(np.array(x)+np.array(y))", "_____no_output_____" ], [ "n = 7\nS = 10\nx = np.random.randint(size = (S,n), low = 1, high = 7)\nx", "_____no_output_____" ], [ "# x[0]\n# x.mean(axis = 0)\nx.mean(axis = 1)", "_____no_output_____" ], [ "x = [1,2,3,4,5]\ny = ['item1','item2','item3','item4','item5']\n['item' + str(one) for one in x]", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
d024f264c1b25b0a02df0f2234dbda314f21b018
7,339
ipynb
Jupyter Notebook
06_Linear_Regression_Boston_House_Prices.ipynb
alzaia/keras_projects
4e946b59b635b81300d55a8892175c34f186e011
[ "MIT" ]
1
2019-03-12T02:40:45.000Z
2019-03-12T02:40:45.000Z
06_Linear_Regression_Boston_House_Prices.ipynb
alzaia/keras_projects
4e946b59b635b81300d55a8892175c34f186e011
[ "MIT" ]
null
null
null
06_Linear_Regression_Boston_House_Prices.ipynb
alzaia/keras_projects
4e946b59b635b81300d55a8892175c34f186e011
[ "MIT" ]
null
null
null
25.306897
277
0.545715
[ [ [ "### Linear regression on Boston house prices", "_____no_output_____" ] ], [ [ "from keras import models\nfrom keras import layers\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "# Download the data\nfrom keras.datasets import boston_housing\n(train_data, train_targets), (test_data, test_targets) = boston_housing.load_data()", "_____no_output_____" ], [ "# Look at the dataset\n\nprint train_data.shape # 404 samples, 13 features\nprint train_targets.shape # 404 targets\n\nprint test_data.shape # 102 samples, 13 features\nprint test_targets.shape # 102 targets\n\nprint train_data[10] # Each datapoint is an array with 13 entries (features)\nprint train_targets[10] # Each target is the price of the house (in k$)\n", "(404, 13)\n(404,)\n(102, 13)\n(102,)\n[ 9.59571 0. 18.1 0. 0.693 6.404 100.\n 1.639 24. 666. 20.2 376.11 20.31 ]\n12.1\n" ], [ "# Perform feature wise normalization on the data\n\nmean = train_data.mean(axis=0)\nstd = train_data.std(axis=0)\n\ntrain_data = train_data - mean\ntrain_data = train_data / std\ntest_data = test_data - mean\ntest_data = test_data / std\n\nprint train_data[10]", "[ 0.63391647 -0.48361547 1.0283258 -0.25683275 1.15788777 0.19313958\n 1.11048828 -1.03628262 1.67588577 1.5652875 0.78447637 0.22689422\n 1.04466491]\n" ], [ "def build_model():\n model = models.Sequential()\n model.add(layers.Dense(64, activation='relu',input_shape=(train_data.shape[1],)))\n model.add(layers.Dense(64, activation='relu'))\n model.add(layers.Dense(1))\n model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])\n return model", "_____no_output_____" ], [ "# Use k-fold validation because the dataset is small\n\nk=4\nnum_val_samples = len(train_data) // k \nnum_epochs = 100\nall_scores = []\n\nfor i in range(k):\n print('processing fold #', i)\n val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples] \n val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]\n\n partial_train_data = np.concatenate( [train_data[:i * num_val_samples],train_data[(i + 1) * num_val_samples:]], axis=0)\n partial_train_targets = np.concatenate( [train_targets[:i * num_val_samples],train_targets[(i + 1) * num_val_samples:]], axis=0)\n\n model = build_model()\n model.fit(partial_train_data, partial_train_targets,epochs=num_epochs, batch_size=1, verbose=0)\n history = model.history()\n \n val_mse, val_mae = model.evaluate(val_data, val_targets, verbose=0)\n \n all_scores.append(val_mae)\n", "('processing fold #', 0)\nWARNING:tensorflow:From /Users/rudinakaprata/Documents/Aldo/ML_Stanford/venv_ML/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.\nWARNING:tensorflow:From /Users/rudinakaprata/Documents/Aldo/ML_Stanford/venv_ML/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.cast instead.\n('processing fold #', 1)\n('processing fold #', 2)\n('processing fold #', 3)\n" ], [ "print all_scores", "[2.061478056529961, 2.1541910690836388, 2.944315316653488, 2.414095924632384]\n" ], [ "# Train the final model\n\nmodel = build_model()\nmodel.fit(train_data, train_targets,epochs=80, batch_size=8, verbose=0)\ntest_mse_score, test_mae_score = model.evaluate(test_data, 
test_targets)", "102/102 [==============================] - 0s 682us/step\n" ], [ "print test_mae_score", "2.8874634387446383\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]