Dataset schema (column, type, min, max):

| column | type | min | max |
| --- | --- | --- | --- |
| hexsha | stringlengths | 40 | 40 |
| size | int64 | 6 | 14.9M |
| ext | stringclasses | 1 value | |
| lang | stringclasses | 1 value | |
| max_stars_repo_path | stringlengths | 6 | 260 |
| max_stars_repo_name | stringlengths | 6 | 119 |
| max_stars_repo_head_hexsha | stringlengths | 40 | 41 |
| max_stars_repo_licenses | sequence | | |
| max_stars_count | int64 | 1 | 191k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 | 24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 | 24 |
| max_issues_repo_path | stringlengths | 6 | 260 |
| max_issues_repo_name | stringlengths | 6 | 119 |
| max_issues_repo_head_hexsha | stringlengths | 40 | 41 |
| max_issues_repo_licenses | sequence | | |
| max_issues_count | int64 | 1 | 67k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 | 24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 | 24 |
| max_forks_repo_path | stringlengths | 6 | 260 |
| max_forks_repo_name | stringlengths | 6 | 119 |
| max_forks_repo_head_hexsha | stringlengths | 40 | 41 |
| max_forks_repo_licenses | sequence | | |
| max_forks_count | int64 | 1 | 105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 | 24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 | 24 |
| avg_line_length | float64 | 2 | 1.04M |
| max_line_length | int64 | 2 | 11.2M |
| alphanum_fraction | float64 | 0 | 1 |
| cells | sequence | | |
| cell_types | sequence | | |
| cell_type_groups | sequence | | |
Record 1:
hexsha: e7fd6be337ef66a84aec78959835291b5ea4a645 | size: 40,054 | ext: ipynb | lang: Jupyter Notebook
max_stars_repo_path: Bloque 3 - Machine Learning/02_No supervisado/PCA/01_Ejemplo_PCA_teoria.ipynb | max_stars_repo_name: franciscocanon-thebridge/bootcamp_thebridge_PTSep20 | max_stars_repo_head_hexsha: e81cbdcb9254fa46a3925f41c583748e25b459c0 | max_stars_repo_licenses: [ "MIT" ] | max_stars_count: null | max_stars_repo_stars_event_min_datetime: null | max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: Bloque 3 - Machine Learning/02_No supervisado/PCA/01_Ejemplo_PCA_teoria.ipynb | max_issues_repo_name: franciscocanon-thebridge/bootcamp_thebridge_PTSep20 | max_issues_repo_head_hexsha: e81cbdcb9254fa46a3925f41c583748e25b459c0 | max_issues_repo_licenses: [ "MIT" ] | max_issues_count: null | max_issues_repo_issues_event_min_datetime: null | max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: Bloque 3 - Machine Learning/02_No supervisado/PCA/01_Ejemplo_PCA_teoria.ipynb | max_forks_repo_name: franciscocanon-thebridge/bootcamp_thebridge_PTSep20 | max_forks_repo_head_hexsha: e81cbdcb9254fa46a3925f41c583748e25b459c0 | max_forks_repo_licenses: [ "MIT" ] | max_forks_count: 1 | max_forks_repo_forks_event_min_datetime: 2021-01-23T10:37:15.000Z | max_forks_repo_forks_event_max_datetime: 2021-01-23T10:37:15.000Z
avg_line_length: 52.495413 | max_line_length: 15,676 | alphanum_fraction: 0.683278
cells / cell_types / cell_type_groups:
[ [ [ "# Ejercicio Aplicando PCA: Principal Component Analysis\n\nEn este notebook vamos a ver un ejemplo sencillo sobre el uso del PCA. Para ello, utilizaremos un dataset con datos sobre diferentes individuos y un indicador de si está residiendo en una vivienda que ha comprado o lo está haciendo en una de alquiler.\n\nSe tratará de un modelo de clasificación, por lo que podremos utilizar uno de los algoritmos de clasificación vistos con anterioridad. Sin embargo, veremos que tenemos un número elevado de variables, que podremos reducirlo gracias al uso de técnicas de reducción de variables, como el PCA.", "_____no_output_____" ], [ "### Importamos librerías\n\nAl igual que hemos hecho anteriormente, empezaremos importando las librerías que vamos a utilizar a lo largo del notebook.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom sklearn.preprocessing import StandardScaler\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.neighbors import KNeighborsClassifier\n\n# Librería nueva para utilizar PCA:\nfrom sklearn.decomposition import PCA", "_____no_output_____" ] ], [ [ "### Cargamos datos de entrada\n\nLos datos de los individuos con un target que nos indique si está en una vivienda comprada o alquilada, son los siguientes:", "_____no_output_____" ] ], [ [ "dataframe = pd.read_csv(r\"comprar_alquilar.csv\")\n\ndataframe", "_____no_output_____" ] ], [ [ "Como podemos ver, son datos numéricos, por lo que no tendremos que realizar ningún tipo de conversión de categóricas.", "_____no_output_____" ], [ "### Visualicemos las dimensiones\n\nUno de los pasos principales que siempre decimos que es conveninete realizar, es el análisis de los datos. Para ello, vamos a analizar las distribuciones de los datos en base al target.\n\n### EJERCICIO\n\n1. Utiliza el dataframe que acabamos de importar para realizar la representación del histograma de cada una de las columnas en base al target, es decir, para cada columna, tendremos que ver superpuestos sus histogramas para los individuos \"alquilados\" frente a los \"comprados\":", "_____no_output_____" ], [ "Bien, ya tenemos una primera aproximación a los datos. Sin embargo, podemos seguir analizando los datos para extraer información útil a la hora de entenderlos.\n\n### Correlaciones de los datos\n\nOtro de los puntos interesantes a la hora de analizar los datos puede ser analizar las correlaciones, ya que nos pueden indicar variables similares que estén replicando información o aquellas más importantes en base al target.\n\n### EJERCICIO\n\n1. Representa la matriz de correlación del dataframe mediante un mapa de calor (o ``heatmap``), donde se indique el valor de esta relación.", "_____no_output_____" ], [ "Si analizamos los datos, podemos ver que existe una fuerte relación entre los ingresos y los ahorros, así como entre los propios ingresos y los gastos comunes, los gastos en vivienda o, incluso, el target.\n\n### EJERCICIO\n\nRealiza un gráfico de dispersión de las relaciones entre los **ingresos** y:\n1. Ahorros\n2. Gastos comunes\n3. Gatos de vivienda\n4. Target (comprar)\n\nHazlo todo en la misma figura, con 4 subgráficos.\n\nAdemás, resultaría interesante analizar otros gráficos que puede que no tengan una relación lineal, pero que a priori podrían estar relacionados, como:\n\n5. Otros gastos vs. 
Gastos comunes, donde el target se indique con diferentes colores (que los alquilados sean azules y los comprados rojos, por ejemplo)\n\nEste gráfico realízalo en una figura aparte.", "_____no_output_____" ], [ "## Normalización y estandarización de los datos\n\nDebido a la naturaleza del PCA, donde la magnitud de las variables gobernará la información que nos aporta cada variable, será muy importante mantener los datos en una misma escala. ¿Recuerdas cómo se hacía?\n\n### EJERCICIO\n\nNormaliza los datos, de forma que el resultado final tenga una media nula y desviación típica unidad. De este modo, reducimos las variables a unas dimensiones que pueden compararse entre sí.\n\nTras ello, divide los datos en train y test, y aplica un algoritmo KNN para clasificar los datos (con el k que mejor resultado ofrezca). Guarda los resultados en una variable para el futuro:", "_____no_output_____" ], [ "## Aplicamos PCA\n\nTras haber normalizado, podemos hacer uso del algoritmo compresor de variables, PCA, como se indica a continuación. Al aplicar el PCA no reducimos las variables automáticamente, sino que nos creamos nuevas variables que van explicando de más a menos varianza del dataset original. Es decir, la primera variable tras aplicar el PCA será la que mayor información del dataset nos explique, la segunda será la que maximice la información del resto de datos, y así sucesivamente hasta que lleguemos a las últimas, que deberían expresar una cantidad mínima de información, pues ya debería estar toda explicada.\n\nGracias a esto, el paso siguiente sería reducir las variables maximizando la información, lo cual podremos hacer eliminando las últimas variables.\n\nPara aplicar el PCA directamente, tenemos 2 opciones:\n1. Hacerlo matemáticamente, planteando la resolución de un determinante, como ya hicimos en el apartado Feature Engineering\n2. Aplicar un objeto de sklearn\n\nEn este caso, nos quedaremos con la segunda:", "_____no_output_____" ] ], [ [ "pca = PCA(len(X_cols))\n\npca.fit(X_train_scaled)\n\nX_train_scaled_pca = pca.transform(X_train_scaled)\nX_test_scaled_pca = pca.transform(X_test_scaled)\n\nprint(X_train_scaled.shape)\nprint(X_train_scaled_pca.shape)", "_____no_output_____" ] ], [ [ "### Varianza explicada\n\nGracias al objeto PCA, se calculan automáticamente ciertos parámetros:", "_____no_output_____" ] ], [ [ "# Varianza explicada (sobre 1):\npca.explained_variance_ratio_", "_____no_output_____" ], [ "# Valores singulares/autovalores: relacionados con la varianza explicada\npca.singular_values_", "_____no_output_____" ], [ "# Autovectores:\npca.components_", "_____no_output_____" ] ], [ [ "Pasemos a representar ahora esta medida. 
Para ello, vamos a recurrir a una estructura que vimos hace tiempo:", "_____no_output_____" ] ], [ [ "# A partir de los autovalores, calculamos la varianza explicada\nvar_exp = pca.explained_variance_ratio_*100\ncum_var_exp = np.cumsum(pca.explained_variance_ratio_*100)\n\n# Representamos en un diagrama de barras la varianza explicada por cada autovalor, y la acumulada\nplt.figure(figsize=(6, 4))\nplt.bar(range(len(pca.explained_variance_ratio_)), var_exp, alpha=0.5, align='center', label='Varianza individual explicada', color='g')\nplt.step(range(len(pca.explained_variance_ratio_)), cum_var_exp, where='mid', linestyle='--', label='Varianza explicada acumulada')\nplt.ylabel('Ratio de Varianza Explicada')\nplt.xlabel('Componentes Principales')\nplt.legend()", "_____no_output_____" ], [ "# Si queremos obtener cuántas variables necesitamos para cumplir con cierta varianza:\numbral_varianza_min = 90\n\ncum_var_exp = np.cumsum(pca.explained_variance_ratio_*100)\nn_var_90 = len(cum_var_exp[cum_var_exp<umbral_varianza_min])\nn_var_90", "_____no_output_____" ] ], [ [ "### EJERCICIO\n\nAhora que tenemos las componentes principales, calcula la correlación entre las nuevas variables entre sí. ¿Tiene sentido lo que sale?", "_____no_output_____" ], [ "### Predicción basada en PCA\n\nAhora que tenemos calculadas las nuevas varaibles, vamos a proceder a utilizar el algoritmo que habíamos pensado. Lo único que cambiaremos son las varaibles que vamos a utilizar, que ahora serán un subconjunto de las que hemos obtenido con la conversión PCA. Por seguir un poco con lo visto anteriormente, vamos a quedarnos con las variables que hemos visto que nos reducen los datos manteniendo un 90% de su información.\n\nTenemos 2 opciones:\n1. Seleccionar las n primeras variables de lo que nos devuelve el PCA\n2. Invocar el PCA con el valor n de las varaibles que queremos", "_____no_output_____" ] ], [ [ "# 1. Seleccionar las n primeras variables de lo que nos devuelve el PCA:\nX_ejercicio_train = X_train_scaled_pca[:, :n_var_90]\nX_ejercicio_test = X_test_scaled_pca[:, :n_var_90]", "_____no_output_____" ], [ "# 2. Invocar el PCA con el valor n de las varaibles que queremos\npca_b = PCA(n_var_90)\nX_ejercicio_train_b = pca_b.fit_transform(X_train_scaled)\nX_ejercicio_test_b = pca_b.transform(X_test_scaled)", "_____no_output_____" ], [ "# Comprobamos que son lo mismo:\n(np.round(X_ejercicio_train, 4) == np.round(X_ejercicio_train_b, 4)).all()", "_____no_output_____" ] ], [ [ "### EJERCICIO\n\nItera para obtener el mejor k del algoritmo utilizando las variables obtenidas con el PCA, y compáralo con el mejor de los anteriores:", "_____no_output_____" ], [ "¿Y si utilizamos solo 1 variable?", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ] ]
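Each record carries the notebook body in three parallel fields: cells (which appears to hold groups of [source, output] pairs, as in the record above), cell_types (one type per group), and cell_type_groups (one type per cell within each group). Below is a minimal sketch for turning an already-parsed record back into an .ipynb; the field layout it assumes is inferred from this dump rather than documented, and the `record` variable in the usage comment is hypothetical.

```python
# Hedged sketch: rebuild an .ipynb from one parsed record of this dump.
# Assumed layout (inferred, not documented): record["cells"] is a list of
# groups, each group a list of [source, output] cells, and
# record["cell_type_groups"] gives the type of every cell in each group.
import nbformat
from nbformat.v4 import new_code_cell, new_markdown_cell, new_notebook, new_raw_cell

BUILDERS = {"code": new_code_cell, "markdown": new_markdown_cell, "raw": new_raw_cell}


def record_to_notebook(record: dict) -> nbformat.NotebookNode:
    """Flatten the grouped cells of a record into a v4 notebook object."""
    nb = new_notebook()
    for group, group_types in zip(record["cells"], record["cell_type_groups"]):
        for cell, cell_type in zip(group, group_types):
            # Each cell is assumed to be [source, output]; keep only the source.
            source = cell[0] if isinstance(cell, (list, tuple)) else cell
            nb.cells.append(BUILDERS.get(cell_type, new_raw_cell)(source))
    return nb


# Usage with a hypothetical already-parsed `record` dict:
# nbformat.write(record_to_notebook(record), "reconstructed.ipynb")
```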
Record 2:
hexsha: e7fd70c980628ddecd51cbbfef81a490c1f26b4a | size: 31,936 | ext: ipynb | lang: Jupyter Notebook
max_stars_repo_path: A failed attempt in Data Cleaning for the Asheville Dataset/Clustering - Asheville.ipynb | max_stars_repo_name: shilpiBose29/CIS519-Project | max_stars_repo_head_hexsha: e2ce2f3c7e6e313d262d049721c8314ef5595dfa | max_stars_repo_licenses: [ "OML" ] | max_stars_count: null | max_stars_repo_stars_event_min_datetime: null | max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: A failed attempt in Data Cleaning for the Asheville Dataset/Clustering - Asheville.ipynb | max_issues_repo_name: shilpiBose29/CIS519-Project | max_issues_repo_head_hexsha: e2ce2f3c7e6e313d262d049721c8314ef5595dfa | max_issues_repo_licenses: [ "OML" ] | max_issues_count: null | max_issues_repo_issues_event_min_datetime: null | max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: A failed attempt in Data Cleaning for the Asheville Dataset/Clustering - Asheville.ipynb | max_forks_repo_name: shilpiBose29/CIS519-Project | max_forks_repo_head_hexsha: e2ce2f3c7e6e313d262d049721c8314ef5595dfa | max_forks_repo_licenses: [ "OML" ] | max_forks_count: null | max_forks_repo_forks_event_min_datetime: null | max_forks_repo_forks_event_max_datetime: null
avg_line_length: 93.380117 | max_line_length: 12,044 | alphanum_fraction: 0.832008
cells / cell_types / cell_type_groups:
[ [ [ "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd", "_____no_output_____" ], [ "df = pd.read_csv('datasets/Asheville/Asheville-processed.csv')", "_____no_output_____" ], [ "amn_cols = [col for col in df.columns if col.startswith('AMN_')]", "_____no_output_____" ], [ "amn_df = df.loc[:, amn_cols].sort_values(amn_cols)\namns_in_freq_order = amn_df.sum().sort_values(ascending = False).index\namn_df = amn_df.reindex_axis(amns_in_freq_order, axis=1)\n\namns_not_too_common = amn_df.columns[amn_df.sum()<n*0.9]\ndata_df = amn_df.loc[:, amns_not_too_common].T\nprint(data_df.shape)\nprint([int(i/2) for i in data_df.shape])", "(39, 696)\n[19, 348]\n" ], [ "fig, ax = plt.subplots(figsize=[int(i/2) for i in data_df.shape])\n#plt.yticks(data_df.index, data_df, fontsize='small')\nax.imshow(data_df, cmap=plt.cm.gray_r, interpolation='none')\nplt.show()", "_____no_output_____" ], [ "#from sklearn.neighbors import NearestNeighbors\n#nbrs = NearestNeighbors(n_neighbors=2, algorithm='ball_tree').fit(X)", "_____no_output_____" ] ], [ [ "# what I want to do manually:\n\namn_groups = {\n 'AMN_group_\"pets friendly\"': [\n 'AMN_cat(s)',\n 'AMN_dog(s)',\n 'AMN_\"other pet(s)\"',\n 'AMN_\"pets allowed\"',\n 'AMN_\"pets live on this property\"'],\n 'AMN_group_\"safety measures\"': [\n 'AMN_\"lock on bedroom door\"',\n 'AMN_\"safety card\"'],\n 'AMN_group_\"winter friendly\"': [\n 'AMN_\"hot tub\"',\n 'AMN_\"indoor fireplace\"',\n 'AMN_heating']}\n\namn_grouped_df = amn_df.copy()\n\nfor group_name, group_members in amn_groups.items():\n amn_grouped_df.loc[:, group_name] = amn_df.loc[:, group_members].sum(axis = 1)\n amn_grouped_df.drop(group_members, axis=1, inplace=True)\n \namn_grouped_df.T", "_____no_output_____" ] ], [ [ "from sklearn.decomposition import PCA\npca = PCA(n_components=3)\n\nfrom sklearn.preprocessing import Normalizer\nnml = Normalizer()\n\namn_pca = pca.fit_transform( nml.fit_transform( amn_df ) )\n\namn_pca_df = pd.DataFrame(amn_pca)\nprint(amn_pca_df.shape)\namn_pca_df.head()", "(696, 3)\n" ], [ "amn_pca_df.to_csv('datasets/Asheville/amn_pca.csv', index = False, header=False)", "_____no_output_____" ], [ "amn_df.to_csv('datasets/Asheville/amn.csv', index = False, header=True)", "_____no_output_____" ] ], [ [ "PCA", "_____no_output_____" ] ], [ [ "from sklearn.decomposition import PCA\nfrom sklearn.preprocessing import scale", "_____no_output_____" ], [ "amns = amn_df.as_matrix()", "_____no_output_____" ], [ "print(\"Scaling the values...\")\namns_scaled = scale(amns)\n\nprint(\"Fit PCA...\")\npca = PCA(n_components='mle')\npca.fit(amns_scaled)\n\nprint(\"Cumulative Variance explains...\")\nvar1 = np.cumsum(pca.explained_variance_ratio_*100) #The amount of variance that each PC explains\n\nprint(\"Plotting...\")\nplt.plot(var1)\nplt.show()", "Scaling the values...\nFit PCA...\nCumulative Variance explains...\nPlotting...\n" ] ] ]
[ "code", "raw", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code" ], [ "raw" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
Record 3:
hexsha: e7fd71f17afa253100724a89968aece3f4e9f6e9 | size: 139,699 | ext: ipynb | lang: Jupyter Notebook
max_stars_repo_path: hackathons/lunar_image_classification/02_image_augmentation.ipynb | max_stars_repo_name: amitbcp/machine_learning_with_Scikit_Learn_and_TensorFlow | max_stars_repo_head_hexsha: 37dda063e316503d53ac45f3b104a5cf1aaa4d78 | max_stars_repo_licenses: [ "MIT" ] | max_stars_count: 11 | max_stars_repo_stars_event_min_datetime: 2019-12-19T08:55:52.000Z | max_stars_repo_stars_event_max_datetime: 2021-10-01T13:07:13.000Z
max_issues_repo_path: hackathons/lunar_image_classification/02_image_augmentation.ipynb | max_issues_repo_name: amitbcp/sckit-learn-examples | max_issues_repo_head_hexsha: 0c26f9178a0cf96ff79faf3b9b250dd5b8f6c49a | max_issues_repo_licenses: [ "MIT" ] | max_issues_count: 5 | max_issues_repo_issues_event_min_datetime: 2019-10-09T01:41:19.000Z | max_issues_repo_issues_event_max_datetime: 2022-02-10T00:19:01.000Z
max_forks_repo_path: hackathons/lunar_image_classification/02_image_augmentation.ipynb | max_forks_repo_name: amitbcp/sckit-learn-examples | max_forks_repo_head_hexsha: 0c26f9178a0cf96ff79faf3b9b250dd5b8f6c49a | max_forks_repo_licenses: [ "MIT" ] | max_forks_count: 7 | max_forks_repo_forks_event_min_datetime: 2019-10-08T06:10:14.000Z | max_forks_repo_forks_event_max_datetime: 2020-12-01T07:49:21.000Z
avg_line_length: 101.157857 | max_line_length: 54,406 | alphanum_fraction: 0.809469
cells / cell_types / cell_type_groups:
[ [ [ "# Lunar Rock Classfication\n\nUsing iamge Augmentation techniques", "_____no_output_____" ] ], [ [ "!pip install -U tensorflow-gpu", "Collecting tensorflow-gpu\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/25/44/47f0722aea081697143fbcf5d2aa60d1aee4aaacb5869aee2b568974777b/tensorflow_gpu-2.0.0-cp36-cp36m-manylinux2010_x86_64.whl (380.8MB)\n\u001b[K |████████████████████████████████| 380.8MB 74.6MB/s \n\u001b[?25hRequirement already satisfied, skipping upgrade: keras-applications>=1.0.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu) (1.0.8)\nRequirement already satisfied, skipping upgrade: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu) (3.7.1)\nRequirement already satisfied, skipping upgrade: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu) (1.11.2)\nRequirement already satisfied, skipping upgrade: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu) (0.8.0)\nCollecting tensorboard<2.1.0,>=2.0.0 (from tensorflow-gpu)\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/9b/a6/e8ffa4e2ddb216449d34cfcb825ebb38206bee5c4553d69e7bc8bc2c5d64/tensorboard-2.0.0-py3-none-any.whl (3.8MB)\n\u001b[K |████████████████████████████████| 3.8MB 27.2MB/s \n\u001b[?25hRequirement already satisfied, skipping upgrade: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu) (1.1.0)\nRequirement already satisfied, skipping upgrade: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu) (1.12.0)\nRequirement already satisfied, skipping upgrade: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu) (1.1.0)\nRequirement already satisfied, skipping upgrade: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu) (1.15.0)\nRequirement already satisfied, skipping upgrade: numpy<2.0,>=1.16.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu) (1.16.5)\nCollecting tensorflow-estimator<2.1.0,>=2.0.0 (from tensorflow-gpu)\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/95/00/5e6cdf86190a70d7382d320b2b04e4ff0f8191a37d90a422a2f8ff0705bb/tensorflow_estimator-2.0.0-py2.py3-none-any.whl (449kB)\n\u001b[K |████████████████████████████████| 450kB 44.4MB/s \n\u001b[?25hRequirement already satisfied, skipping upgrade: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu) (0.8.0)\nRequirement already satisfied, skipping upgrade: gast==0.2.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu) (0.2.2)\nRequirement already satisfied, skipping upgrade: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu) (0.33.6)\nRequirement already satisfied, skipping upgrade: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu) (3.1.0)\nRequirement already satisfied, skipping upgrade: google-pasta>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow-gpu) (0.1.7)\nRequirement already satisfied, skipping upgrade: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.8->tensorflow-gpu) (2.8.0)\nRequirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.6.1->tensorflow-gpu) (41.2.0)\nRequirement already satisfied, skipping upgrade: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu) (3.1.1)\nRequirement already satisfied, skipping upgrade: 
werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<2.1.0,>=2.0.0->tensorflow-gpu) (0.16.0)\n\u001b[31mERROR: tensorflow 1.15.0rc3 has requirement tensorboard<1.16.0,>=1.15.0, but you'll have tensorboard 2.0.0 which is incompatible.\u001b[0m\n\u001b[31mERROR: tensorflow 1.15.0rc3 has requirement tensorflow-estimator==1.15.1, but you'll have tensorflow-estimator 2.0.0 which is incompatible.\u001b[0m\nInstalling collected packages: tensorboard, tensorflow-estimator, tensorflow-gpu\n Found existing installation: tensorboard 1.15.0\n Uninstalling tensorboard-1.15.0:\n Successfully uninstalled tensorboard-1.15.0\n Found existing installation: tensorflow-estimator 1.15.1\n Uninstalling tensorflow-estimator-1.15.1:\n Successfully uninstalled tensorflow-estimator-1.15.1\nSuccessfully installed tensorboard-2.0.0 tensorflow-estimator-2.0.0 tensorflow-gpu-2.0.0\n" ], [ "from google.colab import drive\ndrive.mount('/content/drive',force_remount=True)", "Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code\n\nEnter your authorization code:\n··········\nMounted at /content/drive\n" ], [ "import tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\nimport pickle\nimport os\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "# Global Flags to control Data & Training/Valdiation\ndownload = False\nvalidation = False", "_____no_output_____" ], [ "# downloads and extracts, For Local/G-Drive\nbase_url = '/content/drive/My Drive/personal_hackathons/DataSet/'\n\n\nif download :\n _URL = 'http://hck.re/kkBIfM'\n path_to_zip = tf.keras.utils.get_file(base_url +'lunar_rock.zip' , origin=_URL, extract=True)\n PATH = os.path.join(os.path.dirname(path_to_zip), 'lunar_rock')\n print(\"Paths to the ZIP File : {}\".format(path_to_zip))\n\nelse :\n PATH='/content/drive/My Drive/personal_hackathons/DataSet/lunar_rock/'", "_____no_output_____" ], [ "print(\"Paths to the Data File : {}\".format(PATH))", "Paths to the Data File : /content/drive/My Drive/personal_hackathons/DataSet/lunar_rock/\n" ], [ "# Unzip Downloaded data if Downloading is required\n\nif download :\n\n os.chdir(PATH) #change dir\n !mkdir train #create a directory named train/\n !mkdir test #create a directory named test/\n !unzip train.zip -d PATH #unzip data in train/\n !unzip test.zip -d PATH #unzip data in test/\n !unzip sample_submission.csv.zip\n !unzip train_labels.csv.zip", "_____no_output_____" ], [ "train_dir = os.path.join(PATH, 'train')\ntrain_lg_dir = os.path.join(train_dir, 'Large') # directory with our training Large Lunar rock pictures\ntrain_sm_dir = os.path.join(train_dir, 'Small') # directory with our training Small Lunar rock pictures\n\nprint(\"Paths Train : {} \".format(train_dir))\nprint(\"Paths Train Large : {} \".format(train_lg_dir))\nprint(\"Paths Train Small: {} \".format(train_sm_dir))\n\nif validation : \n validation_dir = os.path.join(PATH, 
'validation')\n validation_lg_dir = os.path.join(validation_dir, 'Large') # directory with our Large Lunar rock pictures\n validation_sm_dir = os.path.join(validation_dir, 'Small') # directory with our Small Lunar rock pictures", "Paths Train : /content/drive/My Drive/personal_hackathons/DataSet/lunar_rock/train \nPaths Train Large : /content/drive/My Drive/personal_hackathons/DataSet/lunar_rock/train/Large \nPaths Trai Small: /content/drive/My Drive/personal_hackathons/DataSet/lunar_rock/train/Small \n" ], [ "num_lg_tr = len(os.listdir(train_lg_dir))\nnum_sm_tr = len(os.listdir(train_sm_dir))\ntotal_train = num_lg_tr + num_sm_tr\n\nif validation :\n \n num_lg_val = len(os.listdir(validation_lg_dir))\n num_sm_val = len(os.listdir(validation_sm_dir))\n total_val = num_cats_val + num_dogs_val", "_____no_output_____" ], [ "print('total training Large images:', num_lg_tr)\nprint('total training Small images:', num_sm_tr)\nprint(\"Total training images:\", total_train)\n\nprint(\"--\")\n\nif validation :\n print('total validation Large images:', num_lg_val)\n print('total validation Small images:', num_sm_val)\n print(\"Total validation images:\", total_val)s", "total training Large images: 5999\ntotal training Small images: 5999\nTotal training images: 11998\n--\n" ], [ "batch_size = 128\nEPOCHS = 24\nIMG_HEIGHT = 480 # As the input image is 480 X 720\nIMG_WIDTH = 480 # As the input image is 480 X 720", "_____no_output_____" ], [ "# Evaluate baseline Model\ndef evaluation(model,generator,data = \"Training\"):\n print(\"--------------Evaluating {} Dataset--------------\".format(data))\n results = model.evaluate_generator(generator=generator,verbose=1)\n precision=0\n recall=0\n for name, value in zip(model.metrics_names, results):\n print(name, ': ', value)\n\n if name.strip() == 'precision':\n precision = value\n\n if name.strip() == 'recall':\n recall = value\n\n if precision !=0 and recall!=0 :\n f1 = (2 * precision * recall)/(precision+recall)\n print(\"f1 : \",f1)\n\n\ndef plot_metrices(EPOCHS,history,if_val=True):\n \n epochs = range(EPOCHS)\n\n plt.title('Accuracy')\n plt.plot(epochs, history.history['accuracy'], color='blue', label='Train')\n if if_val:\n plt.plot(epochs, history.history['val_accuracy'], color='orange', label='Val')\n plt.xlabel('Epoch')\n plt.ylabel('Accuracy')\n plt.legend()\n\n _ = plt.figure()\n plt.title('Loss')\n plt.plot(epochs, history.history['loss'], color='blue', label='Train')\n if if_val:\n plt.plot(epochs, history.history['val_loss'], color='orange', label='Val')\n plt.xlabel('Epoch')\n plt.ylabel('Loss')\n plt.legend()\n\n _ = plt.figure()\n plt.title('False Negatives')\n plt.plot(epochs, history.history['fn'], color='blue', label='Train')\n if if_val:\n plt.plot(epochs, history.history['val_fn'], color='orange', label='Val')\n plt.xlabel('Epoch')\n plt.ylabel('False Negatives')\n plt.legend()\n\ndef plot_confusion_matrix(predict,generator,threshold):\n # Confusion Matrix\n\n \n labels = generator.classes\n labels_pred = (predict[:,0] > threshold).astype(np.int)\n\n cm = confusion_matrix(labels,labels_pred)\n\n plt.matshow(cm, alpha=0)\n plt.title('Confusion matrix')\n plt.ylabel('Actual label')\n plt.xlabel('Predicted label')\n\n for (i, j), z in np.ndenumerate(cm):\n plt.text(j, i, str(z), ha='center', va='center')\n\n plt.show()\n\n print('Legitimate Customers Detected (True Negatives): ', cm[0][0])\n print('Legitimate Customers Incorrectly Detected (False Positives): ', cm[0][1])\n print('Loan Deafulters Missed (False Negatives): ', cm[1][0])\n 
print('Loan Deafulters Detected (True Positives): ', cm[1][1])\n print('Total Loan Deafulters Customers: ', np.sum(cm[1]))\n\ndef submission_categorical(model,submission_csv = '/content/drive/My Drive/personal_hackathons/DataSet/lunar_rock/results_01.pickle'):\n\n # Instantiate Generator\n test_datagen = ImageDataGenerator(rescale=1./255)\n test_dir = '/content/drive/My Drive/personal_hackathons/DataSet/lunar_rock/PATH/'\n\n test_generator = test_datagen.flow_from_directory(\n test_dir,\n target_size=(IMG_HEIGHT, IMG_WIDTH),\n # color_mode=\"rgb\",\n shuffle = False,\n class_mode='binary',\n batch_size=batch_size)\n \n # Check test Files\n filenames = test_generator.filenames\n nb_samples = len(filenames)\n print(\"Test Size : {}\".format(nb_samples))\n # print(filenames)\n\n # Model Prediction\n test_generator.reset()\n predict = model.predict_generator(test_generator,verbose=1)\n print(\"Model Prediction Shape {}\".format(predict.shape))\n\n labels = train_data_gen.class_indices\n predicted_class_indices=np.argmax(predict,axis=1)\n\n \n print(\"Labels : {}\".format(labels) )\n print(\"Class Indices {}\".format(predicted_class_indices))\n\n labels = dict((v,k) for k,v in labels.items())\n predictions = [labels[k] for k in predicted_class_indices]\n\n\n results=pd.DataFrame({\"Image_File\":filenames,\n \"Class\":predictions})\n print(\"Distribution : {} \".format(results['Class'].value_counts()))\n\n # Write Sumission\n with open(submission_name,'wb') as f :\n pickle.dump(results,f)\n\n return results\n\ndef submission_binary(model,threshold = 0.4,\n submission_name='lunar01_m1.pickle'):\n \n # Instantiate Generator\n test_datagen = ImageDataGenerator(rescale=1./255)\n test_dir = '/content/drive/My Drive/personal_hackathons/DataSet/lunar_rock/PATH/'\n\n test_generator = test_datagen.flow_from_directory(\n test_dir,\n target_size=(IMG_HEIGHT, IMG_WIDTH),\n # color_mode=\"rgb\",\n shuffle = False,\n class_mode='binary',\n batch_size=batch_size)\n \n # Check test Files\n filenames = test_generator.filenames\n nb_samples = len(filenames)\n print(\"Test Size : {}\".format(nb_samples))\n # print(filenames)\n\n # Model Prediction\n test_generator.reset()\n predict = model.predict_generator(test_generator,verbose=1)\n print(\"Model Prediction Shape {}\".format(predict.shape))\n\n labels = train_data_gen.class_indices\n print(\"Labels : {}\".format(labels) )\n\n # Predicting Classes based on Threshold\n predict_class = predict > threshold\n predict_class = predict_class.reshape(1,-1)\n predict_class = predict_class[0]\n\n results=pd.DataFrame({\"Image_File\":filenames,\n \"Class\":predict_class})\n \n results['Image_File'] = results['Image_File'].apply(lambda x : x[12:])\n results['Class'] = results['Class'].map({True: 'Small', False: \"Large\"})\n \n print(\"Distribution : {} \".format(results['Class'].value_counts()))\n\n # Write Sumission\n with open(submission_name,'wb') as f :\n pickle.dump(results,f)\n\n return results", "_____no_output_____" ], [ "# When using whole dataset for training\n\ntrain_image_generator = ImageDataGenerator(validation_split=0.2,\n rotation_range=45, \n horizontal_flip=True,\n zoom_range=0.5,\n rescale=1./255) # Generator for our training data\n\ntrain_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,\n directory=train_dir,\n shuffle=True,\n target_size=(IMG_HEIGHT, IMG_WIDTH),\n class_mode='binary',\n )\n\n# While Splitting into Train & Validation\n\n# train_image_generator = ImageDataGenerator(validation_split=0.2,\n# rotation_range=45, \n# 
horizontal_flip=True,\n# zoom_range=0.5,\n# rescale=1./255) # Generator for our training data\n\n# train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,\n# directory=train_dir,\n# shuffle=True, \n# target_size=(IMG_HEIGHT, IMG_WIDTH),\n# class_mode='binary',\n# subset='training')\n\n# val_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,\n# directory=train_dir,\n# shuffle=False,\n# target_size=(IMG_HEIGHT, IMG_WIDTH),\n# class_mode='binary',\n# subset='validation')", "Found 11998 images belonging to 2 classes.\n" ], [ "# Only to be used When we have a different Validation Set \n\nif validation :\n validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data\n val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,\n directory=validation_dir,\n target_size=(IMG_HEIGHT, IMG_WIDTH),\n class_mode='binary')", "_____no_output_____" ], [ "sample_training_images, sample_training_labels = next(train_data_gen)\nsample_training_labels[:5]", "_____no_output_____" ], [ "# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.\ndef plotImages(images_arr):\n fig, axes = plt.subplots(1, 5, figsize=(20,20))\n axes = axes.flatten()\n for img, ax in zip( images_arr, axes):\n ax.imshow(img)\n ax.axis('off')\n plt.tight_layout()\n plt.show()\n\nplotImages(sample_training_images[:5])", "_____no_output_____" ], [ "def model_metrics():\n metrics = [\n keras.metrics.Accuracy(name='accuracy'),\n keras.metrics.TruePositives(name='tp'),\n keras.metrics.FalsePositives(name='fp'),\n keras.metrics.TrueNegatives(name='tn'),\n keras.metrics.FalseNegatives(name='fn'),\n keras.metrics.Precision(name='precision'),\n keras.metrics.Recall(name='recall'),\n keras.metrics.AUC(name='auc')\n ]\n\n return metrics\n\nmetrics = model_metrics()", "_____no_output_____" ], [ "def make_model1(metrics=metrics):\n\n model = Sequential([\n Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),\n MaxPooling2D(),\n Conv2D(32, 3, padding='same', activation='relu'),\n MaxPooling2D(),\n Conv2D(64, 3, padding='same', activation='relu'),\n MaxPooling2D(),\n Flatten(),\n Dense(512, activation='relu'),\n Dense(1, activation='sigmoid')\n ])\n\n model.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=metrics)\n\n model.summary()\n\n return model", "_____no_output_____" ], [ "def make_model2(metrics=metrics):\n model = Sequential([\n Conv2D(16, 3, padding='same', activation='relu', \n input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),\n MaxPooling2D(),\n Dropout(0.2),\n Conv2D(32, 3, padding='same', activation='relu'),\n MaxPooling2D(),\n Conv2D(64, 3, padding='same', activation='relu'),\n MaxPooling2D(),\n Dropout(0.2),\n Flatten(),\n Dense(512, activation='relu'),\n Dense(1, activation='sigmoid')\n])\n model.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=metrics)\n\n model.summary()\n\n return model\n", "_____no_output_____" ], [ "model = make_model2()\nhistory = model.fit_generator(\n train_data_gen,\n # steps_per_epoch=total_train // batch_size,\n epochs=EPOCHS,\n # validation_data=val_data_gen,\n # validation_steps=total_val // batch_size\n)", "Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d_3 (Conv2D) (None, 480, 480, 16) 448 
\n_________________________________________________________________\nmax_pooling2d_3 (MaxPooling2 (None, 240, 240, 16) 0 \n_________________________________________________________________\ndropout_2 (Dropout) (None, 240, 240, 16) 0 \n_________________________________________________________________\nconv2d_4 (Conv2D) (None, 240, 240, 32) 4640 \n_________________________________________________________________\nmax_pooling2d_4 (MaxPooling2 (None, 120, 120, 32) 0 \n_________________________________________________________________\nconv2d_5 (Conv2D) (None, 120, 120, 64) 18496 \n_________________________________________________________________\nmax_pooling2d_5 (MaxPooling2 (None, 60, 60, 64) 0 \n_________________________________________________________________\ndropout_3 (Dropout) (None, 60, 60, 64) 0 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 230400) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 512) 117965312 \n_________________________________________________________________\ndense_3 (Dense) (None, 1) 513 \n=================================================================\nTotal params: 117,989,409\nTrainable params: 117,989,409\nNon-trainable params: 0\n_________________________________________________________________\nEpoch 1/24\n94/94 [==============================] - 3818s 41s/step - loss: 0.4410 - accuracy: 0.0681 - tp: 5086.0000 - fp: 599.0000 - tn: 5400.0000 - fn: 913.0000 - precision: 0.8946 - recall: 0.8478 - auc: 0.9321\nEpoch 2/24\n94/94 [==============================] - 197s 2s/step - loss: 0.0194 - accuracy: 0.3834 - tp: 5926.0000 - fp: 6.0000 - tn: 5993.0000 - fn: 73.0000 - precision: 0.9990 - recall: 0.9878 - auc: 0.9998\nEpoch 3/24\n94/94 [==============================] - 197s 2s/step - loss: 0.0074 - accuracy: 0.5032 - tp: 5983.0000 - fp: 6.0000 - tn: 5993.0000 - fn: 16.0000 - precision: 0.9990 - recall: 0.9973 - auc: 0.9999\nEpoch 4/24\n94/94 [==============================] - 197s 2s/step - loss: 0.0063 - accuracy: 0.4424 - tp: 5979.0000 - fp: 1.0000 - tn: 5998.0000 - fn: 20.0000 - precision: 0.9998 - recall: 0.9967 - auc: 1.0000\nEpoch 5/24\n94/94 [==============================] - 197s 2s/step - loss: 0.0019 - accuracy: 0.5015 - tp: 5994.0000 - fp: 0.0000e+00 - tn: 5999.0000 - fn: 5.0000 - precision: 1.0000 - recall: 0.9992 - auc: 1.0000\nEpoch 6/24\n94/94 [==============================] - 197s 2s/step - loss: 0.0015 - accuracy: 0.5528 - tp: 5993.0000 - fp: 1.0000 - tn: 5998.0000 - fn: 6.0000 - precision: 0.9998 - recall: 0.9990 - auc: 1.0000\nEpoch 7/24\n94/94 [==============================] - 196s 2s/step - loss: 0.0012 - accuracy: 0.6015 - tp: 5997.0000 - fp: 1.0000 - tn: 5998.0000 - fn: 2.0000 - precision: 0.9998 - recall: 0.9997 - auc: 1.0000\nEpoch 8/24\n94/94 [==============================] - 197s 2s/step - loss: 0.0011 - accuracy: 0.6319 - tp: 5996.0000 - fp: 2.0000 - tn: 5997.0000 - fn: 3.0000 - precision: 0.9997 - recall: 0.9995 - auc: 1.0000\nEpoch 9/24\n94/94 [==============================] - 197s 2s/step - loss: 9.8835e-04 - accuracy: 0.6174 - tp: 5997.0000 - fp: 1.0000 - tn: 5998.0000 - fn: 2.0000 - precision: 0.9998 - recall: 0.9997 - auc: 1.0000\nEpoch 10/24\n94/94 [==============================] - 199s 2s/step - loss: 8.4462e-04 - accuracy: 0.6696 - tp: 5997.0000 - fp: 2.0000 - tn: 5997.0000 - fn: 2.0000 - precision: 0.9997 - recall: 0.9997 - auc: 1.0000\nEpoch 11/24\n94/94 [==============================] - 200s 2s/step - loss: 5.5589e-04 - 
accuracy: 0.6658 - tp: 5997.0000 - fp: 0.0000e+00 - tn: 5999.0000 - fn: 2.0000 - precision: 1.0000 - recall: 0.9997 - auc: 1.0000\nEpoch 12/24\n94/94 [==============================] - 201s 2s/step - loss: 6.2125e-04 - accuracy: 0.6938 - tp: 5997.0000 - fp: 3.0000 - tn: 5996.0000 - fn: 2.0000 - precision: 0.9995 - recall: 0.9997 - auc: 1.0000\nEpoch 13/24\n94/94 [==============================] - 198s 2s/step - loss: 5.0674e-04 - accuracy: 0.6952 - tp: 5997.0000 - fp: 0.0000e+00 - tn: 5999.0000 - fn: 2.0000 - precision: 1.0000 - recall: 0.9997 - auc: 1.0000\nEpoch 14/24\n94/94 [==============================] - 197s 2s/step - loss: 5.3749e-04 - accuracy: 0.7337 - tp: 5997.0000 - fp: 0.0000e+00 - tn: 5999.0000 - fn: 2.0000 - precision: 1.0000 - recall: 0.9997 - auc: 1.0000\nEpoch 15/24\n94/94 [==============================] - 196s 2s/step - loss: 5.0400e-04 - accuracy: 0.7361 - tp: 5997.0000 - fp: 0.0000e+00 - tn: 5999.0000 - fn: 2.0000 - precision: 1.0000 - recall: 0.9997 - auc: 1.0000\nEpoch 16/24\n94/94 [==============================] - 196s 2s/step - loss: 4.3057e-04 - accuracy: 0.7433 - tp: 5997.0000 - fp: 0.0000e+00 - tn: 5999.0000 - fn: 2.0000 - precision: 1.0000 - recall: 0.9997 - auc: 1.0000\nEpoch 17/24\n94/94 [==============================] - 196s 2s/step - loss: 4.6602e-04 - accuracy: 0.7743 - tp: 5998.0000 - fp: 1.0000 - tn: 5998.0000 - fn: 1.0000 - precision: 0.9998 - recall: 0.9998 - auc: 1.0000\nEpoch 18/24\n94/94 [==============================] - 197s 2s/step - loss: 7.9791e-04 - accuracy: 0.8036 - tp: 5997.0000 - fp: 2.0000 - tn: 5997.0000 - fn: 2.0000 - precision: 0.9997 - recall: 0.9997 - auc: 1.0000\nEpoch 19/24\n94/94 [==============================] - 196s 2s/step - loss: 0.0038 - accuracy: 0.6467 - tp: 5995.0000 - fp: 5.0000 - tn: 5994.0000 - fn: 4.0000 - precision: 0.9992 - recall: 0.9993 - auc: 0.9998\nEpoch 20/24\n94/94 [==============================] - 198s 2s/step - loss: 0.0013 - accuracy: 0.7038 - tp: 5997.0000 - fp: 3.0000 - tn: 5996.0000 - fn: 2.0000 - precision: 0.9995 - recall: 0.9997 - auc: 1.0000\nEpoch 21/24\n94/94 [==============================] - 196s 2s/step - loss: 6.5732e-04 - accuracy: 0.8033 - tp: 5998.0000 - fp: 1.0000 - tn: 5998.0000 - fn: 1.0000 - precision: 0.9998 - recall: 0.9998 - auc: 1.0000\nEpoch 22/24\n94/94 [==============================] - 196s 2s/step - loss: 4.7316e-04 - accuracy: 0.7912 - tp: 5997.0000 - fp: 0.0000e+00 - tn: 5999.0000 - fn: 2.0000 - precision: 1.0000 - recall: 0.9997 - auc: 1.0000\nEpoch 23/24\n94/94 [==============================] - 196s 2s/step - loss: 5.2201e-04 - accuracy: 0.8626 - tp: 5997.0000 - fp: 1.0000 - tn: 5998.0000 - fn: 2.0000 - precision: 0.9998 - recall: 0.9997 - auc: 1.0000\nEpoch 24/24\n94/94 [==============================] - 196s 2s/step - loss: 4.6280e-04 - accuracy: 0.8530 - tp: 5997.0000 - fp: 0.0000e+00 - tn: 5999.0000 - fn: 2.0000 - precision: 1.0000 - recall: 0.9997 - auc: 1.0000\n" ], [ "s# Save the entire model to a HDF5 file.\n# The '.h5' extension indicates that the model shuold be saved to HDF5.\nmodel.save(PATH+'my_model02_m2.h5') ", "_____no_output_____" ], [ "evaluation(model,train_data_gen,data = \"Training\")\n# evaluation(model,val_data_gen,data = \"Validation\")", "--------------Evaluating Training Dataset--------------\n94/94 [==============================] - 177s 2s/step - loss: 4.0114e-04 - accuracy: 0.8093 - tp: 5997.0000 - fp: 0.0000e+00 - tn: 5999.0000 - fn: 2.0000 - precision: 1.0000 - recall: 0.9997 - auc: 1.0000\nloss : 0.00040113908434825723\naccuracy : 
0.80930156\ntp : 5997.0\nfp : 0.0\ntn : 5999.0\nfn : 2.0\nprecision : 1.0\nrecall : 0.99966663\nauc : 0.99999994\nf1 : 0.9998332580202475\n" ], [ "plot_metrices(EPOCHS,history,if_val=False)", "_____no_output_____" ], [ "generator = train_data_gen\npredict = model.predict_generator(train_data_gen,verbose=1)", "_____no_output_____" ], [ "plot_confusion_matrix(predict=predict,generator=train_data_gen,threshold=0.2)", "_____no_output_____" ], [ "sub = submission_binary(model,threshold=0.4,submission_name=PATH+'lunar01_m2.pickle')", "_____no_output_____" ] ], [ [ "## Evalaution Prediction to determine Threshold. \n\nsubmission_binary function is written in different cells below", "_____no_output_____" ] ], [ [ "filenames=test_generator.filenames\nresults=pd.DataFrame({\"Image_File\":filenames,\n \"Class\":predict_class})\nresults['Image_File'] = results['Image_File'].apply(lambda x : x[12:])\n# # results['Class'] = results[results.Score == True ]\nresults['Class'] = results['Class'].map({True: 'Small', False: \"Large\"})\n\nresults['Class'].value_counts()", "_____no_output_____" ], [ "# Instantiate Generator\ntest_datagen = ImageDataGenerator(rescale=1./255)\ntest_dir = '/content/drive/My Drive/personal_hackathons/DataSet/lunar_rock/PATH/'\n\ntest_generator = test_datagen.flow_from_directory(\n test_dir,\n target_size=(IMG_HEIGHT, IMG_WIDTH),\n # color_mode=\"rgb\",\n shuffle = False,\n class_mode='binary',\n batch_size=batch_size)\n\n# Check test Files\nfilenames = test_generator.filenames\nnb_samples = len(filenames)\nprint(\"Test Size : {}\".format(nb_samples))\n# print(filenames)\n\n ", "Found 7534 images belonging to 1 classes.\nTest Size : 7534\n" ], [ "# Model Prediction\ntest_generator.reset()\npredict = model.predict_generator(test_generator,verbose=1)\nprint(\"Model Prediction Shape {}\".format(predict.shape))\n\nlabels = train_data_gen.class_indices\nprint(\"Labels : {}\".format(labels) )", "58/59 [============================>.] - ETA: 37s Model Prediction Shape (7534, 1)\nLabels : {'Large': 0, 'Small': 1}\n" ], [ "threshold = 0.8", "_____no_output_____" ], [ "# Predicting Classes based on Threshold\npredict_class = predict > threshold\npredict_class = predict_class.reshape(1,-1)\npredict_class = predict_class[0]\n\nresults=pd.DataFrame({\"Image_File\":filenames,\n \"Class\":predict_class})\n\nresults['Image_File'] = results['Image_File'].apply(lambda x : x[12:])\nresults['Class'] = results['Class'].map({True: 'Small', False: \"Large\"})\n\nprint(\"Distribution : {} \".format(results['Class'].value_counts()))\n\n# # Write Sumission\n# with open(submission_name,'wb') as f :\n# pickle.dump(results,f)\n", "Distribution : Large 3772\nSmall 3762\nName: Class, dtype: int64 \n" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
Record 4:
hexsha: e7fd90048d5593e5cac8a4174c0e77583341accc | size: 45,257 | ext: ipynb | lang: Jupyter Notebook
max_stars_repo_path: dissertation/Experiments.ipynb | max_stars_repo_name: mathnathan/notebooks | max_stars_repo_head_hexsha: 63ae2f17fd8e1cd8d80fef8ee3b0d3d11d45cd28 | max_stars_repo_licenses: [ "MIT" ] | max_stars_count: 1 | max_stars_repo_stars_event_min_datetime: 2019-12-04T11:04:45.000Z | max_stars_repo_stars_event_max_datetime: 2019-12-04T11:04:45.000Z
max_issues_repo_path: dissertation/Experiments.ipynb | max_issues_repo_name: mathnathan/notebooks | max_issues_repo_head_hexsha: 63ae2f17fd8e1cd8d80fef8ee3b0d3d11d45cd28 | max_issues_repo_licenses: [ "MIT" ] | max_issues_count: null | max_issues_repo_issues_event_min_datetime: null | max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: dissertation/Experiments.ipynb | max_forks_repo_name: mathnathan/notebooks | max_forks_repo_head_hexsha: 63ae2f17fd8e1cd8d80fef8ee3b0d3d11d45cd28 | max_forks_repo_licenses: [ "MIT" ] | max_forks_count: null | max_forks_repo_forks_event_min_datetime: null | max_forks_repo_forks_event_max_datetime: null
avg_line_length: 132.330409 | max_line_length: 17,992 | alphanum_fraction: 0.770245
cells / cell_types / cell_type_groups:
[ [ [ "import numpy as np\n\ndef f(x):\n return x*x\n\ndef h(x):\n return x\n\nimport matplotlib.pyplot as plt\n\nx = np.linspace(0.001,1,200)\nplt.plot(x,f(x),label='$f(X)$')\nplt.plot(x,h(x),label='$h(x)$')\nplt.plot(x,h(x)*np.log(f(x)),label='$h(x)\\log{f(x)}$')\nplt.plot(x,np.log(h(x)),label='$f(x)\\log{h(x)}$')\n\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "x", "_____no_output_____" ], [ "def ji(x,a):\n return (1-0.1*(1/x)**a)/a\ndef di(x):\n return np.log(x/2)+1", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\nimport numpy as np\nx = np.linspace(1e-5,2,1000)\nplt.plot(x,ji(x,-0.01),label='ji')\nplt.plot(x,di(x),label='di')\nplt.legend()\nplt.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
Record 5:
hexsha: e7fdb1bef1a2069e5b2677e47d4efb63b2093782 | size: 1,325 | ext: ipynb | lang: Jupyter Notebook
max_stars_repo_path: python/OCW/6.00/Simple Fibonacci.ipynb | max_stars_repo_name: chadwhiting/algorithms-python | max_stars_repo_head_hexsha: 39910a1845f6385455c87e439daff3217c6bafb8 | max_stars_repo_licenses: [ "MIT" ] | max_stars_count: null | max_stars_repo_stars_event_min_datetime: null | max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: python/OCW/6.00/Simple Fibonacci.ipynb | max_issues_repo_name: chadwhiting/algorithms-python | max_issues_repo_head_hexsha: 39910a1845f6385455c87e439daff3217c6bafb8 | max_issues_repo_licenses: [ "MIT" ] | max_issues_count: null | max_issues_repo_issues_event_min_datetime: null | max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: python/OCW/6.00/Simple Fibonacci.ipynb | max_forks_repo_name: chadwhiting/algorithms-python | max_forks_repo_head_hexsha: 39910a1845f6385455c87e439daff3217c6bafb8 | max_forks_repo_licenses: [ "MIT" ] | max_forks_count: null | max_forks_repo_forks_event_min_datetime: null | max_forks_repo_forks_event_max_datetime: null
avg_line_length: 16.5625 | max_line_length: 44 | alphanum_fraction: 0.431698
cells / cell_types / cell_type_groups:
[ [ [ "def fib(x):\n if x == 0 or x == 1:\n return 1\n else:\n return fib(x - 2) + fib(x - 1)", "_____no_output_____" ], [ "for i in range(11):\n print(i,fib(i))", "0 1\n1 1\n2 2\n3 3\n4 5\n5 8\n6 13\n7 21\n8 34\n9 55\n10 89\n" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
Record 6:
hexsha: e7fdbe9dccfabae6ca68f5c111732affe8cbcc61 | size: 1,974 | ext: ipynb | lang: Jupyter Notebook
max_stars_repo_path: connection.ipynb | max_stars_repo_name: Db2-DTE-POC/Db2-Click-To-Containerize-Lab | max_stars_repo_head_hexsha: e0c7f380bdbd2aa382cc5899ea327ed1ae63b331 | max_stars_repo_licenses: [ "Apache-2.0" ] | max_stars_count: null | max_stars_repo_stars_event_min_datetime: null | max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: connection.ipynb | max_issues_repo_name: Db2-DTE-POC/Db2-Click-To-Containerize-Lab | max_issues_repo_head_hexsha: e0c7f380bdbd2aa382cc5899ea327ed1ae63b331 | max_issues_repo_licenses: [ "Apache-2.0" ] | max_issues_count: null | max_issues_repo_issues_event_min_datetime: null | max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: connection.ipynb | max_forks_repo_name: Db2-DTE-POC/Db2-Click-To-Containerize-Lab | max_forks_repo_head_hexsha: e0c7f380bdbd2aa382cc5899ea327ed1ae63b331 | max_forks_repo_licenses: [ "Apache-2.0" ] | max_forks_count: null | max_forks_repo_forks_event_min_datetime: null | max_forks_repo_forks_event_max_datetime: null
avg_line_length: 27.416667 | max_line_length: 311 | alphanum_fraction: 0.607903
cells / cell_types / cell_type_groups:
[ [ [ "# Db2 Connection Document", "_____no_output_____" ], [ "This notebook contains the connect statement that will be used for connecting to Db2. The typical way of connecting to Db2 within a notebooks it to run the db2 notebook (`db2.ipynb`) and then issue the `%sql connect` statement:\n```sql\n%run db2.ipynb\n%sql connect to sample user ...\n```\n\nRather than having to change the connect statement in every notebook, this one file can be changed and all of the other notebooks will use the value in here. Note that if you do reset a connection within a notebook, you will need to issue the `CONNECT` command again or run this notebook to re-connect.\n\nThe `db2.ipynb` file is still used at the beginning of all notebooks to highlight the fact that we are using special code to allow Db2 commands to be issues from within Jupyter Notebooks.", "_____no_output_____" ], [ "### Connect to Db2\nThis code will connect to Db2 locally.", "_____no_output_____" ] ], [ [ "%sql CONNECT TO SAMPLE USER DB2INST1 USING db2inst1 HOST 10.0.0.2 PORT 50000", "_____no_output_____" ] ], [ [ "#### Credits: IBM 2019, George Baklarz [baklarz@ca.ibm.com]", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
Record 7:
hexsha: e7fdc7d916351de165d7f6a896336f93a6780734 | size: 38,120 | ext: ipynb | lang: Jupyter Notebook
max_stars_repo_path: docs/tutorials/hello_many_worlds.ipynb | max_stars_repo_name: sarvex/tensorflow-quantum | max_stars_repo_head_hexsha: 09ba5d35b082d8229458522471a0c1ca8b77198d | max_stars_repo_licenses: [ "Apache-2.0" ] | max_stars_count: 1,501 | max_stars_repo_stars_event_min_datetime: 2020-03-09T00:40:31.000Z | max_stars_repo_stars_event_max_datetime: 2022-03-28T19:59:57.000Z
max_issues_repo_path: docs/tutorials/hello_many_worlds.ipynb | max_issues_repo_name: sarvex/tensorflow-quantum | max_issues_repo_head_hexsha: 09ba5d35b082d8229458522471a0c1ca8b77198d | max_issues_repo_licenses: [ "Apache-2.0" ] | max_issues_count: 381 | max_issues_repo_issues_event_min_datetime: 2020-03-09T18:31:04.000Z | max_issues_repo_issues_event_max_datetime: 2022-03-28T18:47:32.000Z
max_forks_repo_path: docs/tutorials/hello_many_worlds.ipynb | max_forks_repo_name: sarvex/tensorflow-quantum | max_forks_repo_head_hexsha: 09ba5d35b082d8229458522471a0c1ca8b77198d | max_forks_repo_licenses: [ "Apache-2.0" ] | max_forks_count: 410 | max_forks_repo_forks_event_min_datetime: 2020-03-09T03:05:48.000Z | max_forks_repo_forks_event_max_datetime: 2022-03-31T12:08:14.000Z
avg_line_length: 31.017087 | max_line_length: 621 | alphanum_fraction: 0.507162
cells:
[ [ [ "##### Copyright 2020 The TensorFlow Authors.", "_____no_output_____" ] ], [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# Hello, many worlds", "_____no_output_____" ], [ "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/quantum/tutorials/hello_many_worlds\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/hello_many_worlds.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/quantum/blob/master/docs/tutorials/hello_many_worlds.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/hello_many_worlds.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>", "_____no_output_____" ], [ "This tutorial shows how a classical neural network can learn to correct qubit calibration errors. It introduces <a target=\"_blank\" href=\"https://github.com/quantumlib/Cirq\" class=\"external\">Cirq</a>, a Python framework to create, edit, and invoke Noisy Intermediate Scale Quantum (NISQ) circuits, and demonstrates how Cirq interfaces with TensorFlow Quantum.", "_____no_output_____" ], [ "## Setup", "_____no_output_____" ] ], [ [ "!pip install tensorflow==2.4.1", "_____no_output_____" ] ], [ [ "Install TensorFlow Quantum:", "_____no_output_____" ] ], [ [ "!pip install tensorflow-quantum", "_____no_output_____" ], [ "# Update package resources to account for version changes.\nimport importlib, pkg_resources\nimportlib.reload(pkg_resources)", "_____no_output_____" ] ], [ [ "Now import TensorFlow and the module dependencies:", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nimport tensorflow_quantum as tfq\n\nimport cirq\nimport sympy\nimport numpy as np\n\n# visualization tools\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom cirq.contrib.svg import SVGCircuit", "_____no_output_____" ] ], [ [ "## 1. The Basics", "_____no_output_____" ], [ "### 1.1 Cirq and parameterized quantum circuits\n\nBefore exploring TensorFlow Quantum (TFQ), let's look at some <a target=\"_blank\" href=\"https://github.com/quantumlib/Cirq\" class=\"external\">Cirq</a> basics. Cirq is a Python library for quantum computing from Google. 
You use it to define circuits, including static and parameterized gates.\n\nCirq uses <a target=\"_blank\" href=\"https://www.sympy.org\" class=\"external\">SymPy</a> symbols to represent free parameters.", "_____no_output_____" ] ], [ [ "a, b = sympy.symbols('a b')", "_____no_output_____" ] ], [ [ "The following code creates a two-qubit circuit using your parameters:", "_____no_output_____" ] ], [ [ "# Create two qubits\nq0, q1 = cirq.GridQubit.rect(1, 2)\n\n# Create a circuit on these qubits using the parameters you created above.\ncircuit = cirq.Circuit(\n cirq.rx(a).on(q0),\n cirq.ry(b).on(q1), cirq.CNOT(control=q0, target=q1))\n\nSVGCircuit(circuit)", "_____no_output_____" ] ], [ [ "To evaluate circuits, you can use the `cirq.Simulator` interface. You replace free parameters in a circuit with specific numbers by passing in a `cirq.ParamResolver` object. The following code calculates the raw state vector output of your parameterized circuit:", "_____no_output_____" ] ], [ [ "# Calculate a state vector with a=0.5 and b=-0.5.\nresolver = cirq.ParamResolver({a: 0.5, b: -0.5})\noutput_state_vector = cirq.Simulator().simulate(circuit, resolver).final_state_vector\noutput_state_vector", "_____no_output_____" ] ], [ [ "State vectors are not directly accessible outside of simulation (notice the complex numbers in the output above). To be physically realistic, you must specify a measurement, which converts a state vector into a real number that classical computers can understand. Cirq specifies measurements using combinations of the <a target=\"_blank\" href=\"https://en.wikipedia.org/wiki/Pauli_matrices\" class=\"external\">Pauli operators</a> $\\hat{X}$, $\\hat{Y}$, and $\\hat{Z}$. As illustration, the following code measures $\\hat{Z}_0$ and $\\frac{1}{2}\\hat{Z}_0 + \\hat{X}_1$ on the state vector you just simulated:", "_____no_output_____" ] ], [ [ "z0 = cirq.Z(q0)\n\nqubit_map={q0: 0, q1: 1}\n\nz0.expectation_from_state_vector(output_state_vector, qubit_map).real", "_____no_output_____" ], [ "z0x1 = 0.5 * z0 + cirq.X(q1)\n\nz0x1.expectation_from_state_vector(output_state_vector, qubit_map).real", "_____no_output_____" ] ], [ [ "### 1.2 Quantum circuits as tensors\n\nTensorFlow Quantum (TFQ) provides `tfq.convert_to_tensor`, a function that converts Cirq objects into tensors. This allows you to send Cirq objects to our <a target=\"_blank\" href=\"https://www.tensorflow.org/quantum/api_docs/python/tfq/layers\">quantum layers</a> and <a target=\"_blank\" href=\"https://www.tensorflow.org/quantum/api_docs/python/tfq/get_expectation_op\">quantum ops</a>. The function can be called on lists or arrays of Cirq Circuits and Cirq Paulis:", "_____no_output_____" ] ], [ [ "# Rank 1 tensor containing 1 circuit.\ncircuit_tensor = tfq.convert_to_tensor([circuit])\n\nprint(circuit_tensor.shape)\nprint(circuit_tensor.dtype)", "_____no_output_____" ] ], [ [ "This encodes the Cirq objects as `tf.string` tensors that `tfq` operations decode as needed.", "_____no_output_____" ] ], [ [ "# Rank 1 tensor containing 2 Pauli operators.\npauli_tensor = tfq.convert_to_tensor([z0, z0x1])\npauli_tensor.shape", "_____no_output_____" ] ], [ [ "### 1.3 Batching circuit simulation\n\nTFQ provides methods for computing expectation values, samples, and state vectors. For now, let's focus on *expectation values*.\n\nThe highest-level interface for calculating expectation values is the `tfq.layers.Expectation` layer, which is a `tf.keras.Layer`. 
In its simplest form, this layer is equivalent to simulating a parameterized circuit over many `cirq.ParamResolvers`; however, TFQ allows batching following TensorFlow semantics, and circuits are simulated using efficient C++ code.\n\nCreate a batch of values to substitute for our `a` and `b` parameters:", "_____no_output_____" ] ], [ [ "batch_vals = np.array(np.random.uniform(0, 2 * np.pi, (5, 2)), dtype=np.float32)", "_____no_output_____" ] ], [ [ "Batching circuit execution over parameter values in Cirq requires a loop:", "_____no_output_____" ] ], [ [ "cirq_results = []\ncirq_simulator = cirq.Simulator()\n\nfor vals in batch_vals:\n resolver = cirq.ParamResolver({a: vals[0], b: vals[1]})\n final_state_vector = cirq_simulator.simulate(circuit, resolver).final_state_vector\n cirq_results.append(\n [z0.expectation_from_state_vector(final_state_vector, {\n q0: 0,\n q1: 1\n }).real])\n\nprint('cirq batch results: \\n {}'.format(np.array(cirq_results)))", "_____no_output_____" ] ], [ [ "The same operation is simplified in TFQ:", "_____no_output_____" ] ], [ [ "tfq.layers.Expectation()(circuit,\n symbol_names=[a, b],\n symbol_values=batch_vals,\n operators=z0)", "_____no_output_____" ] ], [ [ "## 2. Hybrid quantum-classical optimization\n\nNow that you've seen the basics, let's use TensorFlow Quantum to construct a *hybrid quantum-classical neural net*. You will train a classical neural net to control a single qubit. The control will be optimized to correctly prepare the qubit in the `0` or `1` state, overcoming a simulated systematic calibration error. This figure shows the architecture:\n\n<img src=\"./images/nn_control1.png\" width=\"1000\">\n\nEven without a neural network this is a straightforward problem to solve, but the theme is similar to the real quantum control problems you might solve using TFQ. It demonstrates an end-to-end example of a quantum-classical computation using the `tfq.layers.ControlledPQC` (Parametrized Quantum Circuit) layer inside of a `tf.keras.Model`.", "_____no_output_____" ], [ "For the implementation of this tutorial, this architecture is split into 3 parts:\n\n- The *input circuit* or *datapoint circuit*: The first three $R$ gates.\n- The *controlled circuit*: The other three $R$ gates.\n- The *controller*: The classical neural-network setting the parameters of the controlled circuit.", "_____no_output_____" ], [ "### 2.1 The controlled circuit definition\n\nDefine a learnable single bit rotation, as indicated in the figure above. This will correspond to our controlled circuit.", "_____no_output_____" ] ], [ [ "# Parameters that the classical NN will feed values into.\ncontrol_params = sympy.symbols('theta_1 theta_2 theta_3')\n\n# Create the parameterized circuit.\nqubit = cirq.GridQubit(0, 0)\nmodel_circuit = cirq.Circuit(\n cirq.rz(control_params[0])(qubit),\n cirq.ry(control_params[1])(qubit),\n cirq.rx(control_params[2])(qubit))\n\nSVGCircuit(model_circuit)", "_____no_output_____" ] ], [ [ "### 2.2 The controller\n\nNow define controller network: ", "_____no_output_____" ] ], [ [ "# The classical neural network layers.\ncontroller = tf.keras.Sequential([\n tf.keras.layers.Dense(10, activation='elu'),\n tf.keras.layers.Dense(3)\n])", "_____no_output_____" ] ], [ [ "Given a batch of commands, the controller outputs a batch of control signals for the controlled circuit. 
\n\nThe controller is randomly initialized so these outputs are not useful, yet.", "_____no_output_____" ] ], [ [ "controller(tf.constant([[0.0],[1.0]])).numpy()", "_____no_output_____" ] ], [ [ "### 2.3 Connect the controller to the circuit", "_____no_output_____" ], [ "Use `tfq` to connect the controller to the controlled circuit, as a single `keras.Model`. \n\nSee the [Keras Functional API guide](https://www.tensorflow.org/guide/keras/functional) for more about this style of model definition.\n\nFirst define the inputs to the model: ", "_____no_output_____" ] ], [ [ "# This input is the simulated miscalibration that the model will learn to correct.\ncircuits_input = tf.keras.Input(shape=(),\n # The circuit-tensor has dtype `tf.string` \n dtype=tf.string,\n name='circuits_input')\n\n# Commands will be either `0` or `1`, specifying the state to set the qubit to.\ncommands_input = tf.keras.Input(shape=(1,),\n dtype=tf.dtypes.float32,\n name='commands_input')\n", "_____no_output_____" ] ], [ [ "Next apply operations to those inputs, to define the computation.", "_____no_output_____" ] ], [ [ "dense_2 = controller(commands_input)\n\n# TFQ layer for classically controlled circuits.\nexpectation_layer = tfq.layers.ControlledPQC(model_circuit,\n # Observe Z\n operators = cirq.Z(qubit))\nexpectation = expectation_layer([circuits_input, dense_2])", "_____no_output_____" ] ], [ [ "Now package this computation as a `tf.keras.Model`:", "_____no_output_____" ] ], [ [ "# The full Keras model is built from our layers.\nmodel = tf.keras.Model(inputs=[circuits_input, commands_input],\n outputs=expectation)", "_____no_output_____" ] ], [ [ "The network architecture is indicated by the plot of the model below.\nCompare this model plot to the architecture diagram to verify correctness.\n\nNote: May require a system install of the `graphviz` package.", "_____no_output_____" ] ], [ [ "tf.keras.utils.plot_model(model, show_shapes=True, dpi=70)", "_____no_output_____" ] ], [ [ "This model takes two inputs: The commands for the controller, and the input-circuit whose output the controller is attempting to correct. ", "_____no_output_____" ], [ "### 2.4 The dataset", "_____no_output_____" ], [ "The model attempts to output the correct correct measurement value of $\\hat{Z}$ for each command. The commands and correct values are defined below.", "_____no_output_____" ] ], [ [ "# The command input values to the classical NN.\ncommands = np.array([[0], [1]], dtype=np.float32)\n\n# The desired Z expectation value at output of quantum circuit.\nexpected_outputs = np.array([[1], [-1]], dtype=np.float32)", "_____no_output_____" ] ], [ [ "This is not the entire training dataset for this task. 
\nEach datapoint in the dataset also needs an input circuit.", "_____no_output_____" ], [ "### 2.4 Input circuit definition\n\nThe input-circuit below defines the random miscalibration the model will learn to correct.", "_____no_output_____" ] ], [ [ "random_rotations = np.random.uniform(0, 2 * np.pi, 3)\nnoisy_preparation = cirq.Circuit(\n cirq.rx(random_rotations[0])(qubit),\n cirq.ry(random_rotations[1])(qubit),\n cirq.rz(random_rotations[2])(qubit)\n)\ndatapoint_circuits = tfq.convert_to_tensor([\n noisy_preparation\n] * 2) # Make two copied of this circuit", "_____no_output_____" ] ], [ [ "There are two copies of the circuit, one for each datapoint.", "_____no_output_____" ] ], [ [ "datapoint_circuits.shape", "_____no_output_____" ] ], [ [ "### 2.5 Training", "_____no_output_____" ], [ "With the inputs defined you can test-run the `tfq` model.", "_____no_output_____" ] ], [ [ "model([datapoint_circuits, commands]).numpy()", "_____no_output_____" ] ], [ [ "Now run a standard training process to adjust these values towards the `expected_outputs`.", "_____no_output_____" ] ], [ [ "optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)\nloss = tf.keras.losses.MeanSquaredError()\nmodel.compile(optimizer=optimizer, loss=loss)\nhistory = model.fit(x=[datapoint_circuits, commands],\n y=expected_outputs,\n epochs=30,\n verbose=0)", "_____no_output_____" ], [ "plt.plot(history.history['loss'])\nplt.title(\"Learning to Control a Qubit\")\nplt.xlabel(\"Iterations\")\nplt.ylabel(\"Error in Control\")\nplt.show()", "_____no_output_____" ] ], [ [ "From this plot you can see that the neural network has learned to overcome the systematic miscalibration.", "_____no_output_____" ], [ "### 2.6 Verify outputs\nNow use the trained model, to correct the qubit calibration errors. With Cirq:", "_____no_output_____" ] ], [ [ "def check_error(command_values, desired_values):\n \"\"\"Based on the value in `command_value` see how well you could prepare\n the full circuit to have `desired_value` when taking expectation w.r.t. Z.\"\"\"\n params_to_prepare_output = controller(command_values).numpy()\n full_circuit = noisy_preparation + model_circuit\n\n # Test how well you can prepare a state to get expectation the expectation\n # value in `desired_values`\n for index in [0, 1]:\n state = cirq_simulator.simulate(\n full_circuit,\n {s:v for (s,v) in zip(control_params, params_to_prepare_output[index])}\n ).final_state_vector\n expt = cirq.Z(qubit).expectation_from_state_vector(state, {qubit: 0}).real\n print(f'For a desired output (expectation) of {desired_values[index]} with'\n f' noisy preparation, the controller\\nnetwork found the following '\n f'values for theta: {params_to_prepare_output[index]}\\nWhich gives an'\n f' actual expectation of: {expt}\\n')\n\n\ncheck_error(commands, expected_outputs)", "_____no_output_____" ] ], [ [ "The value of the loss function during training provides a rough idea of how well the model is learning. The lower the loss, the closer the expectation values in the above cell is to `desired_values`. If you aren't as concerned with the parameter values, you can always check the outputs from above using `tfq`:", "_____no_output_____" ] ], [ [ "model([datapoint_circuits, commands])", "_____no_output_____" ] ], [ [ "## 3 Learning to prepare eigenstates of different operators\n\nThe choice of the $\\pm \\hat{Z}$ eigenstates corresponding to 1 and 0 was arbitrary. 
You could have just as easily wanted 1 to correspond to the $+ \\hat{Z}$ eigenstate and 0 to correspond to the $-\\hat{X}$ eigenstate. One way to accomplish this is by specifying a different measurement operator for each command, as indicated in the figure below:\n\n<img src=\"./images/nn_control2.png\" width=\"1000\">\n\nThis requires use of <code>tfq.layers.Expectation</code>. Now your input has grown to include three objects: circuit, command, and operator. The output is still the expectation value.", "_____no_output_____" ], [ "### 3.1 New model definition\n\nLets take a look at the model to accomplish this task:", "_____no_output_____" ] ], [ [ "# Define inputs.\ncommands_input = tf.keras.layers.Input(shape=(1),\n dtype=tf.dtypes.float32,\n name='commands_input')\ncircuits_input = tf.keras.Input(shape=(),\n # The circuit-tensor has dtype `tf.string` \n dtype=tf.dtypes.string,\n name='circuits_input')\noperators_input = tf.keras.Input(shape=(1,),\n dtype=tf.dtypes.string,\n name='operators_input')", "_____no_output_____" ] ], [ [ "Here is the controller network:", "_____no_output_____" ] ], [ [ "# Define classical NN.\ncontroller = tf.keras.Sequential([\n tf.keras.layers.Dense(10, activation='elu'),\n tf.keras.layers.Dense(3)\n])", "_____no_output_____" ] ], [ [ "Combine the circuit and the controller into a single `keras.Model` using `tfq`:", "_____no_output_____" ] ], [ [ "dense_2 = controller(commands_input)\n\n# Since you aren't using a PQC or ControlledPQC you must append\n# your model circuit onto the datapoint circuit tensor manually.\nfull_circuit = tfq.layers.AddCircuit()(circuits_input, append=model_circuit)\nexpectation_output = tfq.layers.Expectation()(full_circuit,\n symbol_names=control_params,\n symbol_values=dense_2,\n operators=operators_input)\n\n# Contruct your Keras model.\ntwo_axis_control_model = tf.keras.Model(\n inputs=[circuits_input, commands_input, operators_input],\n outputs=[expectation_output])", "_____no_output_____" ] ], [ [ "### 3.2 The dataset\n\nNow you will also include the operators you wish to measure for each datapoint you supply for `model_circuit`:", "_____no_output_____" ] ], [ [ "# The operators to measure, for each command.\noperator_data = tfq.convert_to_tensor([[cirq.X(qubit)], [cirq.Z(qubit)]])\n\n# The command input values to the classical NN.\ncommands = np.array([[0], [1]], dtype=np.float32)\n\n# The desired expectation value at output of quantum circuit.\nexpected_outputs = np.array([[1], [-1]], dtype=np.float32)", "_____no_output_____" ] ], [ [ "### 3.3 Training\n\nNow that you have your new inputs and outputs you can train once again using keras.", "_____no_output_____" ] ], [ [ "optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)\nloss = tf.keras.losses.MeanSquaredError()\n\ntwo_axis_control_model.compile(optimizer=optimizer, loss=loss)\n\nhistory = two_axis_control_model.fit(\n x=[datapoint_circuits, commands, operator_data],\n y=expected_outputs,\n epochs=30,\n verbose=1)", "_____no_output_____" ], [ "plt.plot(history.history['loss'])\nplt.title(\"Learning to Control a Qubit\")\nplt.xlabel(\"Iterations\")\nplt.ylabel(\"Error in Control\")\nplt.show()", "_____no_output_____" ] ], [ [ "The loss function has dropped to zero.", "_____no_output_____" ], [ "The `controller` is available as a stand-alone model. Call the controller, and check its response to each command signal. 
It would take some work to correctly compare these outputs to the contents of `random_rotations`.", "_____no_output_____" ] ], [ [ "controller.predict(np.array([0,1]))", "_____no_output_____" ] ], [ [ "Success: See if you can adapt the `check_error` function from your first model to work with this new model architecture.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
e7fdd01e23358499fa214c21e64b6db78395b8c5
35,576
ipynb
Jupyter Notebook
examples/Untitled1.ipynb
iamtxena/coherent-state-discrimination
88a347d08730b2436264da8737940412847e60bf
[ "Apache-2.0" ]
null
null
null
examples/Untitled1.ipynb
iamtxena/coherent-state-discrimination
88a347d08730b2436264da8737940412847e60bf
[ "Apache-2.0" ]
null
null
null
examples/Untitled1.ipynb
iamtxena/coherent-state-discrimination
88a347d08730b2436264da8737940412847e60bf
[ "Apache-2.0" ]
null
null
null
70.169625
1,818
0.677311
[ [ [ "import numpy as np\nimport strawberryfields as sf\nimport random\nfrom typing import List\nimport tensorflow as tf", "_____no_output_____" ], [ "batch_size = 100\nalphas = list(np.arange(0.0, 2.1, 0.05))\nbetas = list(np.arange(-8.0, 2.0, 0.1))\nA = 1\nA = -1", "_____no_output_____" ], [ "def create_codeword(letters: List[float], codeword_size=10) -> List[float]:\n return [random.choice(letters) for _ in range(codeword_size)]", "_____no_output_____" ], [ "codeword = create_codeword([A*alphas[15], -A*alphas[15]], codeword_size=batch_size)", "_____no_output_____" ], [ "p_1 = codeword.count(A*alphas[15])/len(codeword)\np_0 = 1 - p_1", "_____no_output_____" ], [ "len(betas)\n", "_____no_output_____" ], [ "len(codeword)", "_____no_output_____" ], [ "eng = sf.Engine(backend=\"fock\", backend_options={\n \"cutoff_dim\": 7,\n \"batch_size\": len(codeword),\n })", "_____no_output_____" ], [ "circuit = sf.Program(1)\n\nbeta = circuit.params(\"beta\")\nalpha = circuit.params(\"alpha\")\n\nwith circuit.context as q:\n sf.ops.Dgate(alpha, 0.0) | q[0]\n sf.ops.Dgate(beta, 0.0) | q[0]", "_____no_output_____" ], [ "results = eng.run(circuit, args={\n \"beta\": betas[14],\n \"alpha\": codeword[14]\n })\n \n# get the probability of |0>\np_zero = results.state.fock_prob([0])\n\n# get the porbability of anything by |0>\np_one = 1 - p_zero", "_____no_output_____" ], [ "p_zero", "_____no_output_____" ], [ "p_one", "_____no_output_____" ], [ "eng.reset()\nprog = sf.Program(1)\n\nalpha = prog.params(\"alpha\")\n\nwith prog.context as q:\n sf.ops.Dgate(alpha) | q\n\n# Assign our TensorFlow variables, so that we can\n# refer to them later when differentiating/training.\na = tf.Variable(0.43)\n\nwith tf.GradientTape() as tape:\n # Here, we map our quantum free parameter `alpha`\n # to our TensorFlow variable `a` and pass it to the engine.\n\n result = eng.run(prog, args={\"alpha\": a})\n state = result.state\n\n # Note that all processing, including state-based post-processing,\n # must be done within the gradient tape context!\n mean, var = state.mean_photon(0)\n\n# test that the gradient of the mean photon number is correct\n\ngrad = tape.gradient(mean, [a])\nprint(\"Gradient:\", grad)", "Gradient: [<tf.Tensor: shape=(), dtype=float32, numpy=0.85999966>]\n" ], [ "print(\"Exact gradient:\", 2 * a)\nprint(\"Exact and TensorFlow gradient agree:\", np.allclose(grad, 2 * a))", "Exact gradient: tf.Tensor(0.86, shape=(), dtype=float32)\nExact and TensorFlow gradient agree: True\n" ], [ "prog = sf.Program(2)\n# we can create symbolic parameters one by one\nalpha = prog.params(\"alpha\")\n\n# or create multiple at the same time\ntheta_bs, phi_bs = prog.params(\"theta_bs\", \"phi_bs\")\n\nwith prog.context as q:\n # States\n sf.ops.Coherent(alpha) | q[0]\n # Gates\n sf.ops.BSgate(theta_bs, phi_bs) | (q[0], q[1])\n # Measurements\n sf.ops.MeasureHomodyne(0.0) | q[0]", "_____no_output_____" ], [ "eng = sf.Engine(backend=\"tf\", backend_options={\"cutoff_dim\": 7})", "_____no_output_____" ], [ "mapping = {\"alpha\": tf.Variable(0.5), \"theta_bs\": tf.constant(0.4), \"phi_bs\": tf.constant(0.0)}\n\nresult = eng.run(prog, args=mapping)", "2021-10-14 08:11:00.817102: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-10-14 08:11:00.823202: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be 
at least one NUMA node, so returning NUMA node zero\n2021-10-14 08:11:00.823794: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-10-14 08:11:00.824731: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n2021-10-14 08:11:00.825165: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-10-14 08:11:00.825761: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-10-14 08:11:00.826332: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-10-14 08:11:01.127872: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-10-14 08:11:01.128365: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-10-14 08:11:01.128823: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-10-14 08:11:01.129257: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 22136 MB memory: -> device: 0, name: NVIDIA GeForce RTX 3090, pci bus id: 0000:01:00.0, compute capability: 8.6\n" ], [ "print(result.samples)", "tf.Tensor([[-0.27430275+0.j]], shape=(1, 1), dtype=complex64)\n" ], [ "state = result.state\nprint(\"Density matrix element [0,0,1,2]:\", state.dm()[0, 0, 1, 2])", "Density matrix element [0,0,1,2]: tf.Tensor((0.0050318697-1.2697814e-14j), shape=(), dtype=complex64)\n" ], [ "# run simulation in batched-processing mode\nbatch_size = 3\nprog = sf.Program(1)\neng = sf.Engine(\"tf\", backend_options={\"cutoff_dim\": 15, \"batch_size\": batch_size})\n\nx = prog.params(\"x\")\n\nwith prog.context as q:\n sf.ops.Thermal(x) | q[0]\n\nx_val = tf.Variable([0.1, 0.2, 0.3])\nresult = eng.run(prog, args={\"x\": x_val})\nprint(\"Mean photon number of mode 0 (batched):\", result.state.mean_photon(0)[0])", "Mean photon number of mode 0 (batched): tf.Tensor([0.09999998 0.19999997 0.29999998], shape=(3,), dtype=float32)\n" ], [ "# initialize engine and program objects\neng = sf.Engine(backend=\"tf\", backend_options={\"cutoff_dim\": 7})\ncircuit = sf.Program(1)\n\ntf_alpha = tf.Variable(0.1)\ntf_phi = tf.Variable(0.1)\n\nalpha, phi = circuit.params(\"alpha\", \"phi\")\n\nwith circuit.context as q:\n sf.ops.Dgate(alpha, phi) | q[0]\n\nopt = tf.keras.optimizers.Adam(learning_rate=0.1)\nsteps = 50\n\nfor step in range(steps):\n\n # reset the engine if it has already been executed\n if eng.run_progs:\n 
eng.reset()\n\n with tf.GradientTape() as tape:\n # execute the engine\n results = eng.run(circuit, args={\"alpha\": tf_alpha, \"phi\": tf_phi})\n # get the probability of fock state |1>\n prob = results.state.fock_prob([1])\n # negative sign to maximize prob\n loss = -prob\n\n gradients = tape.gradient(loss, [tf_alpha, tf_phi])\n opt.apply_gradients(zip(gradients, [tf_alpha, tf_phi]))\n print(\"Probability at step {}: {}\".format(step, prob))", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7fdd9e521749df84ab8b7450fb4670fe4858a08
71,665
ipynb
Jupyter Notebook
nn.ipynb
LaudateCorpus1/pytorch_notebooks
a5eee7c71d34d94b3711fe24474ee5b6b4b3056f
[ "MIT" ]
495
2020-01-12T18:56:47.000Z
2022-03-28T11:52:13.000Z
nn.ipynb
alketcecaj12/pytorch_notebooks
a2fbe8ff710496cd64f3320b0fa5fbdf69ed6d3b
[ "MIT" ]
null
null
null
nn.ipynb
alketcecaj12/pytorch_notebooks
a2fbe8ff710496cd64f3320b0fa5fbdf69ed6d3b
[ "MIT" ]
93
2020-01-13T18:29:56.000Z
2022-03-27T22:10:31.000Z
48.718559
779
0.560957
[ [ [ "<a href=\"https://colab.research.google.com/github/dair-ai/pytorch_notebooks/blob/master/nn.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "## A Simple Neural Network from Scratch with PyTorch and Google Colab", "_____no_output_____" ], [ "In this tutorial we will implement a simple neural network from scratch using PyTorch. The idea of the tutorial is to teach you the basics of PyTorch and how it can be used to implement a neural network from scratch. I will go over some of the basic functionalities and concepts available in PyTorch that will allow you to build your own neural networks. \n\nThis tutorial assumes you have prior knowledge of how a neural network works. Don’t worry! Even if you are not so sure, you will be okay. For advanced PyTorch users, this tutorial may still serve as a refresher. This tutorial is heavily inspired by this [Neural Network implementation](https://repl.it/talk/announcements/Build-a-Neural-Network-in-Python/5457) coded purely using Numpy. In fact, I tried re-implementing the code using PyTorch instead and added my own intuitions and explanations. Thanks to [Samay](https://repl.it/@shamdasani) for his phenomenal work, I hope this inspires many others as it did with me.\n\nSince we are working on Google Colab, we will need to install the PyTorch library. You can do this by using the following command:", "_____no_output_____" ] ], [ [ "!pip3 install torch torchvision", "Collecting torch\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/7e/60/66415660aa46b23b5e1b72bc762e816736ce8d7260213e22365af51e8f9c/torch-1.0.0-cp36-cp36m-manylinux1_x86_64.whl (591.8MB)\n\u001b[K 100% |████████████████████████████████| 591.8MB 26kB/s \ntcmalloc: large alloc 1073750016 bytes == 0x61f82000 @ 0x7f400bb202a4 0x591a07 0x5b5d56 0x502e9a 0x506859 0x502209 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x507641 0x502209 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x507641 0x504c28 0x502540 0x502f3d 0x507641\n\u001b[?25hCollecting torchvision\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/ca/0d/f00b2885711e08bd71242ebe7b96561e6f6d01fdb4b9dcf4d37e2e13c5e1/torchvision-0.2.1-py2.py3-none-any.whl (54kB)\n\u001b[K 100% |████████████████████████████████| 61kB 23.4MB/s \n\u001b[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torchvision) (1.14.6)\nCollecting pillow>=4.1.1 (from torchvision)\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/92/e3/217dfd0834a51418c602c96b110059c477260c7fee898542b100913947cf/Pillow-5.4.0-cp36-cp36m-manylinux1_x86_64.whl (2.0MB)\n\u001b[K 100% |████████████████████████████████| 2.0MB 6.8MB/s \n\u001b[?25hRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from torchvision) (1.11.0)\nInstalling collected packages: torch, pillow, torchvision\n Found existing installation: Pillow 4.0.0\n Uninstalling Pillow-4.0.0:\n Successfully uninstalled Pillow-4.0.0\nSuccessfully installed pillow-5.4.0 torch-1.0.0 torchvision-0.2.1\n" ] ], [ [ "\nThe `torch` module provides all the necessary **tensor** operators you will need to implement your first neural network from scratch in PyTorch. That's right! 
In PyTorch everything is a Tensor, so this is the first thing you will need to get used to.", "_____no_output_____" ] ], [ [ "import torch\nimport torch.nn as nn", "_____no_output_____" ] ], [ [ "## Data\nLet's start by creating some sample data using the `torch.tensor` command. In Numpy, this could be done with `np.array`. Both functions serve the same purpose, but in PyTorch everything is a Tensor as opposed to a vector or matrix. We define types in PyTorch using the `dtype=torch.xxx` command. \n\nIn the data below, `X` represents the amount of hours studied and how much time students spent sleeping, whereas `y` represent grades. The variable `xPredicted` is a single input for which we want to predict a grade using the parameters learned by the neural network. Remember, the neural network wants to learn a mapping between `X` and `y`, so it will try to take a guess from what it has learned from the training data. ", "_____no_output_____" ] ], [ [ "X = torch.tensor(([2, 9], [1, 5], [3, 6]), dtype=torch.float) # 3 X 2 tensor\ny = torch.tensor(([92], [100], [89]), dtype=torch.float) # 3 X 1 tensor\nxPredicted = torch.tensor(([4, 8]), dtype=torch.float) # 1 X 2 tensor", "_____no_output_____" ] ], [ [ "You can check the size of the tensors we have just created with the `size` command. This is equivalent to the `shape` command used in tools such as Numpy and Tensorflow. ", "_____no_output_____" ] ], [ [ "print(X.size())\nprint(y.size())", "torch.Size([3, 2])\ntorch.Size([3, 1])\n" ] ], [ [ "## Scaling\n\nBelow we are performing some scaling on the sample data. Notice that the `max` function returns both a tensor and the corresponding indices. So we use `_` to capture the indices which we won't use here because we are only interested in the max values to conduct the scaling. Perfect! Our data is now in a very nice format our neural network will appreciate later on. ", "_____no_output_____" ] ], [ [ "# scale units\nX_max, _ = torch.max(X, 0)\nxPredicted_max, _ = torch.max(xPredicted, 0)\n\nX = torch.div(X, X_max)\nxPredicted = torch.div(xPredicted, xPredicted_max)\ny = y / 100 # max test score is 100\nprint(xPredicted)", "tensor([0.5000, 1.0000])\n" ] ], [ [ "Notice that there are two functions `max` and `div` that I didn't discuss above. They do exactly what they imply: `max` finds the maximum value in a vector... I mean tensor; and `div` is basically a nice little function to divide two tensors. ", "_____no_output_____" ], [ "## Model (Computation Graph)\nOnce the data has been processed and it is in the proper format, all you need to do now is to define your model. Here is where things begin to change a little as compared to how you would build your neural networks using, say, something like Keras or Tensorflow. However, you will realize quickly as you go along that PyTorch doesn't differ much from other deep learning tools. At the end of the day we are constructing a computation graph, which is used to dictate how data should flow and what type of operations are performed on this information. 
\n\nFor illustration purposes, we are building the following neural network or computation graph:\n\n\n![alt text](https://drive.google.com/uc?export=view&id=1l-sKpcCJCEUJV1BlAqcVAvLXLpYCInV6)", "_____no_output_____" ] ], [ [ "class Neural_Network(nn.Module):\n def __init__(self, ):\n super(Neural_Network, self).__init__()\n # parameters\n # TODO: parameters can be parameterized instead of declaring them here\n self.inputSize = 2\n self.outputSize = 1\n self.hiddenSize = 3\n \n # weights\n self.W1 = torch.randn(self.inputSize, self.hiddenSize) # 3 X 2 tensor\n self.W2 = torch.randn(self.hiddenSize, self.outputSize) # 3 X 1 tensor\n \n def forward(self, X):\n self.z = torch.matmul(X, self.W1) # 3 X 3 \".dot\" does not broadcast in PyTorch\n self.z2 = self.sigmoid(self.z) # activation function\n self.z3 = torch.matmul(self.z2, self.W2)\n o = self.sigmoid(self.z3) # final activation function\n return o\n \n def sigmoid(self, s):\n return 1 / (1 + torch.exp(-s))\n \n def sigmoidPrime(self, s):\n # derivative of sigmoid\n return s * (1 - s)\n \n def backward(self, X, y, o):\n self.o_error = y - o # error in output\n self.o_delta = self.o_error * self.sigmoidPrime(o) # derivative of sig to error\n self.z2_error = torch.matmul(self.o_delta, torch.t(self.W2))\n self.z2_delta = self.z2_error * self.sigmoidPrime(self.z2)\n self.W1 += torch.matmul(torch.t(X), self.z2_delta)\n self.W2 += torch.matmul(torch.t(self.z2), self.o_delta)\n \n def train(self, X, y):\n # forward + backward pass for training\n o = self.forward(X)\n self.backward(X, y, o)\n \n def saveWeights(self, model):\n # we will use the PyTorch internal storage functions\n torch.save(model, \"NN\")\n # you can reload model with all the weights and so forth with:\n # torch.load(\"NN\")\n \n def predict(self):\n print (\"Predicted data based on trained weights: \")\n print (\"Input (scaled): \\n\" + str(xPredicted))\n print (\"Output: \\n\" + str(self.forward(xPredicted)))\n ", "_____no_output_____" ] ], [ [ "For the purpose of this tutorial, we are not going to be talking math stuff, that's for another day. I just want you to get a gist of what it takes to build a neural network from scratch using PyTorch. Let's break down the model which was declared via the class above. \n\n## Class Header\nFirst, we defined our model via a class because that is the recommended way to build the computation graph. The class header contains the name of the class `Neural Network` and the parameter `nn.Module` which basically indicates that we are defining our own neural network. \n\n```python\nclass Neural_Network(nn.Module):\n```\n\n## Initialization\nThe next step is to define the initializations ( `def __init__(self,)`) that will be performed upon creating an instance of the customized neural network. You can declare the parameters of your model here, but typically, you would declare the structure of your network in this section -- the size of the hidden layers and so forth. Since we are building the neural network from scratch, we explicitly declared the size of the weights matrices: one that stores the parameters from the input to hidden layer; and one that stores the parameter from the hidden to output layer. Both weight matrices are initialized with values randomly chosen from a normal distribution via `torch.randn(...)`. Note that we are not using bias just to keep things as simple as possible. 
\n\n```python\ndef __init__(self, ):\n super(Neural_Network, self).__init__()\n # parameters\n # TODO: parameters can be parameterized instead of declaring them here\n self.inputSize = 2\n self.outputSize = 1\n self.hiddenSize = 3\n\n # weights\n self.W1 = torch.randn(self.inputSize, self.hiddenSize) # 3 X 2 tensor\n self.W2 = torch.randn(self.hiddenSize, self.outputSize) # 3 X 1 tensor\n```\n\n## The Forward Function\nThe `forward` function is where all the magic happens (see below). This is where the data enters and is fed into the computation graph (i.e., the neural network structure we have built). Since we are building a simple neural network with one hidden layer, our forward function looks very simple:\n\n```python\ndef forward(self, X):\n self.z = torch.matmul(X, self.W1) \n self.z2 = self.sigmoid(self.z) # activation function\n self.z3 = torch.matmul(self.z2, self.W2)\n o = self.sigmoid(self.z3) # final activation function\n return o\n```\n\nThe `forward` function above takes the input `X`and then performs a matrix multiplication (`torch.matmul(...)`) with the first weight matrix `self.W1`. Then the result is applied an activation function, `sigmoid`. The resulting matrix of the activation is then multiplied with the second weight matrix `self.W2`. Then another activation if performed, which renders the output of the neural network or computation graph. The process I described above is simply what's known as a `feedforward pass`. In order for the weights to optimize when training, we need a backpropagation algorithm. \n\n## The Backward Function\nThe `backward` function contains the backpropagation algorithm, where the goal is to essentially minimize the loss with respect to our weights. In other words, the weights need to be updated in such a way that the loss decreases while the neural network is training (well, that is what we hope for). All this magic is possible with the gradient descent algorithm which is declared in the `backward` function. Take a minute or two to inspect what is happening in the code below:\n\n```python\ndef backward(self, X, y, o):\n self.o_error = y - o # error in output\n self.o_delta = self.o_error * self.sigmoidPrime(o) \n self.z2_error = torch.matmul(self.o_delta, torch.t(self.W2))\n self.z2_delta = self.z2_error * self.sigmoidPrime(self.z2)\n self.W1 += torch.matmul(torch.t(X), self.z2_delta)\n self.W2 += torch.matmul(torch.t(self.z2), self.o_delta)\n```\n\nNotice that we are performing a lot of matrix multiplications along with the transpose operations via the `torch.matmul(...)` and `torch.t(...)` operations, respectively. The rest is simply gradient descent -- there is nothing to it.", "_____no_output_____" ], [ "## Training\nAll that is left now is to train the neural network. First we create an instance of the computation graph we have just built:\n\n```python\nNN = Neural_Network()\n```\n\nThen we train the model for `1000` rounds. Notice that in PyTorch `NN(X)` automatically calls the `forward` function so there is no need to explicitly call `NN.forward(X)`. \n\nAfter we have obtained the predicted output for ever round of training, we compute the loss, with the following code:\n\n```python\ntorch.mean((y - NN(X))**2).detach().item()\n```\n\nThe next step is to start the training (foward + backward) via `NN.train(X, y)`. After we have trained the neural network, we can store the model and output the predicted value of the single instance we declared in the beginning, `xPredicted`. 
\n\nLet's train!", "_____no_output_____" ] ], [ [ "NN = Neural_Network()\nfor i in range(1000): # trains the NN 1,000 times\n print (\"#\" + str(i) + \" Loss: \" + str(torch.mean((y - NN(X))**2).detach().item())) # mean sum squared loss\n NN.train(X, y)\nNN.saveWeights(NN)\nNN.predict()", "#0 Loss: 0.28770461678504944\n#1 Loss: 0.19437099993228912\n#2 Loss: 0.129642054438591\n#3 Loss: 0.08898762613534927\n#4 Loss: 0.0638350322842598\n#5 Loss: 0.04783045873045921\n#6 Loss: 0.037219222635030746\n#7 Loss: 0.029889358207583427\n#8 Loss: 0.024637090042233467\n#9 Loss: 0.020752854645252228\n#10 Loss: 0.01780204102396965\n#11 Loss: 0.015508432872593403\n#12 Loss: 0.013690348714590073\n#13 Loss: 0.012224685400724411\n#14 Loss: 0.011025689542293549\n#15 Loss: 0.0100322300568223\n#16 Loss: 0.009199750609695911\n#17 Loss: 0.008495191112160683\n#18 Loss: 0.007893583737313747\n#19 Loss: 0.007375772576779127\n#20 Loss: 0.006926907692104578\n#21 Loss: 0.006535270716995001\n#22 Loss: 0.006191555876284838\n#23 Loss: 0.005888286512345076\n#24 Loss: 0.005619380157440901\n#25 Loss: 0.0053798723965883255\n#26 Loss: 0.005165652371942997\n#27 Loss: 0.004973314236849546\n#28 Loss: 0.0048000202514231205\n#29 Loss: 0.004643348511308432\n#30 Loss: 0.00450127711519599\n#31 Loss: 0.004372074268758297\n#32 Loss: 0.004254247527569532\n#33 Loss: 0.004146536346524954\n#34 Loss: 0.004047831054776907\n#35 Loss: 0.003957169130444527\n#36 Loss: 0.0038737261202186346\n#37 Loss: 0.0037967758253216743\n#38 Loss: 0.0037256714422255754\n#39 Loss: 0.0036598537117242813\n#40 Loss: 0.003598827635869384\n#41 Loss: 0.0035421468783169985\n#42 Loss: 0.0034894247073680162\n#43 Loss: 0.003440307453274727\n#44 Loss: 0.0033944963943213224\n#45 Loss: 0.003351695602759719\n#46 Loss: 0.003311669686809182\n#47 Loss: 0.003274182789027691\n#48 Loss: 0.0032390293199568987\n#49 Loss: 0.0032060390803962946\n#50 Loss: 0.0031750358175486326\n#51 Loss: 0.0031458677258342505\n#52 Loss: 0.003118406282737851\n#53 Loss: 0.0030925225000828505\n#54 Loss: 0.0030680971685796976\n#55 Loss: 0.0030450366903096437\n#56 Loss: 0.003023233264684677\n#57 Loss: 0.0030026088934391737\n#58 Loss: 0.002983089303597808\n#59 Loss: 0.0029645822942256927\n#60 Loss: 0.00294703827239573\n#61 Loss: 0.0029303862247616053\n#62 Loss: 0.002914572134613991\n#63 Loss: 0.0028995368629693985\n#64 Loss: 0.0028852447867393494\n#65 Loss: 0.002871639095246792\n#66 Loss: 0.002858673455193639\n#67 Loss: 0.0028463276103138924\n#68 Loss: 0.0028345445170998573\n#69 Loss: 0.0028233081102371216\n#70 Loss: 0.0028125671669840813\n#71 Loss: 0.002802313072606921\n#72 Loss: 0.0027925113681703806\n#73 Loss: 0.002783131552860141\n#74 Loss: 0.0027741591911762953\n#75 Loss: 0.00276556215249002\n#76 Loss: 0.0027573201805353165\n#77 Loss: 0.002749415347352624\n#78 Loss: 0.002741842297837138\n#79 Loss: 0.0027345670387148857\n#80 Loss: 0.00272758468054235\n#81 Loss: 0.0027208721730858088\n#82 Loss: 0.002714422531425953\n#83 Loss: 0.002708215033635497\n#84 Loss: 0.0027022461872547865\n#85 Loss: 0.0026964957360178232\n#86 Loss: 0.002690958557650447\n#87 Loss: 0.0026856244076043367\n#88 Loss: 0.002680474892258644\n#89 Loss: 0.002675510011613369\n#90 Loss: 0.002670713933184743\n#91 Loss: 0.0026660896837711334\n#92 Loss: 0.0026616165414452553\n#93 Loss: 0.0026572979986667633\n#94 Loss: 0.0026531198527663946\n#95 Loss: 0.002649075584486127\n#96 Loss: 0.002645164029672742\n#97 Loss: 0.0026413705199956894\n#98 Loss: 0.0026377029716968536\n#99 Loss: 0.002634142292663455\n#100 Loss: 0.00263069081120193\n#101 Loss: 
0.0026273438706994057\n#102 Loss: 0.0026240937877446413\n#103 Loss: 0.0026209382340312004\n#104 Loss: 0.002617868361994624\n#105 Loss: 0.002614888595417142\n#106 Loss: 0.0026119956746697426\n#107 Loss: 0.002609172137454152\n#108 Loss: 0.0026064326521009207\n#109 Loss: 0.002603760687634349\n#110 Loss: 0.00260116346180439\n#111 Loss: 0.002598624676465988\n#112 Loss: 0.0025961531791836023\n#113 Loss: 0.0025937433820217848\n#114 Loss: 0.0025913880672305822\n#115 Loss: 0.0025890925899147987\n#116 Loss: 0.002586849732324481\n#117 Loss: 0.002584656234830618\n#118 Loss: 0.0025825174525380135\n#119 Loss: 0.0025804194156080484\n#120 Loss: 0.0025783723685890436\n#121 Loss: 0.002576368162408471\n#122 Loss: 0.002574402838945389\n#123 Loss: 0.002572478959336877\n#124 Loss: 0.0025705902371555567\n#125 Loss: 0.0025687431916594505\n#126 Loss: 0.002566935494542122\n#127 Loss: 0.0025651559699326754\n#128 Loss: 0.002563410671427846\n#129 Loss: 0.0025617002975195646\n#130 Loss: 0.0025600148364901543\n#131 Loss: 0.0025583638343960047\n#132 Loss: 0.002556734485551715\n#133 Loss: 0.002555140992626548\n#134 Loss: 0.0025535663589835167\n#135 Loss: 0.0025520166382193565\n#136 Loss: 0.002550497418269515\n#137 Loss: 0.0025489996187388897\n#138 Loss: 0.002547516720369458\n#139 Loss: 0.0025460589677095413\n#140 Loss: 0.0025446258950978518\n#141 Loss: 0.0025432079564779997\n#142 Loss: 0.0025418128352612257\n#143 Loss: 0.0025404333136975765\n#144 Loss: 0.0025390759110450745\n#145 Loss: 0.002537728287279606\n#146 Loss: 0.0025364060420542955\n#147 Loss: 0.0025350917130708694\n#148 Loss: 0.002533797873184085\n#149 Loss: 0.002532513812184334\n#150 Loss: 0.0025312507059425116\n#151 Loss: 0.0025300011038780212\n#152 Loss: 0.0025287508033216\n#153 Loss: 0.0025275293737649918\n#154 Loss: 0.002526313764974475\n#155 Loss: 0.00252510909922421\n#156 Loss: 0.0025239146780222654\n#157 Loss: 0.0025227360893040895\n#158 Loss: 0.002521563321352005\n#159 Loss: 0.002520401030778885\n#160 Loss: 0.002519249450415373\n#161 Loss: 0.0025181034579873085\n#162 Loss: 0.0025169753935188055\n#163 Loss: 0.0025158498901873827\n#164 Loss: 0.0025147362612187862\n#165 Loss: 0.002513629151508212\n#166 Loss: 0.002512530190870166\n#167 Loss: 0.0025114361196756363\n#168 Loss: 0.0025103483349084854\n#169 Loss: 0.0025092700961977243\n#170 Loss: 0.0025081969797611237\n#171 Loss: 0.0025071338750422\n#172 Loss: 0.0025060747284442186\n#173 Loss: 0.0025050221011042595\n#174 Loss: 0.002503973664715886\n#175 Loss: 0.002502931747585535\n#176 Loss: 0.002501895884051919\n#177 Loss: 0.0025008656084537506\n#178 Loss: 0.00249984092079103\n#179 Loss: 0.002498818328604102\n#180 Loss: 0.002497798763215542\n#181 Loss: 0.0024967871140688658\n#182 Loss: 0.00249578058719635\n#183 Loss: 0.0024947759229689837\n#184 Loss: 0.0024937766138464212\n#185 Loss: 0.002492778468877077\n#186 Loss: 0.0024917826522141695\n#187 Loss: 0.0024907945189625025\n#188 Loss: 0.002489812206476927\n#189 Loss: 0.002488828031346202\n#190 Loss: 0.0024878503754734993\n#191 Loss: 0.0024868694599717855\n#192 Loss: 0.002485897159203887\n#193 Loss: 0.002484926488250494\n#194 Loss: 0.0024839574471116066\n#195 Loss: 0.0024829902686178684\n#196 Loss: 0.002482031239196658\n#197 Loss: 0.0024810675531625748\n#198 Loss: 0.002480114810168743\n#199 Loss: 0.00247915368527174\n#200 Loss: 0.0024782009422779083\n#201 Loss: 0.002477245405316353\n#202 Loss: 0.0024762984830886126\n#203 Loss: 0.002475348999723792\n#204 Loss: 0.002474398585036397\n#205 Loss: 0.0024734551552683115\n#206 Loss: 0.002472516382113099\n#207 Loss: 
0.002471569227054715\n#208 Loss: 0.002470628125593066\n#209 Loss: 0.0024696916807442904\n#210 Loss: 0.002468749647960067\n#211 Loss: 0.0024678176268935204\n#212 Loss: 0.0024668758269399405\n#213 Loss: 0.002465949160978198\n#214 Loss: 0.0024650150444358587\n#215 Loss: 0.00246407906524837\n#216 Loss: 0.002463151467964053\n#217 Loss: 0.002462216652929783\n#218 Loss: 0.0024612878914922476\n#219 Loss: 0.002460360061377287\n#220 Loss: 0.0024594322312623262\n#221 Loss: 0.0024585050996392965\n#222 Loss: 0.002457576571032405\n#223 Loss: 0.0024566520005464554\n#224 Loss: 0.002455727430060506\n#225 Loss: 0.002454800298437476\n#226 Loss: 0.002453884808346629\n#227 Loss: 0.0024529551155865192\n#228 Loss: 0.002452034503221512\n#229 Loss: 0.002451109467074275\n#230 Loss: 0.0024501883890479803\n#231 Loss: 0.002449269639328122\n#232 Loss: 0.0024483499582856894\n#233 Loss: 0.002447424689307809\n#234 Loss: 0.0024465022142976522\n#235 Loss: 0.0024455797392874956\n#236 Loss: 0.0024446637835353613\n#237 Loss: 0.002443745033815503\n#238 Loss: 0.0024428225588053465\n#239 Loss: 0.0024419049732387066\n#240 Loss: 0.002440983895212412\n#241 Loss: 0.0024400672409683466\n#242 Loss: 0.002439146162942052\n#243 Loss: 0.0024382262490689754\n#244 Loss: 0.002437308896332979\n#245 Loss: 0.0024363857228308916\n#246 Loss: 0.002435472561046481\n#247 Loss: 0.0024345542769879103\n#248 Loss: 0.0024336313363164663\n#249 Loss: 0.00243271142244339\n#250 Loss: 0.00243179383687675\n#251 Loss: 0.0024308778811246157\n#252 Loss: 0.0024299558717757463\n#253 Loss: 0.0024290340952575207\n#254 Loss: 0.002428111620247364\n#255 Loss: 0.002427193336188793\n#256 Loss: 0.002426273887977004\n#257 Loss: 0.002425355603918433\n#258 Loss: 0.002424436155706644\n#259 Loss: 0.002423514612019062\n#260 Loss: 0.002422596327960491\n#261 Loss: 0.0024216733872890472\n#262 Loss: 0.0024207504466176033\n#263 Loss: 0.002419829135760665\n#264 Loss: 0.0024189057294279337\n#265 Loss: 0.0024179841857403517\n#266 Loss: 0.002417063107714057\n#267 Loss: 0.0024161438923329115\n#268 Loss: 0.0024152155965566635\n#269 Loss: 0.0024142952170222998\n#270 Loss: 0.0024133676197379827\n#271 Loss: 0.002412450732663274\n#272 Loss: 0.002411528956145048\n#273 Loss: 0.0024105983320623636\n#274 Loss: 0.0024096802808344364\n#275 Loss: 0.0024087547790259123\n#276 Loss: 0.0024078262504190207\n#277 Loss: 0.0024068995844572783\n#278 Loss: 0.0024059752468019724\n#279 Loss: 0.002405051840469241\n#280 Loss: 0.002404116792604327\n#281 Loss: 0.0024031943175941706\n#282 Loss: 0.0024022667203098536\n#283 Loss: 0.002401341451331973\n#284 Loss: 0.002400410594418645\n#285 Loss: 0.0023994811344891787\n#286 Loss: 0.0023985551670193672\n#287 Loss: 0.0023976238444447517\n#288 Loss: 0.0023966955486685038\n#289 Loss: 0.0023957621306180954\n#290 Loss: 0.002394832205027342\n#291 Loss: 0.0023939006496220827\n#292 Loss: 0.002392966765910387\n#293 Loss: 0.00239203916862607\n#294 Loss: 0.002391106216236949\n#295 Loss: 0.0023901707027107477\n#296 Loss: 0.002389240777119994\n#297 Loss: 0.0023883050307631493\n#298 Loss: 0.0023873704485595226\n#299 Loss: 0.0023864342365413904\n#300 Loss: 0.0023854991886764765\n#301 Loss: 0.0023845701944082975\n#302 Loss: 0.0023836297914385796\n#303 Loss: 0.0023826900869607925\n#304 Loss: 0.0023817545734345913\n#305 Loss: 0.002380818361416459\n#306 Loss: 0.0023798795882612467\n#307 Loss: 0.0023789377883076668\n#308 Loss: 0.0023780011106282473\n#309 Loss: 0.0023770590778440237\n#310 Loss: 0.0023761214688420296\n#311 Loss: 0.0023751859553158283\n#312 Loss: 0.0023742406629025936\n#313 
Loss: 0.002373295836150646\n#314 Loss: 0.0023723554331809282\n#315 Loss: 0.002371413866057992\n#316 Loss: 0.0023704750929027796\n#317 Loss: 0.002369531663134694\n#318 Loss: 0.0023685868363827467\n#319 Loss: 0.002367644337937236\n#320 Loss: 0.002366698579862714\n#321 Loss: 0.0023657495621591806\n#322 Loss: 0.0023648033384233713\n#323 Loss: 0.002363859675824642\n#324 Loss: 0.0023629090283066034\n#325 Loss: 0.0023619639687240124\n#326 Loss: 0.0023610175121575594\n#327 Loss: 0.002360069891437888\n#328 Loss: 0.002359122270718217\n#329 Loss: 0.0023581702262163162\n#330 Loss: 0.0023572223726660013\n#331 Loss: 0.002356275450438261\n#332 Loss: 0.0023553166538476944\n#333 Loss: 0.0023543667048215866\n#334 Loss: 0.0023534176871180534\n#335 Loss: 0.002352464245632291\n#336 Loss: 0.0023515131324529648\n#337 Loss: 0.0023505568969994783\n#338 Loss: 0.0023496015928685665\n#339 Loss: 0.002348652807995677\n#340 Loss: 0.002347696339711547\n#341 Loss: 0.0023467380087822676\n#342 Loss: 0.0023457861971110106\n#343 Loss: 0.0023448301944881678\n#344 Loss: 0.0023438704665750265\n#345 Loss: 0.002342912135645747\n#346 Loss: 0.002341957064345479\n#347 Loss: 0.0023409996647387743\n#348 Loss: 0.0023400387726724148\n#349 Loss: 0.002339078113436699\n#350 Loss: 0.002338117454200983\n#351 Loss: 0.0023371621500700712\n#352 Loss: 0.0023361986968666315\n#353 Loss: 0.00233523640781641\n#354 Loss: 0.0023342801723629236\n#355 Loss: 0.002333313226699829\n#356 Loss: 0.002332353265956044\n#357 Loss: 0.002331388648599386\n#358 Loss: 0.0023304217029362917\n#359 Loss: 0.0023294605780392885\n#360 Loss: 0.002328496426343918\n#361 Loss: 0.002327530412003398\n#362 Loss: 0.0023265639320015907\n#363 Loss: 0.0023255993146449327\n#364 Loss: 0.0023246288765221834\n#365 Loss: 0.0023236607667058706\n#366 Loss: 0.002322700573131442\n#367 Loss: 0.0023217289708554745\n#368 Loss: 0.0023207550402730703\n#369 Loss: 0.002319787396118045\n#370 Loss: 0.002318824175745249\n#371 Loss: 0.0023178488481789827\n#372 Loss: 0.002316881902515888\n#373 Loss: 0.0023159075062721968\n#374 Loss: 0.002314941259101033\n#375 Loss: 0.0023139675613492727\n#376 Loss: 0.0023129950277507305\n#377 Loss: 0.0023120215628296137\n#378 Loss: 0.002311046002432704\n#379 Loss: 0.002310073934495449\n#380 Loss: 0.002309101400896907\n#381 Loss: 0.0023081284016370773\n#382 Loss: 0.00230714725330472\n#383 Loss: 0.00230617169290781\n#384 Loss: 0.0023051972966641188\n#385 Loss: 0.002304219640791416\n#386 Loss: 0.0023032415192574263\n#387 Loss: 0.002302265027537942\n#388 Loss: 0.0023012871388345957\n#389 Loss: 0.002300310181453824\n#390 Loss: 0.002299328101798892\n#391 Loss: 0.0022983483504503965\n#392 Loss: 0.0022973709274083376\n#393 Loss: 0.002296391176059842\n#394 Loss: 0.002295407932251692\n#395 Loss: 0.00229442841373384\n#396 Loss: 0.002293441677466035\n#397 Loss: 0.0022924619261175394\n#398 Loss: 0.0022914784494787455\n#399 Loss: 0.0022904963698238134\n#400 Loss: 0.0022895135916769505\n#401 Loss: 0.0022885303478688\n#402 Loss: 0.0022875459399074316\n#403 Loss: 0.0022865592036396265\n#404 Loss: 0.0022855724673718214\n#405 Loss: 0.0022845915518701077\n#406 Loss: 0.002283601788803935\n#407 Loss: 0.002282612957060337\n#408 Loss: 0.002281626919284463\n#409 Loss: 0.0022806443739682436\n#410 Loss: 0.0022796487901359797\n#411 Loss: 0.0022786634508520365\n#412 Loss: 0.0022776739206165075\n#413 Loss: 0.0022766822949051857\n#414 Loss: 0.0022756929975003004\n#415 Loss: 0.0022747062612324953\n#416 Loss: 0.00227371440269053\n#417 Loss: 0.0022727230098098516\n#418 Loss: 0.002271731849759817\n#419 
Loss: 0.0022707392927259207\n#420 Loss: 0.002269746968522668\n#421 Loss: 0.002268751384690404\n#422 Loss: 0.002267759060487151\n#423 Loss: 0.0022667646408081055\n#424 Loss: 0.0022657769732177258\n#425 Loss: 0.002264777896925807\n#426 Loss: 0.002263784408569336\n#427 Loss: 0.0022627897560596466\n#428 Loss: 0.0022617937065660954\n#429 Loss: 0.002260798355564475\n#430 Loss: 0.0022597969509661198\n#431 Loss: 0.002258802531287074\n#432 Loss: 0.0022578088100999594\n#433 Loss: 0.0022568099666386843\n#434 Loss: 0.002255811123177409\n#435 Loss: 0.0022548120468854904\n#436 Loss: 0.0022538129705935717\n#437 Loss: 0.0022528113331645727\n#438 Loss: 0.002251812256872654\n#439 Loss: 0.00225081411190331\n#440 Loss: 0.0022498099133372307\n#441 Loss: 0.002248812699690461\n#442 Loss: 0.002247813157737255\n#443 Loss: 0.0022468070965260267\n#444 Loss: 0.002245804527774453\n#445 Loss: 0.0022448061499744654\n#446 Loss: 0.002243800787255168\n#447 Loss: 0.0022427986841648817\n#448 Loss: 0.0022417923901230097\n#449 Loss: 0.0022407902870327234\n#450 Loss: 0.0022397860884666443\n#451 Loss: 0.002238777931779623\n#452 Loss: 0.002237774431705475\n#453 Loss: 0.00223676860332489\n#454 Loss: 0.0022357627749443054\n#455 Loss: 0.002234755316749215\n#456 Loss: 0.0022337529808282852\n#457 Loss: 0.0022327450569719076\n#458 Loss: 0.0022317382972687483\n#459 Loss: 0.002230728277936578\n#460 Loss: 0.0022297168616205454\n#461 Loss: 0.0022287091705948114\n#462 Loss: 0.002227703807875514\n#463 Loss: 0.002226694021373987\n#464 Loss: 0.002225684467703104\n#465 Loss: 0.0022246765438467264\n#466 Loss: 0.0022236653603613377\n#467 Loss: 0.0022226530127227306\n#468 Loss: 0.002221642527729273\n#469 Loss: 0.0022206297144293785\n#470 Loss: 0.0022196185309439898\n#471 Loss: 0.0022186103742569685\n#472 Loss: 0.0022175933700054884\n#473 Loss: 0.0022165849804878235\n#474 Loss: 0.0022155700717121363\n#475 Loss: 0.0022145553957670927\n#476 Loss: 0.0022135439794510603\n#477 Loss: 0.0022125281393527985\n#478 Loss: 0.002211514627560973\n#479 Loss: 0.002210496924817562\n#480 Loss: 0.0022094829473644495\n#481 Loss: 0.0022084659431129694\n#482 Loss: 0.0022074568551033735\n#483 Loss: 0.002206437522545457\n#484 Loss: 0.0022054200526326895\n#485 Loss: 0.0022044044453650713\n#486 Loss: 0.0022033853456377983\n#487 Loss: 0.0022023695055395365\n#488 Loss: 0.002201352035626769\n#489 Loss: 0.0022003341000527143\n#490 Loss: 0.002199317794293165\n#491 Loss: 0.0021982965990900993\n#492 Loss: 0.0021972774993628263\n#493 Loss: 0.00219626072794199\n#494 Loss: 0.0021952392999082804\n#495 Loss: 0.002194217639043927\n#496 Loss: 0.002193200634792447\n#497 Loss: 0.002192180836573243\n#498 Loss: 0.0021911589428782463\n#499 Loss: 0.0021901384461671114\n#500 Loss: 0.002189117018133402\n#501 Loss: 0.0021880920976400375\n#502 Loss: 0.0021870729979127645\n#503 Loss: 0.0021860499400645494\n#504 Loss: 0.0021850315388292074\n#505 Loss: 0.002184005454182625\n#506 Loss: 0.0021829840261489153\n#507 Loss: 0.002181959105655551\n#508 Loss: 0.0021809397730976343\n#509 Loss: 0.002179911592975259\n#510 Loss: 0.002178889000788331\n#511 Loss: 0.0021778629161417484\n#512 Loss: 0.002176836598664522\n#513 Loss: 0.002175812376663089\n#514 Loss: 0.0021747888531535864\n#515 Loss: 0.0021737609058618546\n#516 Loss: 0.002172738080844283\n#517 Loss: 0.002171711064875126\n#518 Loss: 0.0021706840489059687\n#519 Loss: 0.0021696598269045353\n#520 Loss: 0.0021686323452740908\n#521 Loss: 0.0021676046308130026\n#522 Loss: 0.0021665773820132017\n#523 Loss: 0.0021655478049069643\n#524 Loss: 
0.0021645205561071634\n#525 Loss: 0.002163497731089592\n#526 Loss: 0.002162465127184987\n#527 Loss: 0.0021614336874336004\n#528 Loss: 0.0021604085341095924\n#529 Loss: 0.0021593787241727114\n#530 Loss: 0.0021583528723567724\n#531 Loss: 0.0021573195699602365\n#532 Loss: 0.0021562918554991484\n#533 Loss: 0.0021552571561187506\n#534 Loss: 0.0021542287431657314\n#535 Loss: 0.0021532000973820686\n#536 Loss: 0.0021521716844290495\n#537 Loss: 0.0021511383820325136\n#538 Loss: 0.0021501071751117706\n#539 Loss: 0.0021490773651748896\n#540 Loss: 0.0021480440627783537\n#541 Loss: 0.002147009363397956\n#542 Loss: 0.0021459797862917185\n#543 Loss: 0.002144948346540332\n#544 Loss: 0.002143915044143796\n#545 Loss: 0.0021428829059004784\n#546 Loss: 0.002141848672181368\n#547 Loss: 0.0021408156026154757\n#548 Loss: 0.002139780670404434\n#549 Loss: 0.0021387485321611166\n#550 Loss: 0.002137715695425868\n#551 Loss: 0.0021366847213357687\n#552 Loss: 0.0021356476936489344\n#553 Loss: 0.0021346136927604675\n#554 Loss: 0.0021335785277187824\n#555 Loss: 0.002132538938894868\n#556 Loss: 0.002131509128957987\n#557 Loss: 0.002130476525053382\n#558 Loss: 0.0021294394973665476\n#559 Loss: 0.002128403866663575\n#560 Loss: 0.002127366838976741\n#561 Loss: 0.0021263323724269867\n#562 Loss: 0.002125295577570796\n#563 Loss: 0.002124261111021042\n#564 Loss: 0.0021232208237051964\n#565 Loss: 0.002122187288478017\n#566 Loss: 0.0021211470011621714\n#567 Loss: 0.002120112767443061\n#568 Loss: 0.002119072712957859\n#569 Loss: 0.0021180338226258755\n#570 Loss: 0.00211700308136642\n#571 Loss: 0.0021159613970667124\n#572 Loss: 0.0021149280946701765\n#573 Loss: 0.0021138915326446295\n#574 Loss: 0.0021128482185304165\n#575 Loss: 0.0021118095610290766\n#576 Loss: 0.0021107716020196676\n#577 Loss: 0.002109734108671546\n#578 Loss: 0.0021087005734443665\n#579 Loss: 0.002107657492160797\n#580 Loss: 0.00210661836899817\n#581 Loss: 0.002105577616021037\n#582 Loss: 0.0021045382600277662\n#583 Loss: 0.002103500533849001\n#584 Loss: 0.0021024595480412245\n#585 Loss: 0.0021014243829995394\n#586 Loss: 0.002100378042086959\n#587 Loss: 0.002099341945722699\n#588 Loss: 0.00209829886443913\n#589 Loss: 0.0020972639322280884\n#590 Loss: 0.0020962206181138754\n#591 Loss: 0.002095181494951248\n#592 Loss: 0.0020941428374499083\n#593 Loss: 0.002093098359182477\n#594 Loss: 0.002092057839035988\n#595 Loss: 0.0020910180173814297\n#596 Loss: 0.002089978661388159\n#597 Loss: 0.0020889334846287966\n#598 Loss: 0.0020878936629742384\n#599 Loss: 0.0020868529099971056\n#600 Loss: 0.002085815416648984\n#601 Loss: 0.0020847702398896217\n#602 Loss: 0.0020837283227592707\n#603 Loss: 0.0020826871041208506\n#604 Loss: 0.0020816465839743614\n#605 Loss: 0.002080598147585988\n#606 Loss: 0.002079556928947568\n#607 Loss: 0.0020785192027688026\n#608 Loss: 0.0020774772856384516\n#609 Loss: 0.002076430944725871\n#610 Loss: 0.00207538646645844\n#611 Loss: 0.00207435037009418\n#612 Loss: 0.002073307754471898\n#613 Loss: 0.002072261879220605\n#614 Loss: 0.0020712194964289665\n#615 Loss: 0.0020701782777905464\n#616 Loss: 0.0020691361278295517\n#617 Loss: 0.0020680923480540514\n#618 Loss: 0.0020670518279075623\n#619 Loss: 0.0020660050213336945\n#620 Loss: 0.0020649584475904703\n#621 Loss: 0.0020639190915971994\n#622 Loss: 0.002062877407297492\n#623 Loss: 0.0020618324633687735\n#624 Loss: 0.0020607870537787676\n#625 Loss: 0.00205974536947906\n#626 Loss: 0.0020587043836712837\n#627 Loss: 0.0020576564129441977\n#628 Loss: 0.0020566147286444902\n#629 Loss: 0.002055570250377059\n#630 
Loss: 0.0020545274019241333\n#631 Loss: 0.0020534859504550695\n#632 Loss: 0.002052436349913478\n#633 Loss: 0.0020513960625976324\n#634 Loss: 0.0020503487903624773\n#635 Loss: 0.0020493092015385628\n#636 Loss: 0.002048263093456626\n#637 Loss: 0.002047223038971424\n#638 Loss: 0.002046172507107258\n#639 Loss: 0.002045132452622056\n#640 Loss: 0.002044085180386901\n#641 Loss: 0.002043043961748481\n#642 Loss: 0.0020420013461261988\n#643 Loss: 0.0020409554708749056\n#644 Loss: 0.002039908664301038\n#645 Loss: 0.002038867911323905\n#646 Loss: 0.0020378208719193935\n#647 Loss: 0.0020367794204503298\n#648 Loss: 0.0020357321482151747\n#649 Loss: 0.002034691860899329\n#650 Loss: 0.002033643191680312\n#651 Loss: 0.002032601274549961\n#652 Loss: 0.002031555864959955\n#653 Loss: 0.0020305109210312366\n#654 Loss: 0.002029466675594449\n#655 Loss: 0.002028421498835087\n#656 Loss: 0.002027378184720874\n#657 Loss: 0.0020263351034373045\n#658 Loss: 0.0020252885296940804\n#659 Loss: 0.0020242466125637293\n#660 Loss: 0.002023200271651149\n#661 Loss: 0.002022160217165947\n#662 Loss: 0.0020211131777614355\n#663 Loss: 0.0020200731232762337\n#664 Loss: 0.0020190232899039984\n#665 Loss: 0.0020179767161607742\n#666 Loss: 0.002016937592998147\n#667 Loss: 0.002015892183408141\n#668 Loss: 0.0020148518960922956\n#669 Loss: 0.0020138081163167953\n#670 Loss: 0.0020127587486058474\n#671 Loss: 0.0020117172971367836\n#672 Loss: 0.0020106742158532143\n#673 Loss: 0.002009629737585783\n#674 Loss: 0.002008582465350628\n#675 Loss: 0.0020075414795428514\n#676 Loss: 0.002006495138630271\n#677 Loss: 0.0020054553169757128\n#678 Loss: 0.002004409907385707\n#679 Loss: 0.002003363100811839\n#680 Loss: 0.002002324676141143\n#681 Loss: 0.002001277869567275\n#682 Loss: 0.002000238513574004\n#683 Loss: 0.001999191241338849\n#684 Loss: 0.0019981495570391417\n#685 Loss: 0.0019971048459410667\n#686 Loss: 0.0019960617646574974\n#687 Loss: 0.0019950189162045717\n#688 Loss: 0.0019939783960580826\n#689 Loss: 0.001992932753637433\n#690 Loss: 0.001991888275370002\n#691 Loss: 0.0019908458925783634\n#692 Loss: 0.001989804906770587\n#693 Loss: 0.0019887599628418684\n#694 Loss: 0.0019877159502357244\n#695 Loss: 0.001986677525565028\n#696 Loss: 0.0019856367725878954\n#697 Loss: 0.001984592527151108\n#698 Loss: 0.001983546419069171\n#699 Loss: 0.001982505898922682\n#700 Loss: 0.0019814646802842617\n#701 Loss: 0.0019804220646619797\n#702 Loss: 0.0019793810788542032\n#703 Loss: 0.0019783375319093466\n#704 Loss: 0.0019772977102547884\n#705 Loss: 0.0019762550946325064\n#706 Loss: 0.0019752129446715117\n#707 Loss: 0.001974171493202448\n#708 Loss: 0.001973131438717246\n#709 Loss: 0.001972092781215906\n#710 Loss: 0.0019710464403033257\n#711 Loss: 0.0019700077828019857\n#712 Loss: 0.001968963770195842\n#713 Loss: 0.0019679246470332146\n#714 Loss: 0.0019668852910399437\n#715 Loss: 0.001965844538062811\n#716 Loss: 0.001964807277545333\n#717 Loss: 0.0019637665245682\n#718 Loss: 0.0019627264700829983\n#719 Loss: 0.00196168408729136\n#720 Loss: 0.00196064286865294\n#721 Loss: 0.0019596030469983816\n#722 Loss: 0.001958560897037387\n#723 Loss: 0.001957525731995702\n#724 Loss: 0.001956489635631442\n#725 Loss: 0.001955445623025298\n#726 Loss: 0.001954407896846533\n#727 Loss: 0.0019533671438694\n#728 Loss: 0.0019523290684446692\n#729 Loss: 0.0019512904109433293\n#730 Loss: 0.0019502503564581275\n#731 Loss: 0.0019492128631100059\n#732 Loss: 0.0019481779308989644\n#733 Loss: 0.0019471339182928205\n#734 Loss: 0.0019461024785414338\n#735 Loss: 0.0019450596300885081\n#736 Loss: 
0.0019440216710790992\n#737 Loss: 0.001942987204529345\n#738 Loss: 0.0019419504096731544\n#739 Loss: 0.0019409122178331017\n#740 Loss: 0.0019398737931624055\n#741 Loss: 0.0019388411892578006\n#742 Loss: 0.001937802298925817\n#743 Loss: 0.001936764339916408\n#744 Loss: 0.001935729756951332\n#745 Loss: 0.0019346913322806358\n#746 Loss: 0.0019336584955453873\n#747 Loss: 0.0019326211186125875\n#748 Loss: 0.001931585487909615\n#749 Loss: 0.0019305492751300335\n#750 Loss: 0.0019295121310278773\n#751 Loss: 0.0019284767331555486\n#752 Loss: 0.0019274475052952766\n#753 Loss: 0.0019264090806245804\n#754 Loss: 0.0019253772916272283\n#755 Loss: 0.0019243452697992325\n#756 Loss: 0.0019233074272051454\n#757 Loss: 0.0019222754053771496\n#758 Loss: 0.0019212419865652919\n#759 Loss: 0.0019202110124751925\n#760 Loss: 0.0019191773608326912\n#761 Loss: 0.0019181432435289025\n#762 Loss: 0.0019171085441485047\n#763 Loss: 0.0019160775700584054\n#764 Loss: 0.0019150450825691223\n#765 Loss: 0.0019140088697895408\n#766 Loss: 0.0019129784777760506\n#767 Loss: 0.0019119485514238477\n#768 Loss: 0.0019109140848740935\n#769 Loss: 0.0019098850898444653\n#770 Loss: 0.001908852718770504\n#771 Loss: 0.001907822792418301\n#772 Loss: 0.0019067925168201327\n#773 Loss: 0.0019057630561292171\n#774 Loss: 0.0019047335954383016\n#775 Loss: 0.0019037051824852824\n#776 Loss: 0.0019026693189516664\n#777 Loss: 0.0019016433507204056\n#778 Loss: 0.0019006148213520646\n#779 Loss: 0.0018995892023667693\n#780 Loss: 0.0018985569477081299\n#781 Loss: 0.0018975288840010762\n#782 Loss: 0.0018965002382174134\n#783 Loss: 0.0018954715924337506\n#784 Loss: 0.0018944436451420188\n#785 Loss: 0.0018934140680357814\n#786 Loss: 0.0018923920579254627\n#787 Loss: 0.0018913644598796964\n#788 Loss: 0.001890333485789597\n#789 Loss: 0.0018893079832196236\n#790 Loss: 0.0018882853910326958\n#791 Loss: 0.001887254766188562\n#792 Loss: 0.0018862345023080707\n#793 Loss: 0.0018852058565244079\n#794 Loss: 0.00188418326433748\n#795 Loss: 0.0018831556662917137\n#796 Loss: 0.0018821310950443149\n#797 Loss: 0.0018811067566275597\n#798 Loss: 0.001880083349533379\n#799 Loss: 0.001879060291685164\n#800 Loss: 0.0018780353711917996\n#801 Loss: 0.0018770135939121246\n#802 Loss: 0.0018759918166324496\n#803 Loss: 0.0018749730661511421\n#804 Loss: 0.0018739477964118123\n#805 Loss: 0.0018729200819507241\n#806 Loss: 0.0018719009822234511\n#807 Loss: 0.001870879321359098\n#808 Loss: 0.001869861502200365\n#809 Loss: 0.0018688408890739083\n#810 Loss: 0.001867820625193417\n#811 Loss: 0.0018667984986677766\n#812 Loss: 0.0018657720647752285\n#813 Loss: 0.001864760648459196\n#814 Loss: 0.0018637363100424409\n#815 Loss: 0.0018627209356054664\n#816 Loss: 0.0018617023015394807\n#817 Loss: 0.0018606797093525529\n#818 Loss: 0.0018596658483147621\n#819 Loss: 0.0018586452351883054\n#820 Loss: 0.0018576303264126182\n#821 Loss: 0.001856614020653069\n#822 Loss: 0.001855594920925796\n#823 Loss: 0.0018545795464888215\n#824 Loss: 0.001853560097515583\n#825 Loss: 0.0018525446066632867\n#826 Loss: 0.0018515288829803467\n#827 Loss: 0.00185050955042243\n#828 Loss: 0.0018494967371225357\n#829 Loss: 0.0018484825268387794\n#830 Loss: 0.001847467734478414\n#831 Loss: 0.0018464555032551289\n#832 Loss: 0.0018454398959875107\n#833 Loss: 0.0018444285960868\n#834 Loss: 0.0018434150842949748\n#835 Loss: 0.0018424022709950805\n#836 Loss: 0.001841390854679048\n#837 Loss: 0.0018403776921331882\n#838 Loss: 0.0018393672071397305\n#839 Loss: 0.00183835718780756\n#840 Loss: 0.0018373435596004128\n#841 Loss: 
0.001836334471590817\n#842 Loss: 0.0018353263149037957\n#843 Loss: 0.0018343138508498669\n#844 Loss: 0.0018333062762394547\n#845 Loss: 0.001832296489737928\n#846 Loss: 0.0018312829779461026\n#847 Loss: 0.0018302792450413108\n#848 Loss: 0.0018292715540155768\n#849 Loss: 0.0018282626988366246\n#850 Loss: 0.0018272522138431668\n#851 Loss: 0.001826247200369835\n#852 Loss: 0.0018252409063279629\n#853 Loss: 0.001824233098886907\n#854 Loss: 0.0018232259899377823\n#855 Loss: 0.001822225865907967\n#856 Loss: 0.0018212157301604748\n#857 Loss: 0.0018202122300863266\n#858 Loss: 0.0018192125717177987\n#859 Loss: 0.0018182039493694901\n#860 Loss: 0.0018171994015574455\n#861 Loss: 0.0018161969492211938\n#862 Loss: 0.00181519181933254\n#863 Loss: 0.0018141911132261157\n#864 Loss: 0.001813187263906002\n#865 Loss: 0.0018121921457350254\n#866 Loss: 0.0018111892277374864\n#867 Loss: 0.0018101868918165565\n#868 Loss: 0.0018091824604198337\n#869 Loss: 0.001808184664696455\n#870 Loss: 0.0018071848899126053\n#871 Loss: 0.0018061831360682845\n#872 Loss: 0.0018051863880828023\n#873 Loss: 0.0018041870789602399\n#874 Loss: 0.0018031877698376775\n#875 Loss: 0.0018021933501586318\n#876 Loss: 0.0018011946231126785\n#877 Loss: 0.0018001968273892999\n#878 Loss: 0.0017991961212828755\n#879 Loss: 0.001798199606128037\n#880 Loss: 0.0017972056521102786\n#881 Loss: 0.0017962086712941527\n#882 Loss: 0.0017952205380424857\n#883 Loss: 0.0017942209960892797\n#884 Loss: 0.001793228555470705\n#885 Loss: 0.0017922349506989121\n#886 Loss: 0.001791241578757763\n#887 Loss: 0.001790247275494039\n#888 Loss: 0.0017892572795972228\n#889 Loss: 0.0017882628599181771\n#890 Loss: 0.0017872735625132918\n#891 Loss: 0.0017862803069874644\n#892 Loss: 0.001785286352969706\n#893 Loss: 0.0017842984525486827\n#894 Loss: 0.0017833089223131537\n#895 Loss: 0.0017823184607550502\n#896 Loss: 0.0017813298618420959\n#897 Loss: 0.0017803410300984979\n#898 Loss: 0.0017793524311855435\n#899 Loss: 0.0017783649964258075\n#900 Loss: 0.001777378492988646\n#901 Loss: 0.0017763897776603699\n#902 Loss: 0.0017754010623320937\n#903 Loss: 0.001774418051354587\n#904 Loss: 0.0017734314315021038\n#905 Loss: 0.0017724483041092753\n#906 Loss: 0.0017714608693495393\n#907 Loss: 0.0017704787896946073\n#908 Loss: 0.0017694927519187331\n#909 Loss: 0.0017685088096186519\n#910 Loss: 0.0017675244016572833\n#911 Loss: 0.001766547211445868\n#912 Loss: 0.001765563734807074\n#913 Loss: 0.001764580956660211\n#914 Loss: 0.0017636003904044628\n#915 Loss: 0.0017626197077333927\n#916 Loss: 0.0017616351833567023\n#917 Loss: 0.0017606564797461033\n#918 Loss: 0.0017596777761355042\n#919 Loss: 0.0017587020993232727\n#920 Loss: 0.001757721765898168\n#921 Loss: 0.0017567459726706147\n#922 Loss: 0.0017557647079229355\n#923 Loss: 0.0017547908937558532\n#924 Loss: 0.0017538117244839668\n#925 Loss: 0.0017528367461636662\n#926 Loss: 0.0017518624663352966\n#927 Loss: 0.0017508859746158123\n#928 Loss: 0.0017499076202511787\n#929 Loss: 0.0017489390447735786\n#930 Loss: 0.0017479656962677836\n#931 Loss: 0.0017469911836087704\n#932 Loss: 0.0017460188828408718\n#933 Loss: 0.0017450453015044332\n#934 Loss: 0.00174407206941396\n#935 Loss: 0.0017430986044928432\n#936 Loss: 0.0017421283992007375\n#937 Loss: 0.001741158775985241\n#938 Loss: 0.0017401917139068246\n#939 Loss: 0.0017392206937074661\n#940 Loss: 0.0017382544465363026\n#941 Loss: 0.0017372820293530822\n#942 Loss: 0.001736316829919815\n#943 Loss: 0.0017353454604744911\n#944 Loss: 0.0017343764193356037\n#945 Loss: 0.0017334137810394168\n#946 Loss: 
0.0017324457876384258\n#947 Loss: 0.0017314818687736988\n#948 Loss: 0.001730515738017857\n#949 Loss: 0.0017295492580160499\n#950 Loss: 0.0017285882495343685\n#951 Loss: 0.0017276207217946649\n#952 Loss: 0.0017266602953895926\n#953 Loss: 0.0017256977735087276\n#954 Loss: 0.0017247359501197934\n#955 Loss: 0.0017237764550372958\n#956 Loss: 0.0017228134674951434\n#957 Loss: 0.0017218533903360367\n#958 Loss: 0.0017208936624228954\n#959 Loss: 0.001719936146400869\n#960 Loss: 0.001718974090181291\n#961 Loss: 0.0017180143622681499\n#962 Loss: 0.001717058359645307\n#963 Loss: 0.0017161048017442226\n#964 Loss: 0.001715144026093185\n#965 Loss: 0.0017141870921477675\n#966 Loss: 0.0017132310895249248\n#967 Loss: 0.0017122785793617368\n#968 Loss: 0.0017113216454163194\n#969 Loss: 0.001710368786007166\n#970 Loss: 0.0017094146460294724\n#971 Loss: 0.001708458294160664\n#972 Loss: 0.0017075081123039126\n#973 Loss: 0.0017065554857254028\n#974 Loss: 0.0017056027427315712\n#975 Loss: 0.0017046512803062797\n#976 Loss: 0.0017037037760019302\n#977 Loss: 0.0017027502181008458\n#978 Loss: 0.0017018018988892436\n#979 Loss: 0.001700854511000216\n#980 Loss: 0.0016999054932966828\n#981 Loss: 0.001698957639746368\n#982 Loss: 0.0016980115324258804\n#983 Loss: 0.0016970612341538072\n#984 Loss: 0.0016961172223091125\n#985 Loss: 0.0016951701836660504\n#986 Loss: 0.001694221398793161\n#987 Loss: 0.0016932813450694084\n#988 Loss: 0.0016923333751037717\n#989 Loss: 0.0016913922736421227\n#990 Loss: 0.0016904502408578992\n#991 Loss: 0.0016895070439204574\n#992 Loss: 0.0016885654767975211\n#993 Loss: 0.001687621814198792\n#994 Loss: 0.0016866797814145684\n#995 Loss: 0.001685741706751287\n#996 Loss: 0.0016847997903823853\n#997 Loss: 0.0016838625306263566\n#998 Loss: 0.0016829235246405005\n#999 Loss: 0.0016819849843159318\nPredicted data based on trained weights: \nInput (scaled): \ntensor([0.5000, 1.0000])\nOutput: \ntensor([0.9505])\n" ] ], [ [ "The loss keeps decreasing, which means that the neural network is learning something. That's it. Congratulations! You have just learned how to create and train a neural network from scratch using PyTorch. There are so many things you can do with the shallow network we have just implemented. You can add more hidden layers or try to incorporate the bias terms for practice. I would love to see what you will build from here. Reach me out on [Twitter](https://twitter.com/omarsar0) if you have any further questions or leave your comments here. Until next time!", "_____no_output_____" ], [ "## References:\n- [PyTorch nn. Modules](https://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-custom-nn-modules)\n- [Build a Neural Network with Numpy](https://enlight.nyc/neural-network)\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e7fde3312083e913375e385783481cc33f67a64b
341,376
ipynb
Jupyter Notebook
examples/reinforcement_learning/hiv_simulator.ipynb
yoshavit/whynot
e33e56bae377b65fe87feac5c6246ae38f4586e8
[ "MIT" ]
376
2020-03-20T20:09:16.000Z
2022-03-29T09:53:33.000Z
examples/reinforcement_learning/hiv_simulator.ipynb
yoshavit/whynot
e33e56bae377b65fe87feac5c6246ae38f4586e8
[ "MIT" ]
5
2020-04-20T10:19:34.000Z
2021-11-03T09:36:28.000Z
examples/reinforcement_learning/hiv_simulator.ipynb
yoshavit/whynot
e33e56bae377b65fe87feac5c6246ae38f4586e8
[ "MIT" ]
41
2020-03-20T23:14:38.000Z
2022-03-09T06:02:01.000Z
1,009.988166
235,444
0.95268
[ [ [ "# Policy Gradients on HIV Simulator\n\nAn example of using WhyNot for reinforcement learning. WhyNot presents a unified interface with the [OpenAI gym](https://github.com/openai/gym), which makes it easy to run sequential decision making experiments on simulators in WhyNot.\n\nIn this notebook we compare four different policies on the WhyNot HIV simulator: a neural network policy trained by policy gradient, a random policy, the no treatment policy, and the max treatment policy.\n<br/><br/>", "_____no_output_____" ] ], [ [ "%load_ext autoreload\n%autoreload 2\n\nimport whynot.gym as gym\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport torch\n\nfrom scripts import utils\n%matplotlib inline", "/Users/miller_john/anaconda3/envs/whynot/lib/python3.7/site-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.\n import pandas.util.testing as tm\n" ] ], [ [ "## HIV Simulator\nThe HIV simulator is a differential equation simulator based on\n\nAdams, Brian Michael, et al. Dynamic multidrug therapies for HIV: Optimal and STI control approaches. North Carolina State University. Center for Research in Scientific Computation, 2004. APA.\n\nThis HIV model has a set of 6 state and 20 simulation parameters to parameterize the dynamics. \nThe state variables are:\n* uninfected_T1: uninfected CD4+ T-lymphocytes (cells/ml)\n* infected_T1: infected CD4+ T-lymphocytes (cells/ml)\n* uninfected_T2: uninfected macrophages (cells/ml)\n* infected_T2: infected macrophages (cells/ml)\n* free_virus: free virus (copies/ml)\n* immune_response: immune response CTL E (cells/ml)\n\n\nThe simulator models two types of drugs: RT (reverse-transcriptase inhibitors) inhibitors and protease inhibitors. RT inhibitors are more effective on CD4+ T-lymphocytes (T1) cells, while protease inhibitors are more effective on macrophages (T2) cells.\n\nThere are 4 possible actions:\n* Action 0: no drug, costs 0\n* Action 1: protease inhibitor only, costs 1800\n* Action 2: RT inhibitor only, costs 9800\n* Action 3: both RT inhibitor and protease inhibitor, costs 11600\n\nThe reward at each step is defined based on the current state and the action. We follow the optimization objective introduced in the original paper by Adams et. al. Intuitively, virus is bad and immune response is good.\n\n$$\\text{reward} = -0.1 * \\text{free}\\_\\text{virus} + 1000 * \\text{immune}\\_\\text{response} - \\text{action}\\_\\text{cost}$$", "_____no_output_____" ] ], [ [ "# Make the HIV environment and set random seed.\nenv = gym.make('HIV-v0')\nnp.random.seed(1)\nenv.seed(1)\ntorch.manual_seed(1)", "_____no_output_____" ] ], [ [ "## Compared Policies\n\n### Base Policy Class\nWe define a base `Policy` class. Every policy has a `sample_action` function that takes an observation and returns an action.\n\n### NNPolicy\nA 1-layer feed forward neural network with state dimension as input dimension, one hidden layer of 8 neurons (the state dim is 6), and action dimension as output dimension. We use batch normalization and ReLU activation.\n\n### No Treatment Policy\nNever apply any treatment, i.e. action = 0 (corresponds to $\\epsilon_1 = 0$ and $\\epsilon_2 = 0$ in the simulation) for all observations.\n\n### Max Treatment Policy\nNever apply max treatment, i.e. 
action = 3 (corresponds to $\epsilon_1 = 0.7$ and $\epsilon_2 = 0.3$ in the simulation) for all observations.\n\n### Random Policy\nTakes a random action regardless of the observation.", "_____no_output_____" ] ], [ [ "class NoTreatmentPolicy(utils.Policy):\n \"\"\"The policy of always no treatment.\"\"\"\n def __init__(self):\n super(NoTreatmentPolicy, self).__init__(env)\n \n def sample_action(self, obs):\n return 0\n \nclass MaxTreatmentPolicy(utils.Policy):\n \"\"\"The policy of always applying both RT inhibitor and protease inhibitor.\"\"\"\n def __init__(self):\n super(MaxTreatmentPolicy, self).__init__(env)\n \n def sample_action(self, obs):\n return 3\n \nclass RandomPolicy(utils.Policy):\n \"\"\"The policy of picking a random action at each time step.\"\"\"\n def __init__(self):\n super(RandomPolicy, self).__init__(env)\n \n def sample_action(self, obs):\n return np.random.randint(4)", "_____no_output_____" ] ], [ [ "\n## Policy Gradient Implementation Details\nFor a given state $s$, a policy can be written as a probability distribution $\pi_\theta(s, a)$ over actions $a$, where $\theta$ is the parameter of the policy.\n\nThe reinforcement learning objective is to learn a $\theta^*$ that maximizes the objective function\n\n $\;\;\;\; J(\theta) = E_{\tau \sim \pi_\theta}[r(\tau)]$,\n\nwhere $\tau$ is the trajectory sampled according to policy $\pi_\theta$ and $r(\tau)$ is the sum of discounted rewards on trajectory $\tau$.\n\nThe policy gradient approach is to take the gradient of this objective\n\n $\;\;\;\; \nabla_\theta J(\theta) = \nabla_\theta \int \pi_\theta(\tau)r(\tau)d\tau = \int \pi_\theta(\tau) \nabla_\theta \log\pi_\theta(\tau)r(\tau)d\tau = E_{\tau \sim \pi_\theta(\tau)}[\nabla_\theta \log \pi_\theta(\tau)r(\tau)]$\n\n### Reward to Go\nHere, $\log \pi_\theta(\tau) = \sum_{t=0}^T \log \pi_\theta(a_t \mid s_t)$ and $r(\tau) = \sum_{t=0}^T \gamma^t r_t$. Since the reward $r_t$ at time $t$ is not influenced by states and actions that happen after $t$, we can replace $ \log \pi_\theta(\tau)r(\tau)$ in the equation above by\n\n $\;\;\;\;\sum_{t=0}^T \log \pi_\theta(a_t \mid s_t) \; \gamma^t \sum_{t'=t}^T \gamma^{t'-t} r_{t'}$.\n \nThis technique is referred to as \"reward to go\". In practice, it often works better to omit the $\gamma^t$ factor. As a shorthand, we will denote $\sum_{t'=t}^T \gamma^{t'-t} r_{t'}$ as $Q_t$. In a sense, $Q_t$ represents the \"reward to go\".\n\n### Sampling\n\nIn practice this can be estimated by sampling trajectories $\tau^{(i)} = \{s_0^{(i)}, a_0^{(i)}, s_1^{(i)}, a_1^{(i)}, \cdots\} \sim \pi_\theta(\tau)$ and computing the gradient (w.r.t. $\theta$) of the loss function\n\n$\;\;\;\; Loss = -\frac{1}{N} \sum_i [\sum_{t=0}^T \log \pi_\theta(a_t^{(i)} \mid s_t^{(i)}) \;Q_t^{(i)}]$.\n\n### Baseline and Advantage\nIn practice, for better stability in training, we demean the \"reward to go\" $Q_t$ by a baseline $b_t$. This can be a constant or a neural network function of the state. The demeaned quantity $A_t = Q_t - b_t$ is referred to as the \"advantage\", as it represents how much better the action is compared to average. We can also hope for better stability by normalizing the advantage by $\tilde A_t = (A_t - mean(A_t)) / std(A_t)$. 
", "_____no_output_____" ] ], [ [ "learned_policy = utils.run_training_loop(\n env=env, n_iter=300, max_episode_length=100, batch_size=1000, learning_rate=1e-3)", "*****Iteration 0*****\n*****Iteration 10*****\n*****Iteration 20*****\n*****Iteration 30*****\n*****Iteration 40*****\n*****Iteration 50*****\n*****Iteration 60*****\n*****Iteration 70*****\n*****Iteration 80*****\n*****Iteration 90*****\n*****Iteration 100*****\n*****Iteration 110*****\n*****Iteration 120*****\n*****Iteration 130*****\n*****Iteration 140*****\n*****Iteration 150*****\n*****Iteration 160*****\n*****Iteration 170*****\n*****Iteration 180*****\n*****Iteration 190*****\n*****Iteration 200*****\n*****Iteration 210*****\n*****Iteration 220*****\n*****Iteration 230*****\n*****Iteration 240*****\n*****Iteration 250*****\n*****Iteration 260*****\n*****Iteration 270*****\n*****Iteration 280*****\n*****Iteration 290*****\n" ], [ "policies = {\n \"learned_policy\": learned_policy,\n \"no_treatment\": NoTreatmentPolicy(),\n \"max_treatment\": MaxTreatmentPolicy(),\n \"random\": RandomPolicy(),\n}\nutils.plot_sample_trajectory(policies, 100, state_names=wn.hiv.State.variable_names())", "Total reward for learned_policy: 4802102.5\nTotal reward for no_treatment: 1762320.5\nTotal reward for max_treatment: 2147030.5\nTotal reward for random: 2171225.0\n" ] ], [ [ "Note: To make sure the results are robust, we also try this with different random seeds. For example, to set 4 as the random seed, run:\n```\nnp.random.seed(4)\nenv.seed(4)\ntorch.manual_seed(4)\n```\n\nThe results are similar qualitatively but the exact rewards can vary from run to run based on the seed.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
e7fde5e669ce74ad52f23ecb606e46dc2773e01e
29,618
ipynb
Jupyter Notebook
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
02cf7a80d01ab7b0c939113ee8617ef2ad836999
[ "Apache-2.0" ]
null
null
null
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
02cf7a80d01ab7b0c939113ee8617ef2ad836999
[ "Apache-2.0" ]
null
null
null
_notebooks/2020-02-20-test.ipynb
hasanaliyev/notes
02cf7a80d01ab7b0c939113ee8617ef2ad836999
[ "Apache-2.0" ]
null
null
null
38.415045
2,126
0.500979
[ [ [ "# Testing Fastpages Notebook Blog Post\n> A tutorial of fastpages for Jupyter notebooks.\n\n- toc: true \n- badges: true\n- comments: true\n- categories: [jupyter]\n- image: images/chart-preview.png", "_____no_output_____" ], [ "# About\n\nThis notebook is a demonstration of some of capabilities of [fastpages](https://github.com/fastai/fastpages) with notebooks.\n\n\nWith `fastpages` you can save your jupyter notebooks into the `_notebooks` folder at the root of your repository, and they will be automatically be converted to Jekyll compliant blog posts!\n", "_____no_output_____" ], [ "## Front Matter\n\nThe first cell in your Jupyter Notebook or markdown blog post contains front matter. Front matter is metadata that can turn on/off options in your Notebook. It is formatted like this:\n\n```\n# Title\n> Awesome summary\n\n- toc: true- branch: master- badges: true\n- comments: true\n- author: Hamel Husain & Jeremy Howard\n- categories: [fastpages, jupyter]\n```\n\n- Setting `toc: true` will automatically generate a table of contents\n- Setting `badges: true` will automatically include GitHub and Google Colab links to your notebook.\n- Setting `comments: true` will enable commenting on your blog post, powered by [utterances](https://github.com/utterance/utterances).\n\nMore details and options for front matter can be viewed on the [front matter section](https://github.com/fastai/fastpages#front-matter-related-options) of the README.", "_____no_output_____" ], [ "## Markdown Shortcuts", "_____no_output_____" ], [ "A `#hide` comment at the top of any code cell will hide **both the input and output** of that cell in your blog post.\n\nA `#hide_input` comment at the top of any code cell will **only hide the input** of that cell.", "_____no_output_____" ] ], [ [ "#hide_input\nprint('The comment #hide_input was used to hide the code that produced this.')", "The comment #hide_input was used to hide the code that produced this.\n" ] ], [ [ "put a `#collapse-hide` flag at the top of any cell if you want to **hide** that cell by default, but give the reader the option to show it:", "_____no_output_____" ] ], [ [ "#collapse-hide\nimport pandas as pd\nimport altair as alt", "_____no_output_____" ] ], [ [ "put a `#collapse-show` flag at the top of any cell if you want to **show** that cell by default, but give the reader the option to hide it:", "_____no_output_____" ] ], [ [ "#collapse-show\ncars = 'https://vega.github.io/vega-datasets/data/cars.json'\nmovies = 'https://vega.github.io/vega-datasets/data/movies.json'\nsp500 = 'https://vega.github.io/vega-datasets/data/sp500.csv'\nstocks = 'https://vega.github.io/vega-datasets/data/stocks.csv'\nflights = 'https://vega.github.io/vega-datasets/data/flights-5k.json'", "_____no_output_____" ] ], [ [ "## Interactive Charts With Altair\n\nCharts made with Altair remain interactive. 
Example charts taken from [this repo](https://github.com/uwdata/visualization-curriculum), specifically [this notebook](https://github.com/uwdata/visualization-curriculum/blob/master/altair_interaction.ipynb).", "_____no_output_____" ] ], [ [ "# hide\ndf = pd.read_json(movies) # load movies data\ngenres = df['Major_Genre'].unique() # get unique field values\ngenres = list(filter(lambda d: d is not None, genres)) # filter out None values\ngenres.sort() # sort alphabetically", "_____no_output_____" ], [ "#hide\nmpaa = ['G', 'PG', 'PG-13', 'R', 'NC-17', 'Not Rated']", "_____no_output_____" ] ], [ [ "### Example 1: DropDown", "_____no_output_____" ] ], [ [ "# single-value selection over [Major_Genre, MPAA_Rating] pairs\n# use specific hard-wired values as the initial selected values\nselection = alt.selection_single(\n name='Select',\n fields=['Major_Genre', 'MPAA_Rating'],\n init={'Major_Genre': 'Drama', 'MPAA_Rating': 'R'},\n bind={'Major_Genre': alt.binding_select(options=genres), 'MPAA_Rating': alt.binding_radio(options=mpaa)}\n)\n \n# scatter plot, modify opacity based on selection\nalt.Chart(movies).mark_circle().add_selection(\n selection\n).encode(\n x='Rotten_Tomatoes_Rating:Q',\n y='IMDB_Rating:Q',\n tooltip='Title:N',\n opacity=alt.condition(selection, alt.value(0.75), alt.value(0.05))\n)", "_____no_output_____" ] ], [ [ "### Example 2: Tooltips", "_____no_output_____" ] ], [ [ "alt.Chart(movies).mark_circle().add_selection(\n alt.selection_interval(bind='scales', encodings=['x'])\n).encode(\n x='Rotten_Tomatoes_Rating:Q',\n y=alt.Y('IMDB_Rating:Q', axis=alt.Axis(minExtent=30)), # use min extent to stabilize axis title placement\n tooltip=['Title:N', 'Release_Date:N', 'IMDB_Rating:Q', 'Rotten_Tomatoes_Rating:Q']\n).properties(\n width=600,\n height=400\n)", "_____no_output_____" ] ], [ [ "### Example 3: More Tooltips", "_____no_output_____" ] ], [ [ "# select a point for which to provide details-on-demand\nlabel = alt.selection_single(\n encodings=['x'], # limit selection to x-axis value\n on='mouseover', # select on mouseover events\n nearest=True, # select data point nearest the cursor\n empty='none' # empty selection includes no data points\n)\n\n# define our base line chart of stock prices\nbase = alt.Chart().mark_line().encode(\n alt.X('date:T'),\n alt.Y('price:Q', scale=alt.Scale(type='log')),\n alt.Color('symbol:N')\n)\n\nalt.layer(\n base, # base line chart\n \n # add a rule mark to serve as a guide line\n alt.Chart().mark_rule(color='#aaa').encode(\n x='date:T'\n ).transform_filter(label),\n \n # add circle marks for selected time points, hide unselected points\n base.mark_circle().encode(\n opacity=alt.condition(label, alt.value(1), alt.value(0))\n ).add_selection(label),\n\n # add white stroked text to provide a legible background for labels\n base.mark_text(align='left', dx=5, dy=-5, stroke='white', strokeWidth=2).encode(\n text='price:Q'\n ).transform_filter(label),\n\n # add text labels for stock prices\n base.mark_text(align='left', dx=5, dy=-5).encode(\n text='price:Q'\n ).transform_filter(label),\n \n data=stocks\n).properties(\n width=700,\n height=400\n)", "_____no_output_____" ] ], [ [ "## Data Tables\n\nYou can display tables per the usual way in your blog:", "_____no_output_____" ] ], [ [ "movies = 'https://vega.github.io/vega-datasets/data/movies.json'\ndf = pd.read_json(movies)\n# display table with pandas\ndf[['Title', 'Worldwide_Gross', \n 'Production_Budget', 'Distributor', 'MPAA_Rating', 'IMDB_Rating', 'Rotten_Tomatoes_Rating']].head()", 
"_____no_output_____" ] ], [ [ "## Images\n\n### Local Images\n\nYou can reference local images and they will be copied and rendered on your blog automatically. You can include these with the following markdown syntax:\n\n`![](my_icons/fastai_logo.png)`", "_____no_output_____" ], [ "![](my_icons/fastai_logo.png)", "_____no_output_____" ], [ "### Remote Images\n\nRemote images can be included with the following markdown syntax:\n\n`![](https://image.flaticon.com/icons/svg/36/36686.svg)`", "_____no_output_____" ], [ "![](https://image.flaticon.com/icons/svg/36/36686.svg)", "_____no_output_____" ], [ "### Animated Gifs\n\nAnimated Gifs work, too!\n\n`![](https://upload.wikimedia.org/wikipedia/commons/7/71/ChessPawnSpecialMoves.gif)`", "_____no_output_____" ], [ "![](https://upload.wikimedia.org/wikipedia/commons/7/71/ChessPawnSpecialMoves.gif)", "_____no_output_____" ], [ "### Captions\n\nYou can include captions with markdown images like this:\n\n```\n![](https://www.fast.ai/images/fastai_paper/show_batch.png \"Credit: https://www.fast.ai/2020/02/13/fastai-A-Layered-API-for-Deep-Learning/\")\n```\n\n\n![](https://www.fast.ai/images/fastai_paper/show_batch.png \"Credit: https://www.fast.ai/2020/02/13/fastai-A-Layered-API-for-Deep-Learning/\")\n\n\n\n", "_____no_output_____" ], [ "# Other Elements", "_____no_output_____" ], [ "## Tweetcards\n\nTyping `> twitter: https://twitter.com/jakevdp/status/1204765621767901185?s=20` will render this:\n\n> twitter: https://twitter.com/jakevdp/status/1204765621767901185?s=20", "_____no_output_____" ], [ "## Youtube Videos\n\nTyping `> youtube: https://youtu.be/XfoYk_Z5AkI` will render this:\n\n\n> youtube: https://youtu.be/XfoYk_Z5AkI", "_____no_output_____" ], [ "## Boxes / Callouts \n\nTyping `> Warning: There will be no second warning!` will render this:\n\n\n> Warning: There will be no second warning!\n\n\n\nTyping `> Important: Pay attention! It's important.` will render this:\n\n> Important: Pay attention! It's important.\n\n\n\nTyping `> Tip: This is my tip.` will render this:\n\n> Tip: This is my tip.\n\n\n\nTyping `> Note: Take note of this.` will render this:\n\n> Note: Take note of this.\n\n\n\nTyping `> Note: A doc link to [an example website: fast.ai](https://www.fast.ai/) should also work fine.` will render in the docs:\n\n> Note: A doc link to [an example website: fast.ai](https://www.fast.ai/) should also work fine.", "_____no_output_____" ], [ "## Footnotes\n\nYou can have footnotes in notebooks, however the syntax is different compared to markdown documents. [This guide provides more detail about this syntax](https://github.com/fastai/fastpages/blob/master/_fastpages_docs/NOTEBOOK_FOOTNOTES.md), which looks like this:\n\n```\n{% raw %}For example, here is a footnote {% fn 1 %}.\nAnd another {% fn 2 %}\n{{ 'This is the footnote.' | fndetail: 1 }}\n{{ 'This is the other footnote. You can even have a [link](www.github.com)!' | fndetail: 2 }}{% endraw %}\n```\n\nFor example, here is a footnote {% fn 1 %}.\n\nAnd another {% fn 2 %}\n\n{{ 'This is the footnote.' | fndetail: 1 }}\n{{ 'This is the other footnote. You can even have a [link](www.github.com)!' | fndetail: 2 }}", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
e7fdf20a46a9095768299e65dcd1ac3ba514d5b1
165,793
ipynb
Jupyter Notebook
module3-databackedassertions/LS_DS_113_Making_Data_backed_Assertions_Assignment.ipynb
davidanagy/DS-Unit-1-Sprint-1-Dealing-With-Data
67daffde8ef4d794fd92189e7057af14aaf78fde
[ "MIT" ]
null
null
null
module3-databackedassertions/LS_DS_113_Making_Data_backed_Assertions_Assignment.ipynb
davidanagy/DS-Unit-1-Sprint-1-Dealing-With-Data
67daffde8ef4d794fd92189e7057af14aaf78fde
[ "MIT" ]
null
null
null
module3-databackedassertions/LS_DS_113_Making_Data_backed_Assertions_Assignment.ipynb
davidanagy/DS-Unit-1-Sprint-1-Dealing-With-Data
67daffde8ef4d794fd92189e7057af14aaf78fde
[ "MIT" ]
null
null
null
100.602549
30,010
0.70956
[ [ [ "<a href=\"https://colab.research.google.com/github/davidanagy/DS-Unit-1-Sprint-1-Dealing-With-Data/blob/master/module3-databackedassertions/LS_DS_113_Making_Data_backed_Assertions_Assignment.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Lambda School Data Science - Making Data-backed Assertions\n\nThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it.", "_____no_output_____" ], [ "## Assignment - what's going on here?\n\nConsider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.\n\nTry to figure out which variables are possibly related to each other, and which may be confounding relationships.\n\nTry and isolate the main relationships and then communicate them using crosstabs and graphs. Share any cool graphs that you make with the rest of the class in Slack!", "_____no_output_____" ] ], [ [ "import pandas as pd\n\ndf = pd.read_csv('https://raw.githubusercontent.com/davidanagy/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv')\n\ndf.head()\n\n# !pip install pandas==0.23.4", "_____no_output_____" ], [ "# Weight seems to make the most sense as a dependent variable. We would expect weight to go down as exercise time goes up--but what effect does age have?\n\npd.crosstab(df['exercise_time'], df['weight']) # This is useless; I need bins for both columns.\n\nweight_bins = pd.cut(df['weight'], 5)\n\ntime_bins = pd.cut(df['exercise_time'], 5)\n\nct1 = pd.crosstab(time_bins, weight_bins, normalize='columns')\nct1\n\n# Data is a little messy. The lowest weights have a high % of people with lots of exercise, and the highest weight has 0% of such people, though the sample is small.\n# However, looking at those who don't exercise, their percentage at the highest weight goes *down*--and the second-least-exercise group has similarly messy data.\n# So what effect does age have?", "_____no_output_____" ], [ "age_bins = pd.cut(df['age'], 5)\n\nct2 = pd.crosstab(age_bins, weight_bins, normalize='columns')\nct2", "_____no_output_____" ], [ "ct3 = pd.crosstab(age_bins, time_bins, normalize='columns')\nct3 # The relationships still aren't clear to me. I'm going to try more things.", "_____no_output_____" ], [ "ct1.plot(kind='bar');", "_____no_output_____" ], [ "ct4 = pd.crosstab(weight_bins, [time_bins, age_bins], normalize='columns')\nct4", "_____no_output_____" ], [ "ct5 = ct4.iloc[:, [0,1,2,3,4]]\nct5", "_____no_output_____" ], [ "ct5.plot(kind='bar'); # I think it would be more helpful to switch time and age.", "_____no_output_____" ], [ "ct6 = pd.crosstab(weight_bins, [age_bins, time_bins], normalize='columns')\nct6", "_____no_output_____" ], [ "age_bins2 = pd.cut(df['age'], 3)\ntime_bins2 = pd.cut(df['exercise_time'], 3)\nweight_bins2 = pd.cut(df['weight'], 3)\nct7 = pd.crosstab(weight_bins2, [age_bins2, time_bins2], normalize='columns')\nct7\n\n# By reducing the number of bins, I think this shows the relationships the clearest. 
Regardless of age, the low weight goes up as exercise time does,\n# the middle weight is more of a standard distribution vis-a-vis exercise time, and the highest weight goes down as exercise time increases\n# (and is notably nonexistent for the largest exercise time).", "_____no_output_____" ], [ "ct7.plot(kind='bar') # This is not what I want. I have to flip the axes.", "_____no_output_____" ], [ "ct8 = pd.crosstab([age_bins2, time_bins2], weight_bins2, normalize='columns')\nct8", "_____no_output_____" ], [ "ct8.plot(kind='bar');\n\n# For all ages, the highest weight class drops dramatically once they get a moderate amount of exercise (and disappears entirely with a lot of exercise).\n# The lowest age class conforms to my hypothesis: the lowest weight class rises as exercise time goes up, while the moderate weight class is more of a bell curve.\n# The medium age class mostly conforms to my hypothesis, except the moderate weight class dips slightly as exercise goes from low to medium.\n# The oldest age class is interesting in that the percentage of old people of *any* weight drops as exercise increases. This is likely because older people\n# are less likely to exercise, period--see above crosstabs.", "_____no_output_____" ] ], [ [ "### Assignment questions\n\nAfter you've worked on some code, answer the following questions in this text block:\n\n1. What are the variable types in the data?\n\nIn this context, they're all continuous; each variable takes on\na bunch of different values.\n\n2. What are the relationships between the variables?\n\nWeight and exercise time have a negative correlation, as do age and exercise\ntime. Age and weight have a positive correlation, albeit a messy one.\n\n3. Which relationships are \"real\", and which spurious?\n\nThe first two are \"real.\" The fact that older people tend to be more overweight\nis explained by the fact that they're also more likely to exercise less.\nOn the other hand, it makes sense that exercise (on average) reduces one's\nweight, and that as one gets older one is (on average) less likely to exercise.", "_____no_output_____" ], [ "## Stretch goals and resources\n\nFollowing are *optional* things for you to take a look at. Focus on the above assignment first, and make sure to commit and push your changes to GitHub.\n\n- [Spurious Correlations](http://tylervigen.com/spurious-correlations)\n- [NIH on controlling for confounding variables](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4017459/)\n\nStretch goals:\n\n- Produce your own plot inspired by the Spurious Correlation visualizations (and consider writing a blog post about it - both the content and how you made it)\n- Pick one of the techniques that NIH highlights for confounding variables - we'll be going into many of them later, but see if you can find which Python modules may help (hint - check scikit-learn)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]
e7fdf5e5e7dbbac71e389cd1a1f62c596facd891
188,547
ipynb
Jupyter Notebook
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
638e490fdd00e57fa8cdc7fcbc946307babd6d0a
[ "Apache-2.0" ]
null
null
null
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
638e490fdd00e57fa8cdc7fcbc946307babd6d0a
[ "Apache-2.0" ]
null
null
null
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
638e490fdd00e57fa8cdc7fcbc946307babd6d0a
[ "Apache-2.0" ]
null
null
null
143.490868
54,552
0.846553
[ [ [ "import numpy as np\nimport pandas as pd\nfrom tqdm import tqdm\nimport matplotlib.pyplot as plt\n\ndf = pd.read_csv('TED_Preprocessed.csv')\n\ndf = df[df['views'] < 100000]\n\ndel df['video_link']\ndel df['date_month_year']\n\ndf.head()", "_____no_output_____" ] ], [ [ "#### 1. Finding Correlation from Scratch", "_____no_output_____" ] ], [ [ "factor = []\n\nfor i in df.values: \n factor.append(round(i[4]/i[2],5)) # i[2] = Views, i[4] = Comments\n \ndf['view_to_comments'] = factor\n\ndf.head()", "_____no_output_____" ], [ "print(\"Minimum : \", min(df['view_to_comments']))\nprint(\"Maximum : \", max(df['view_to_comments']))\n\nprint(df['view_to_comments'].mode())", "Minimum : 0.00013\nMaximum : 0.05427\n0 0.00137\ndtype: float64\n" ] ], [ [ "#### 2. Adding Predicted Comments Column", "_____no_output_____" ] ], [ [ "comments = []\n\nfor i in df['views']:\n comments.append(int(i * .00137))\n \ndf['pred_comments'] = comments\n\ndf.head()", "_____no_output_____" ] ], [ [ "## Ploting Correlation", "_____no_output_____" ], [ "#### 3. Correlation b/w Comments and Views", "_____no_output_____" ] ], [ [ "data = []\n\nfor i in df.values:\n data.append([i[2],i[4]])\n \ndf_ = pd.DataFrame(data, columns = ['views','comments'])\n\nviews = list(df_.sort_values(by = 'views')['views'])\ncomments = list(df_.sort_values(by = 'views')['comments'])\n\nfig, ax = plt.subplots(figsize = (15,4))\n\nax.plot(views,comments)\n\n\nplt.show()", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "#### 4. Correlation b/w Views & [Comments, Predicted Comments]", "_____no_output_____" ] ], [ [ "data = []\n\nfor i in df.values:\n \n data.append([i[2],i[4],i[10]])\n\n \ndf_ = pd.DataFrame(data, columns = ['views','comments','pred_comments'])\n\nviews = list(df_.sort_values(by = 'views')['views'])\nlikes = list(df_.sort_values(by = 'views')['comments'])\nlikes_ = list(df_.sort_values(by = 'views')['pred_comments'])\n\nfig, ax = plt.subplots(figsize = (15,4))\n\nplt.plot(views,likes , label = 'Actual')\nplt.plot(views,likes_, label = 'Predicted')\n\nplt.legend()\n\nplt.show()", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "## 5. Finding Loss Using MSE", "_____no_output_____" ], [ "#### 5.1) Finding M-Error", "_____no_output_____" ] ], [ [ "total_error = []\n\nfor i in df.values:\n \n t = i[4]-i[10] # i[4] is Actual Comments, i[10] is Predicted Comments\n \n if (t >= 0):\n total_error.append(t)\n else:\n total_error.append(-t)\n \nsum(total_error)/len(total_error)\n", "_____no_output_____" ] ], [ [ "#### 5.2) Finding View to Comments Ratio", "_____no_output_____" ] ], [ [ "view_to_comments = []\n\nfor i in df.values: \n view_to_comments.append(round(i[4]/i[2],5))\n \ndf['view_to_comments'] = view_to_comments\n\nst = int(df['view_to_comments'].min() * 100000)\nend = int(df['view_to_comments'].max() * 100000)\n\nfactors = []\n\nfor i in range(st,end + 1 , 1):\n factors.append(i/100000)", "_____no_output_____" ] ], [ [ "#### 5.3) Predicting Comments for Specific Factor", "_____no_output_____" ] ], [ [ "comments_ = []\n\nfor i in df['views']: \n comments_.append(int(i * .01388))", "_____no_output_____" ] ], [ [ "### 6. 
Combining Factor + Error + Ratios", "_____no_output_____" ] ], [ [ "comments = np.array(df['comments'])\n\nerror = []\n\nfor i in tqdm(range(st,end + 1 , 1)): # Creating start and ending range for factors\n factor = i/100000 \n \n comments_ = []\n \n for i in df['views']: # Predicting comments for a specific factor \n comments_.append(int(factor * i))\n\n comments_ = np.array(comments_) \n \n total_error = [] \n \n for i in range(len(comments)): # Error between actual and predicted comments for one factor\n l = comments[i] - comments_[i]\n if (l >= 0): # Taking the absolute value\n total_error.append(l)\n else:\n total_error.append(-l)\n \n total_error = np.array(total_error) \n \n error.append([factor, int(total_error.mean())]) # Finding error for specific factor\n \nerror = pd.DataFrame(error, columns = ['Factor','Error'])", "100%|██████████████████████████████████████| 5416/5416 [00:07<00:00, 770.16it/s]\n" ] ], [ [ "#### Finding the Best Factor that Fits the Comments and Views", "_____no_output_____" ] ], [ [ "final_factor = error.sort_values(by = 'Error').head(10)['Factor'].mean()", "_____no_output_____" ], [ "final_factor", "_____no_output_____" ], [ "comments_ = []\n\nfor i in df['views']:\n comments_.append(int(i * final_factor))\n \ndf['pred_comments'] = comments_\n\ndf.head()", "_____no_output_____" ] ], [ [ "### Actual vs. Predicted Comments with the Best-Fit Factor", "_____no_output_____" ] ], [ [ "data = []\n\nfor i in df.values:\n \n data.append([i[2],i[4],i[10]])\n\n \ndf_ = pd.DataFrame(data, columns = ['views','comments','pred_comments'])\n\nviews = list(df_.sort_values(by = 'views')['views'])\nlikes = list(df_.sort_values(by = 'views')['comments'])\nlikes_ = list(df_.sort_values(by = 'views')['pred_comments'])\n\nfig, ax = plt.subplots(figsize = (15,4))\n\nplt.plot(views,likes , label = 'Actual')\nplt.plot(views,likes_, label = 'Predicted')\n\nplt.legend()\n\nplt.xlabel('Views of the Video')\nplt.ylabel('Number of comments')\n\nplt.show()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
e7fe2a57bea6774a7b5406b04e5cbf848d4280ef
136,797
ipynb
Jupyter Notebook
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
fbcbcbf394c790bb52079a64d7aece1ea8cd7310
[ "MIT" ]
null
null
null
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
fbcbcbf394c790bb52079a64d7aece1ea8cd7310
[ "MIT" ]
null
null
null
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
fbcbcbf394c790bb52079a64d7aece1ea8cd7310
[ "MIT" ]
null
null
null
117.826873
78,444
0.841649
[ [ [ "## Multicollinearity and Regression Analysis\nIn this tutorial, we will be using a spatial dataset of county-level election and demographic statistics for the United States. This time, we'll explore different methods to diagnose and account for multicollinearity in our data. Specifically, we'll calculate variance inflation factor (VIF), and compare parameter estimates and model fit in a multivariate regression predicting 2016 county voting preferences using an OLS model, a ridge regression, a lasso regression, and an elastic net regression.\n\nObjectives:\n* ***Calculate a variance inflation factor to diagnose multicollinearity.***\n* ***Use geographicall weighted regression to identify if the multicollinearity is scale dependent.***\n* ***Interpret model summary statistics.***\n* ***Describe how multicollinearity impacts stability in parameter esimates.***\n* ***Explain the variance/bias tradeoff and describe how to use it to improve models***\n* ***Draw a conclusion based on contrasting models.***\n\nReview: \n* [Dormann, C. et al. (2013). Collinearity: a review of methods to deal with it and a simulation study evaluating their performance. Ecography, 36(1), 27-46.](https://onlinelibrary.wiley.com/doi/full/10.1111/j.1600-0587.2012.07348.x)\n", "_____no_output_____" ] ], [ [ "import numpy as np\nimport geopandas as gpd\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport statsmodels.api as sm\nfrom statsmodels.stats.outliers_influence import variance_inflation_factor\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import RepeatedKFold\nfrom sklearn.linear_model import Ridge\nfrom sklearn.linear_model import Lasso\nfrom sklearn.linear_model import ElasticNet\nfrom numpy import mean\nfrom numpy import std\nfrom numpy import absolute\nfrom libpysal.weights.contiguity import Queen\nimport libpysal\nfrom statsmodels.api import OLS\nsns.set_style('white')", "_____no_output_____" ] ], [ [ "First, we're going to load the 'Elections' dataset from the libpysal library, which is a very easy to use API that accesses the Geodata Center at the University of Chicago.\n\n* More on spatial data science resources from UC: https://spatial.uchicago.edu/\n* A list of datasets available through lipysal: https://geodacenter.github.io/data-and-lab//", "_____no_output_____" ] ], [ [ "from libpysal.examples import load_example\nelections = load_example('Elections')\n#note the folder where your data now lives:\n", "_____no_output_____" ], [ "#First, let's see what files are available in the 'Elections' data example\nelections.get_file_list()", "_____no_output_____" ] ], [ [ "When you are out in the world doing research, you often will not find a ready-made function to download your data. That's okay! You know how to get this dataset without using pysal! Do a quick internal review of online data formats and automatic data downloads.\n\n### TASK 1: Use urllib functions to download this file directly from the internet to you H:/EnvDatSci folder (not your git repository). 
Extract the zipped file you've downloaded into a subfolder called H:/EnvDatSci/elections.", "_____no_output_____" ] ], [ [ "# Task 1 code here:\n#import required function:\nimport urllib.request\n\n#define online filepath (aka url):\nurl = \"https://geodacenter.github.io/data-and-lab//data/election.zip\"\n\n#define local filepath:\nlocal = '../../elections.zip'\n\n#download elections data:\nurllib.request.urlretrieve(url, local)\n\n#unzip file: see if google can help you figure this one out!\nimport shutil\nshutil.unpack_archive(local, \"../../../\")", "_____no_output_____" ] ], [ [ "### TASK 2: Use geopandas to read in this shapefile. Call your geopandas.DataFrame \"votes\"", "_____no_output_____" ] ], [ [ "# TASK 2: Use geopandas to read in this shapefile. Call your geopandas.DataFrame \"votes\"\nvotes = gpd.read_file(\"H:\\EnvDataSci\\election/election.shp\")", "_____no_output_____" ] ], [ [ "### EXTRA CREDIT TASK (+2pts): use os to delete the elections data downloaded by pysal in your C: drive that you are no longer using.", "_____no_output_____" ] ], [ [ "# Extra credit task:\n", "_____no_output_____" ], [ "#Let's view the shapefile to get a general idea of the geometry we're looking at:\n%matplotlib inline\nvotes.plot()", "_____no_output_____" ], [ "#View the first few line]s of the dataset\nvotes.head()", "_____no_output_____" ], [ "#Since there are too many columns for us to view on a signle page using \"head\", we can just print out the column names so we have them all listed for reference\nfor col in votes.columns: \n print(col) ", "STATEFP\nCOUNTYFP\nGEOID\nALAND\nAWATER\narea_name\nstate_abbr\nPST045214\nPST040210\nPST120214\nPOP010210\nAGE135214\nAGE295214\nAGE775214\nSEX255214\nRHI125214\nRHI225214\nRHI325214\nRHI425214\nRHI525214\nRHI625214\nRHI725214\nRHI825214\nPOP715213\nPOP645213\nPOP815213\nEDU635213\nEDU685213\nVET605213\nLFE305213\nHSG010214\nHSG445213\nHSG096213\nHSG495213\nHSD410213\nHSD310213\nINC910213\nINC110213\nPVY020213\nBZA010213\nBZA110213\nBZA115213\nNES010213\nSBO001207\nSBO315207\nSBO115207\nSBO215207\nSBO515207\nSBO415207\nSBO015207\nMAN450207\nWTN220207\nRTN130207\nRTN131207\nAFN120207\nBPS030214\nLND110210\nPOP060210\nDemvotes16\nGOPvotes16\ntotal_2016\npct_dem_16\npct_gop_16\ndiff_2016\npct_pt_16\ntotal_2012\nDemvotes12\nGOPvotes12\ncounty_fip\nstate_fips\npct_dem_12\npct_gop_12\ndiff_2012\npct_pt_12\ngeometry\n" ] ], [ [ "#### You can use pandas summary statistics to get an idea of how county-level data varies across the United States. \n### TASK 3: For example, how did the county mean percent Democratic vote change between 2012 (pct_dem_12) and 2016 (pct_dem_16)?\n\nLook here for more info on pandas summary statistics:https://www.earthdatascience.org/courses/intro-to-earth-data-science/scientific-data-structures-python/pandas-dataframes/run-calculations-summary-statistics-pandas-dataframes/", "_____no_output_____" ] ], [ [ "#Task 3\ndemchange = votes[\"pct_dem_16\"].mean() - votes[\"pct_dem_12\"].mean()\nprint(\"The mean percent Democrative vote changed by \", demchange, \"between 2012 and 2016.\")\n", "The mean percent Democrative vote changed by -0.06783446699806961 between 2012 and 2016.\n" ] ], [ [ "We can also plot histograms of the data. 
Below, smoothed histograms from the seaborn package (imported as sns) let us get an idea of the distribution of percent democratic votes in 2012 (left) and 2016 (right).", "_____no_output_____" ] ], [ [ "# Plot histograms:\nf,ax = plt.subplots(1,2, figsize=(2*3*1.6, 2))\nfor i,col in enumerate(['pct_dem_12','pct_dem_16']):\n sns.kdeplot(votes[col].values, shade=True, color='slategrey', ax=ax[i])\n ax[i].set_title(col.split('_')[1])", "_____no_output_____" ], [ "# Plot spatial distribution of # dem vote in 2012 and 2016 with histogram.\nf,ax = plt.subplots(2,2, figsize=(1.6*6 + 1,2.4*3), gridspec_kw=dict(width_ratios=(6,1)))\nfor i,col in enumerate(['pct_dem_12','pct_dem_16']):\n votes.plot(col, linewidth=.05, cmap='RdBu', ax=ax[i,0])\n ax[i,0].set_title(['2012','2016'][i] + \"% democratic vote\")\n ax[i,0].set_xticklabels('')\n ax[i,0].set_yticklabels('')\n sns.kdeplot(votes[col].values, ax=ax[i,1], vertical=True, shade=True, color='slategrey')\n ax[i,1].set_xticklabels('')\n ax[i,1].set_ylim(-1,1)\nf.tight_layout()\nplt.show()", "_____no_output_____" ] ], [ [ "### TASK 4: Make a new column on your geopandas dataframe called \"pct_dem_change\" and plot it using the syntax above. Explain the plot.", "_____no_output_____" ] ], [ [ "# Task 4: add new column pct_dem_change to votes:\nvotes[\"pct_dem_change\"] = votes.pct_dem_16 - votes.pct_dem_12\n\nf, ax = plt\nplt.show(votes.pct_dem_change)", "_____no_output_____" ], [ "#Task 4: plot your pct_dem_change variable on a map:\n", "_____no_output_____" ] ], [ [ "Click on this url to learn more about the variables in this dataset: https://geodacenter.github.io/data-and-lab//county_election_2012_2016-variables/\nAs you can see, there are a lot of data values available in this dataset. Let's say we want to learn more about what county-level factors influence percent change in democratic vote between (pct_dem_change).\n\nLooking at the data description on the link above, you see that this is an exceptionally large dataset with many variables. During lecture, we discussed how there are two types of multicollinearity in our data:\n\n* *Intrinsic multicollinearity:* is an artifact of how we make observations. Often our measurements serve as proxies for some latent process (for example, we can measure percent silt, percent sand, and percent clay as proxies for the latent variable of soil texture). There will be slight variability in the information content between each proxy measurement, but they will not be independent of one another.\n\n* *Incidental collinearity:* is an artifact of how we sample complex populations. If we collect data from a subsample of the landscape where we don't see all combinations of our predictor variables (do not have good cross replication across our variables). We often induce collinearity in our data just because we are limitted in our ability to sample the environment at the scale of temporal/spatial variability of our process of interest. Incidental collinearity is a model formulation problem.(See here for more info on how to avoid it: https://people.umass.edu/sdestef/NRC%20601/StudyDesignConcepts.pdf)", "_____no_output_____" ], [ "### TASK 5: Looking at the data description, pick two variables that you believe will be intrinsically multicollinear. List and describe these variables. Why do you think they will be collinear? 
Is this an example of *intrinsic* or *incidental* collinearity?\n\n*Click on this box to enter text*\nI chose: \n* \"RHI125214\", #White alone, percent, 2014\n* \"RHI225214\", #Black or African American alone, percent, 2014\nThese variables are intrinsically multicollinear. A decrease in one of a finite number of races implicitly signifies an increase in another race.", "_____no_output_____" ], [ "## Multivariate regression in observational data:\nOur next step is to formulate our predictive/diagnostic model. We want to create a subset of the \"votes\" geopandas data frame that contains ten predictor variables and our response variable (pct_pt_16) two variables you selected under TASK 1. First, create a list of the variables you'd like to select.\n\n### TASK 6: Create a subset of votes called \"my_list\" containing only your selected predictor variables. Make sure you use the two variables selected under TASK 3, and eight additional variables", "_____no_output_____" ] ], [ [ "# Task 4: create a subset of votes called \"my list\" with all your subset variables.\n#my_list = [\"pct_pt_16\", <list your variables here>] ", "_____no_output_____" ], [ "#check to make sure all your columns are there:\nvotes[my_list].head()", "_____no_output_____" ] ], [ [ "### Scatterplot matrix\nWe call the process of getting to know your data (ranges and distributions of the data, as well as any relationships between variables) \"exploratory data analysis\". Pairwise plots of your variables, called scatterplots, can provide a lot of insight into the type of relationships you have between variables. A scatterplot matrix is a pairwise comparison of all variables in your dataset.", "_____no_output_____" ] ], [ [ "#Use seaborn.pairplot to plot a scatterplot matrix of you 10 variable subset:\nsns.pairplot(votes[my_list])", "_____no_output_____" ] ], [ [ "### TASK 7: Do you observe any collinearity in this dataset? How would you describe the relationship between your two \"incidentally collinear\" variables that you selected based on looking at variable descriptions? \n\n*Type answer here*\n\n\n### TASK 8: What is plotted on the diagonal panels of the scatterplot matrix?\n\n*Type answer here*\n", "_____no_output_____" ], [ "## Diagnosing collinearity globally:\nDuring class, we discussed the Variance Inflation Factor, which describes the magnitude of variance inflation that can be expected in an OLS parameter estimate for a given variable *given pairwise collinearity between that variable and another variable*. ", "_____no_output_____" ] ], [ [ "#VIF = 1/(1-R2) of a pairwise OLS regression between two predictor variables\n#We can use a built-in function \"variance_inflation_factor\" from statsmodel.api to calculate VIF\n#Learn more about the function\n?variance_inflation_factor", "_____no_output_____" ], [ "#Calculate VIFs on our dataset\nvif = pd.DataFrame()\nvif[\"VIF Factor\"] = [variance_inflation_factor(votes[my_list[1:10]].values, i) for i in range(votes[my_list[1:10]].shape[1])]\nvif[\"features\"] = votes[my_list[1:10]].columns\n", "_____no_output_____" ], [ "vif.round()", "_____no_output_____" ] ], [ [ "### Collinearity is always present in observational data. When is it a problem?\nGenerally speaking, VIF > 10 are considered \"too much\" collinearity. But this value is somewhat arbitrary: the extent to which variance inflation will impact your analysis is highly context dependent. There are two primary contexts where variance inflation is problematic:\n\n 1\\. 
**You are using your analysis to evaluate variable importance:** If you are using parameter estimates from your model to diagnose which observations have physically important relationships with your response variable, variance inflation can make an important predictor look unimportant, and parameter estimates will be highly leveraged by small changes in the data. \n\n 2\\. **You want to use your model to make predictions in a situation where the specific structure of collinearity between variables may have shifted:** When training a model on collinear data, the model only applies to data with that exact structure of collinearity.", "_____no_output_____" ], [ "### Calculate a linear regression on the global data:\nIn this next step, we're going to calculate a linear regression on our data and determine whether there is a statistically significant relationship between predictors like per capita income and percent change in democratic vote.", "_____no_output_____" ] ], [ [ "#first, formulate the model. See weather_trend.py in \"Git_101\" for a refresher on how.\n\n#extract variables that you want to use to \"predict\"\nX = np.array(votes[my_list[1:10]].values)\n#standardize data to assist in interpretation of coefficients\nX = (X - np.mean(X, axis=0)) / np.std(X, axis=0)\n\n#extract variable that we want to \"predict\"\nY = np.array(votes['pct_dem_change'].values)\n#standardize data to assist in interpretation of coefficients\nY = (Y - np.mean(Y)) / np.std(Y)\n\nlm = OLS(Y,X)\nlm_results = lm.fit().summary()", "_____no_output_____" ], [ "print(lm_results)", "_____no_output_____" ] ], [ [ "### TASK 9: Answer: which coefficients indicate a statistically significant relationship between parameter and pct_dem_change? What is your most important predictor variable? How can you tell?\n\n*Type answer here*\n", "_____no_output_____" ], [ "### TASK 10: Are any of these parameters subject to variance inflation? How can you tell?\n\n*Type answer here*\n", "_____no_output_____" ], [ "Now, let's plot our residuals to see if there are any spatial patterns in them.\n\nRemember residuals = observed - fitted values", "_____no_output_____" ] ], [ [ "#Add model residuals to our \"votes\" geopandas dataframe:\nvotes['lm_resid']=OLS(Y,X).fit().resid\n", "_____no_output_____" ], [ "sns.kdeplot(votes['lm_resid'].values, shade=True, color='slategrey')\n", "_____no_output_____" ] ], [ [ "### TASK 11: Are our residuals normally distributed with a mean of zero? What does that mean?\n\n*Type answer here*\n", "_____no_output_____" ], [ "## Penalized regression: ridge penalty\nIn penalized regression, we intentionally bias the parameter estimates to stabilize them given collinearity in the dataset.\n\nFrom https://www.analyticsvidhya.com/blog/2016/01/ridge-lasso-regression-python-complete-tutorial/\n\"As mentioned before, ridge regression performs ‘L2 regularization‘, i.e. it adds a factor of sum of squares of coefficients in the optimization objective. Thus, ridge regression optimizes the following:\n\n**Objective = RSS + α * (sum of square of coefficients)**\n\nHere, α (alpha) is the parameter which balances the amount of emphasis given to minimizing RSS vs minimizing sum of square of coefficients. α can take various values:\n\n* **α = 0:** The objective becomes same as simple linear regression. We’ll get the same coefficients as simple linear regression.\n\n* **α = ∞:** The coefficients will approach zero. Why? 
Because of infinite weightage on square of coefficients, anything less than zero will make the objective infinite.\n\n* **0 < α < ∞:** The magnitude of α will decide the weightage given to different parts of objective. The coefficients will be somewhere between 0 and ones for simple linear regression.\"\n\nIn other words, the ridge penalty shrinks coefficients such that collinear coefficients will have more similar coefficient values. It has a \"grouping\" tendency.", "_____no_output_____" ] ], [ [ "# when L2=0, Ridge equals OLS\nmodel = Ridge(alpha=1)", "_____no_output_____" ], [ "# define model evaluation method\ncv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)\n# evaluate model\nscores = cross_val_score(model, X, Y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)\n#force scores to be positive\nscores = absolute(scores)\nprint('Mean MAE: %.3f (%.3f)' % (mean(scores), std(scores)))", "_____no_output_____" ], [ "model.fit(X,Y)\n#Print out the model coefficients\nprint(model.coef_)", "_____no_output_____" ] ], [ [ "## Penalized regression: lasso penalty\n\nFrom https://www.analyticsvidhya.com/blog/2016/01/ridge-lasso-regression-python-complete-tutorial/\n\"LASSO stands for Least Absolute Shrinkage and Selection Operator. I know it doesn’t give much of an idea but there are 2 key words here – ‘absolute‘ and ‘selection‘.\n\nLets consider the former first and worry about the latter later.\n\nLasso regression performs L1 regularization, i.e. it adds a factor of sum of absolute value of coefficients in the optimization objective. Thus, lasso regression optimizes the following:\n\n**Objective = RSS + α * (sum of absolute value of coefficients)**\nHere, α (alpha) works similar to that of ridge and provides a trade-off between balancing RSS and magnitude of coefficients. Like that of ridge, α can take various values. Lets iterate it here briefly:\n\n* **α = 0:** Same coefficients as simple linear regression\n* **α = ∞:** All coefficients zero (same logic as before)\n* **0 < α < ∞:** coefficients between 0 and that of simple linear regression\n\nYes its appearing to be very similar to Ridge till now. But just hang on with me and you’ll know the difference by the time we finish.\"\n\nIn other words, the lasso penalty shrinks unimportant coefficients down towards zero, automatically \"selecting\" important predictor variables. But what if that shrunken coefficient is induced by incidental collinearity (i.e. 
is a feature of how we sampled our data)?", "_____no_output_____" ] ], [ [ "# when L1=0, Lasso equals OLS\nmodel = Lasso(alpha=0)", "_____no_output_____" ], [ "# define model evaluation method\ncv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)\n# evaluate model\nscores = cross_val_score(model, X, Y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)\n#force scores to be positive\nscores = absolute(scores)\nprint('Mean MAE: %.3f (%.3f)' % (mean(scores), std(scores)))", "_____no_output_____" ], [ "model.fit(X,Y)\n#Print out the model coefficients\nprint(model.coef_)\n#How do these compare with OLS coefficients above?", "_____no_output_____" ], [ "# when L1 approaches infinity, certain coefficients will become exactly zero, and MAE equals the variance of our response variable:\nmodel = Lasso(alpha=10000000)", "_____no_output_____" ], [ "# define model evaluation method\ncv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)\n# evaluate model\nscores = cross_val_score(model, X, Y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)\n#force scores to be positive\nscores = absolute(scores)\nprint('Mean MAE: %.3f (%.3f)' % (mean(scores), std(scores)))", "_____no_output_____" ], [ "model.fit(X,Y)\n#Print out the model coefficients\nprint(model.coef_)\n#How do these compare with OLS coefficients above?", "_____no_output_____" ] ], [ [ "### Penalized regression: elastic net penalty\n\nIn other words, the lasso penalty shrinks unimportant coefficients down towards zero, automatically \"selecting\" important predictor variables. The ridge penalty shrinks coefficients of collinear predictor variables nearer to each other, effectively partitioning the magnitude of response from the response variable between them, instead of \"arbitrarily\" assigning it to one member of the group.\n\nWe can also run a regression with a linear combination of ridge and lasso, called the elastic net, that has a cool property called \"group selection.\"\n\nThe ridge penalty still works to distribute response variance equally between members of \"groups\" of collinear predictor variables. The lasso penalty still works to shrink certain coefficients to exactly zero so they can be ignored in model formulation. The elastic net produces models that are both sparse and stable under collinearity, by shrinking parameters of members of unimportant collinear predictor variables to exactly zero:", "_____no_output_____" ] ], [ [ "# elastic net combines the L1 (lasso) and L2 (ridge) penalties; here we fit with alpha=1 and l1_ratio=0.2:\nmodel = ElasticNet(alpha=1, l1_ratio=0.2)", "_____no_output_____" ], [ "# define model evaluation method\ncv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)\n# evaluate model\nscores = cross_val_score(model, X, Y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)\n#force scores to be positive\nscores = absolute(scores)\nprint('Mean MAE: %.3f (%.3f)' % (mean(scores), std(scores)))", "_____no_output_____" ], [ "model.fit(X,Y)\n#Print out the model coefficients\nprint(model.coef_)\n#How do these compare with OLS coefficients above?", "_____no_output_____" ] ], [ [ "### TASK 12: Match these elastic net coefficients up with your original data. Do you see a logical grouping(s) between variables that have non-zero coefficients? Explain why or why not.\n*Type answer here*", "_____no_output_____" ] ], [ [ "# Task 12 scratch cell:", "_____no_output_____" ] ] ]
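One possible way to start the scratch cell above, written as a minimal sketch rather than a definitive solution: it assumes the fitted elastic net `model`, the `my_list` variable list, the `votes` geodataframe, and `pandas` imported as `pd`, all as defined in the preceding cells. It simply pairs each coefficient with its predictor name so that groups of non-zero coefficients are easy to spot.

```python
# Sketch: line elastic net coefficients up with their predictor names.
# Assumes `model`, `my_list`, and `votes` from the cells above are in scope.
import pandas as pd

feature_names = votes[my_list[1:10]].columns
coef_table = pd.Series(model.coef_, index=feature_names).sort_values()

print(coef_table)                                   # all coefficients, smallest to largest
print(coef_table[coef_table != 0].index.tolist())   # the predictors the elastic net kept
```

Reading the coefficients next to their names makes the "group selection" behaviour described above easier to see: collinear predictors that survive tend to keep similar, non-zero values, while unimportant groups are shrunk to exactly zero.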
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
e7fe2bf423f8d636a396c0d884e651182a16cbef
33,795
ipynb
Jupyter Notebook
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
3ab6ff9ea6cbabd8745904a840ae118844e3a357
[ "Apache-2.0" ]
null
null
null
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
3ab6ff9ea6cbabd8745904a840ae118844e3a357
[ "Apache-2.0" ]
null
null
null
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
3ab6ff9ea6cbabd8745904a840ae118844e3a357
[ "Apache-2.0" ]
null
null
null
51.204545
3,849
0.736144
[ [ [ "# Testinnsening av person skattemelding med næringspesifikasjon", "_____no_output_____" ], [ "Denne demoen er ment for å vise hvordan flyten for et sluttbrukersystem kan hente et utkast, gjøre endringer, validere/kontrollere det mot Skatteetatens apier, for å sende det inn via Altinn3", "_____no_output_____" ] ], [ [ "try: \n from altinn3 import *\n from skatteetaten_api import main_relay, base64_decode_response, decode_dokument\n import requests\n import base64\n import xmltodict\n import xml.dom.minidom\n from pathlib import Path\nexcept ImportError as e:\n print(\"Mangler en eller avhengighet, installer dem via pip, se requierments.txt fil for detaljer\")\n raise ImportError(e)\n\n \n#hjelpe metode om du vil se en request printet som curl \ndef print_request_as_curl(r):\n command = \"curl -X {method} -H {headers} -d '{data}' '{uri}'\"\n method = r.request.method\n uri = r.request.url\n data = r.request.body\n headers = ['\"{0}: {1}\"'.format(k, v) for k, v in r.request.headers.items()]\n headers = \" -H \".join(headers)\n print(command.format(method=method, headers=headers, data=data, uri=uri))", "_____no_output_____" ] ], [ [ "## Generer ID-porten token\nTokenet er gyldig i 300 sekunder, rekjørt denne biten om du ikke har kommet frem til Altinn3 biten før 300 sekunder ", "_____no_output_____" ] ], [ [ "idporten_header = main_relay()", "https://oidc-ver2.difi.no/idporten-oidc-provider/authorize?scope=skatteetaten%3Aformueinntekt%2Fskattemelding%20openid&acr_values=Level3&client_id=8d7adad7-b497-40d0-8897-9a9d86c95306&redirect_uri=http%3A%2F%2Flocalhost%3A12345%2Ftoken&response_type=code&state=5lCEToPZskoHXWGs-ghf4g&nonce=1638258045740949&resource=https%3A%2F%2Fmp-test.sits.no%2Fapi%2Feksterntapi%2Fformueinntekt%2Fskattemelding%2F&code_challenge=gnh30mujVP4US-TgTN7nvsGjRU9MCWYwqZ_xolRt6zI=&code_challenge_method=S256&ui_locales=nb\nAuthorization token received\n{'code': ['TBNZZzWsfhY2LgB3mk8nbvUR8KmXhngSQ5HeuDeW9NI'], 'state': ['5lCEToPZskoHXWGs-ghf4g']}\nJS : \n{'access_token': 'eyJraWQiOiJjWmswME1rbTVIQzRnN3Z0NmNwUDVGSFpMS0pzdzhmQkFJdUZiUzRSVEQ0IiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJXQTdMRE51djZiLUNpZkk0aFNtTWRmQ2dubmxSNmRLQVJvU0Q4Vkh6WGEwPSIsImlzcyI6Imh0dHBzOlwvXC9vaWRjLXZlcjIuZGlmaS5ub1wvaWRwb3J0ZW4tb2lkYy1wcm92aWRlclwvIiwiY2xpZW50X2FtciI6Im5vbmUiLCJwaWQiOiIyOTExNDUwMTMxOCIsInRva2VuX3R5cGUiOiJCZWFyZXIiLCJjbGllbnRfaWQiOiI4ZDdhZGFkNy1iNDk3LTQwZDAtODg5Ny05YTlkODZjOTUzMDYiLCJhdWQiOiJodHRwczpcL1wvbXAtdGVzdC5zaXRzLm5vXC9hcGlcL2Vrc3Rlcm50YXBpXC9mb3JtdWVpbm50ZWt0XC9za2F0dGVtZWxkaW5nXC8iLCJhY3IiOiJMZXZlbDMiLCJzY29wZSI6Im9wZW5pZCBza2F0dGVldGF0ZW46Zm9ybXVlaW5udGVrdFwvc2thdHRlbWVsZGluZyIsImV4cCI6MTYzODM0NDQ1NSwiaWF0IjoxNjM4MjU4MDU2LCJjbGllbnRfb3Jnbm8iOiI5NzQ3NjEwNzYiLCJqdGkiOiJFWVNfYVZNWU5KcUlEYmRVNG4xWjZqWmdVZ0dWLTBCc2E5TGdQNGtxOEtNIiwiY29uc3VtZXIiOnsiYXV0aG9yaXR5IjoiaXNvNjUyMy1hY3RvcmlkLXVwaXMiLCJJRCI6IjAxOTI6OTc0NzYxMDc2In19.rx_TeF6Xv3rwJwCy7DTfhmJ25UiLAQqo06qIXQqw00cg8FZhsNT1GtP40kHhGNrtXg2WfpgBSNNlnew64j9iHyEO1LlZous2GazVU0vjfJT-kWKbos2nhOaxWf0zZStvOwp4WXA9nyta6RwIF4brMa9aFmhWC0019FJPxOKFg8K7D0wHOAZtc5QLd7iL6Hysx35n4MjPEIe0uIQNP7PSRlnbTTxXOmwRJsVems0qgvcik-T3o_mkG7FCbjUCd4B22NB87fSC8HFV63lzseVZ7odldwFvJWsOMqoJEBtsVJVzcl2NeCkxJv0mXXvaOLpBbpnE9Fg8Cysd0SeXyLDkLg', 'id_token': 
'eyJraWQiOiJjWmswME1rbTVIQzRnN3Z0NmNwUDVGSFpMS0pzdzhmQkFJdUZiUzRSVEQ0IiwiYWxnIjoiUlMyNTYifQ.eyJhdF9oYXNoIjoiaHNOWVRsRTBhM0JEVEdjSGRQSXBmZyIsInN1YiI6IldBN0xETnV2NmItQ2lmSTRoU21NZGZDZ25ubFI2ZEtBUm9TRDhWSHpYYTA9IiwiYW1yIjpbIk1pbmlkLVBJTiJdLCJpc3MiOiJodHRwczpcL1wvb2lkYy12ZXIyLmRpZmkubm9cL2lkcG9ydGVuLW9pZGMtcHJvdmlkZXJcLyIsInBpZCI6IjI5MTE0NTAxMzE4IiwibG9jYWxlIjoibmIiLCJub25jZSI6IjE2MzgyNTgwNDU3NDA5NDkiLCJzaWQiOiIyN1ZQSUp3cXZrZHlvc0ZBZ0tYMGZsUk9CdHZRTFFFOFRxQl9HZlNfMlhzIiwiYXVkIjoiOGQ3YWRhZDctYjQ5Ny00MGQwLTg4OTctOWE5ZDg2Yzk1MzA2IiwiYWNyIjoiTGV2ZWwzIiwiYXV0aF90aW1lIjoxNjM4MjU4MDU1LCJleHAiOjE2MzgyNTgxNzYsImlhdCI6MTYzODI1ODA1NiwianRpIjoiRXpZWVJhTmRmZm5SeVNzNFVfdE9UbVZsTFRvSURZemlXTS1zVkFMclNmYyJ9.nuNYzanJrliYENhag64WsAe-m5DvZ1uKszCj8akRck-_-FxNH59IwamK6cRP4TcGTM3a5nung4paWkNvfoQOWQajbU51tqffMJzG53qyMDwWTETo7_YotTS4TkhM8aNGdZykch6K5toADEDZzp3IHXXL5-ZAZ8nmcpJOP4tgvACYVATcFK8bbvJ79IPIUKuk_lBiNOckj0PyFpAkIuqjhFAFTsYqcKbpD6_w0RSHUty1cQ4pvsQIXhsli6phpBbefrx3Wm2ArXNRV9eBBS1NaBSnCtVs6ze3fRJs_pKsbFEgIpuxrDK0ICAZROONGDx8631G7_co4iedrNCYD11rfg', 'token_type': 'Bearer', 'expires_in': 86399, 'scope': 'openid skatteetaten:formueinntekt/skattemelding'}\nThe token is good, expires in 86399 seconds\n\nBearer eyJraWQiOiJjWmswME1rbTVIQzRnN3Z0NmNwUDVGSFpMS0pzdzhmQkFJdUZiUzRSVEQ0IiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJXQTdMRE51djZiLUNpZkk0aFNtTWRmQ2dubmxSNmRLQVJvU0Q4Vkh6WGEwPSIsImlzcyI6Imh0dHBzOlwvXC9vaWRjLXZlcjIuZGlmaS5ub1wvaWRwb3J0ZW4tb2lkYy1wcm92aWRlclwvIiwiY2xpZW50X2FtciI6Im5vbmUiLCJwaWQiOiIyOTExNDUwMTMxOCIsInRva2VuX3R5cGUiOiJCZWFyZXIiLCJjbGllbnRfaWQiOiI4ZDdhZGFkNy1iNDk3LTQwZDAtODg5Ny05YTlkODZjOTUzMDYiLCJhdWQiOiJodHRwczpcL1wvbXAtdGVzdC5zaXRzLm5vXC9hcGlcL2Vrc3Rlcm50YXBpXC9mb3JtdWVpbm50ZWt0XC9za2F0dGVtZWxkaW5nXC8iLCJhY3IiOiJMZXZlbDMiLCJzY29wZSI6Im9wZW5pZCBza2F0dGVldGF0ZW46Zm9ybXVlaW5udGVrdFwvc2thdHRlbWVsZGluZyIsImV4cCI6MTYzODM0NDQ1NSwiaWF0IjoxNjM4MjU4MDU2LCJjbGllbnRfb3Jnbm8iOiI5NzQ3NjEwNzYiLCJqdGkiOiJFWVNfYVZNWU5KcUlEYmRVNG4xWjZqWmdVZ0dWLTBCc2E5TGdQNGtxOEtNIiwiY29uc3VtZXIiOnsiYXV0aG9yaXR5IjoiaXNvNjUyMy1hY3RvcmlkLXVwaXMiLCJJRCI6IjAxOTI6OTc0NzYxMDc2In19.rx_TeF6Xv3rwJwCy7DTfhmJ25UiLAQqo06qIXQqw00cg8FZhsNT1GtP40kHhGNrtXg2WfpgBSNNlnew64j9iHyEO1LlZous2GazVU0vjfJT-kWKbos2nhOaxWf0zZStvOwp4WXA9nyta6RwIF4brMa9aFmhWC0019FJPxOKFg8K7D0wHOAZtc5QLd7iL6Hysx35n4MjPEIe0uIQNP7PSRlnbTTxXOmwRJsVems0qgvcik-T3o_mkG7FCbjUCd4B22NB87fSC8HFV63lzseVZ7odldwFvJWsOMqoJEBtsVJVzcl2NeCkxJv0mXXvaOLpBbpnE9Fg8Cysd0SeXyLDkLg\n" ] ], [ [ "# Hent utkast og gjeldende\nHer legger vi inn fødselsnummeret vi logget oss inn med, Dersom du velger et annet fødselsnummer så må den du logget på med ha tilgang til skattemeldingen du ønsker å hente\n\n#### Parten nedenfor er brukt for internt test, pass på bruk deres egne testparter når dere tester\n\n01014700230 har fått en myndighetsfastsetting\n\nLegg merke til `/api/skattemelding/v2/` biten av url'n er ny for 2021", "_____no_output_____" ] ], [ [ "s = requests.Session()\ns.headers = dict(idporten_header)\nfnr=\"29114501318\" #oppdater med test fødselsnummerene du har fått tildelt", "_____no_output_____" ] ], [ [ "### Utkast", "_____no_output_____" ] ], [ [ "url_utkast = f'https://mp-test.sits.no/api/skattemelding/v2/utkast/2021/{fnr}'\nr = s.get(url_utkast)\nr", "_____no_output_____" ], [ "print(r.text)", "<skattemeldingOgNaeringsspesifikasjonforespoerselResponse 
xmlns=\"no:skatteetaten:fastsetting:formueinntekt:skattemeldingognaeringsspesifikasjon:forespoersel:response:v2\"><dokumenter><skattemeldingdokument><id>SKI:138:41694</id><encoding>utf-8</encoding><content>PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c2thdHRlbWVsZGluZyB4bWxucz0idXJuOm5vOnNrYXR0ZWV0YXRlbjpmYXN0c2V0dGluZzpmb3JtdWVpbm50ZWt0OnNrYXR0ZW1lbGRpbmc6ZWtzdGVybjp2OSI+PHBhcnRzcmVmZXJhbnNlPjIyMjU3NjY2PC9wYXJ0c3JlZmVyYW5zZT48aW5udGVrdHNhYXI+MjAyMTwvaW5udGVrdHNhYXI+PGJhbmtMYWFuT2dGb3JzaWtyaW5nPjxrb250bz48aWQ+NTg0OGRjYjE1Y2I1YzkyMGNiMWFhMDc0Yzg2NjA5OWZlNTg2MTY0YjwvaWQ+PGJhbmtlbnNOYXZuPjx0ZWtzdD5TT0ZJRU1ZUiBPRyBCUkVWSUsgUkVWSVNKT048L3Rla3N0PjwvYmFua2Vuc05hdm4+PG9yZ2FuaXNhc2pvbnNudW1tZXI+PG9yZ2FuaXNhc2pvbnNudW1tZXI+OTEwOTMxNDE1PC9vcmdhbmlzYXNqb25zbnVtbWVyPjwvb3JnYW5pc2Fzam9uc251bW1lcj48a29udG9udW1tZXI+PHRla3N0Pjg4MDg4MTY1MTIyPC90ZWtzdD48L2tvbnRvbnVtbWVyPjxpbm5za3VkZD48YmVsb2VwPjxiZWxvZXBJTm9rPjxiZWxvZXBTb21IZWx0YWxsPjY5NTcwMTwvYmVsb2VwU29tSGVsdGFsbD48L2JlbG9lcElOb2s+PGJlbG9lcElWYWx1dGE+PGJlbG9lcD42OTU3MDE8L2JlbG9lcD48L2JlbG9lcElWYWx1dGE+PHZhbHV0YWtvZGU+PHZhbHV0YWtvZGU+Tk9LPC92YWx1dGFrb2RlPjwvdmFsdXRha29kZT48dmFsdXRha3Vycz48dmFsdXRha3Vycz4xPC92YWx1dGFrdXJzPjwvdmFsdXRha3Vycz48L2JlbG9lcD48L2lubnNrdWRkPjxvcHB0amVudGVSZW50ZXI+PGJlbG9lcD48YmVsb2VwSU5vaz48YmVsb2VwU29tSGVsdGFsbD45Njk2PC9iZWxvZXBTb21IZWx0YWxsPjwvYmVsb2VwSU5vaz48YmVsb2VwSVZhbHV0YT48YmVsb2VwPjk2OTY8L2JlbG9lcD48L2JlbG9lcElWYWx1dGE+PHZhbHV0YWtvZGU+PHZhbHV0YWtvZGU+Tk9LPC92YWx1dGFrb2RlPjwvdmFsdXRha29kZT48dmFsdXRha3Vycz48dmFsdXRha3Vycz4xPC92YWx1dGFrdXJzPjwvdmFsdXRha3Vycz48L2JlbG9lcD48L29wcHRqZW50ZVJlbnRlcj48L2tvbnRvPjwvYmFua0xhYW5PZ0ZvcnNpa3Jpbmc+PGFyYmVpZFRyeWdkT2dQZW5zam9uPjxsb2Vubk9nVGlsc3ZhcmVuZGVZdGVsc2VyPjxhcmJlaWRzZ2l2ZXI+PGlkPjAwZWU3MWU1YjFkMTRmYWVjZmMxNzM1Y2ExMTBkYjdjMjcwMTdkN2E8L2lkPjxuYXZuPjxvcmdhbmlzYXNqb25zbmF2bj5UUkVOR0VSRUlEIE9HIEFTSyBSRVZJU0pPTjwvb3JnYW5pc2Fzam9uc25hdm4+PC9uYXZuPjxzYW1sZWRlWXRlbHNlckZyYUFyYmVpZHNnaXZlclBlckJlaGFuZGxpbmdzYXJ0PjxpZD44Y2E5MzJlM2MwMTBkOTdhNmVmMmU1YzhkYmVlZmMyOTIzOWRiZDQ0PC9pZD48YmVsb2VwPjxiZWxvZXA+PGJlbG9lcElOb2s+PGJlbG9lcFNvbUhlbHRhbGw+NTMzNDQ4PC9iZWxvZXBTb21IZWx0YWxsPjwvYmVsb2VwSU5vaz48YmVsb2VwSVZhbHV0YT48YmVsb2VwPjUzMzQ0ODwvYmVsb2VwPjwvYmVsb2VwSVZhbHV0YT48dmFsdXRha29kZT48dmFsdXRha29kZT5OT0s8L3ZhbHV0YWtvZGU+PC92YWx1dGFrb2RlPjx2YWx1dGFrdXJzPjx2YWx1dGFrdXJzPjE8L3ZhbHV0YWt1cnM+PC92YWx1dGFrdXJzPjwvYmVsb2VwPjwvYmVsb2VwPjxiZWhhbmRsaW5nc2FydD48dGVrc3Q+TE9OTjwvdGVrc3Q+PC9iZWhhbmRsaW5nc2FydD48L3NhbWxlZGVZdGVsc2VyRnJhQXJiZWlkc2dpdmVyUGVyQmVoYW5kbGluZ3NhcnQ+PG9yZ2FuaXNhc2pvbnNudW1tZXI+PG9yZ2FuaXNhc2pvbnNudW1tZXI+OTEwOTE5NjYwPC9vcmdhbmlzYXNqb25zbnVtbWVyPjwvb3JnYW5pc2Fzam9uc251bW1lcj48L2FyYmVpZHNnaXZlcj48L2xvZW5uT2dUaWxzdmFyZW5kZVl0ZWxzZXI+PG1pbnN0ZWZyYWRyYWdPZ0tvc3RuYWRlcj48aWQ+TUlOU1RFRlJBRFJBR19PR19LT1NUTkFERVJfS05ZVFRFVF9USUxfQVJCRUlEX09HX0FOTkVOX0lOTlRFS1Q8L2lkPjxtaW5zdGVmcmFkcmFnSUlubnRla3Q+PGZyYWRyYWdzYmVyZXR0aWdldEJlbG9lcD48YmVsb2VwPjxiZWxvZXBTb21IZWx0YWxsPjEwNjc1MDwvYmVsb2VwU29tSGVsdGFsbD48L2JlbG9lcD48L2ZyYWRyYWdzYmVyZXR0aWdldEJlbG9lcD48YmVsb2VwVXRlbkhlbnN5blRpbFZhbGd0UHJpb3JpdGVydEZyYWRyYWdzdHlwZT48YmVsb2VwPjxiZWxvZXBTb21IZWx0YWxsPjEwNjc1MDwvYmVsb2VwU29tSGVsdGFsbD48L2JlbG9lcD48L2JlbG9lcFV0ZW5IZW5zeW5UaWxWYWxndFByaW9yaXRlcnRGcmFkcmFnc3R5cGU+PC9taW5zdGVmcmFkcmFnSUlubnRla3Q+PC9taW5zdGVmcmFkcmFnT2dLb3N0bmFkZXI+PC9hcmJlaWRUcnlnZE9nUGVuc2pvbj48c2thdHRlbWVsZGluZ09wcHJldHRldD48YnJ1a2VyaWRlbnRpZmlrYXRvcj5pa2tlLWltcGxlbWVudGVydDwvYnJ1a2VyaWRlbnRpZmlrYXRvcj48YnJ1a2VyaWRlbnRpZmlrYXRvcnR5cGU+c3lzdGVtaWRlbnRpZmlrYXRvcjwvYnJ1a2VyaWRlbnRpZmlrYXRvcnR5cGU+PG9wcHJldHRldERhdG8+MjAyMS0xMS0zMFQwNzozNzoxNi4zOTE4M
jhaPC9vcHByZXR0ZXREYXRvPjwvc2thdHRlbWVsZGluZ09wcHJldHRldD48L3NrYXR0ZW1lbGRpbmc+</content><type>skattemeldingPersonligUtkast</type></skattemeldingdokument></dokumenter></skattemeldingOgNaeringsspesifikasjonforespoerselResponse>\n" ] ], [ [ "### Gjeldende", "_____no_output_____" ] ], [ [ "url_gjeldende = f'https://mp-test.sits.no/api/skattemelding/v2/2021/{fnr}'\nr_gjeldende = s.get(url_gjeldende)\nr_gjeldende", "_____no_output_____" ] ], [ [ "#### Fastsatt\nHer får en _http 404_ om vedkommende ikke har noen fastsetting, rekjørt denne etter du har sendt inn og fått tilbakemdling i Altinn at den har blitt behandlet, du skal nå ha en fastsatt skattemelding om den har blitt sent inn som Komplett", "_____no_output_____" ] ], [ [ "url_fastsatt = f'https://mp-test.sits.no/api/skattemelding/v2/fastsatt/2021/{fnr}'\nr_fastsatt = s.get(url_fastsatt)\nr_fastsatt", "_____no_output_____" ] ], [ [ "## Svar fra hent gjeldende \n\n### Gjeldende dokument referanse: \nI responsen på alle api kallene, være seg utkast/fastsatt eller gjeldene, så følger det med en dokumentreferanse. \nFor å kalle valider tjenesten, er en avhengig av å bruke korrekt referanse til gjeldende skattemelding. \n\nCellen nedenfor henter ut gjeldende dokumentrefranse printer ut responsen fra hent gjeldende kallet ", "_____no_output_____" ] ], [ [ "sjekk_svar = r_gjeldende\n\nsme_og_naering_respons = xmltodict.parse(sjekk_svar.text)\nskattemelding_base64 = sme_og_naering_respons[\"skattemeldingOgNaeringsspesifikasjonforespoerselResponse\"][\"dokumenter\"][\"skattemeldingdokument\"]\nsme_base64 = skattemelding_base64[\"content\"]\ndokref = sme_og_naering_respons[\"skattemeldingOgNaeringsspesifikasjonforespoerselResponse\"][\"dokumenter\"]['skattemeldingdokument']['id']\ndecoded_sme_xml = decode_dokument(skattemelding_base64)\nsme_utkast = xml.dom.minidom.parseString(decoded_sme_xml[\"content\"]).toprettyxml()\n\nprint(f\"Responsen fra hent gjeldende ser slik ut, gjeldende dokumentrerefanse er {dokref}\\n\")\nprint(xml.dom.minidom.parseString(sjekk_svar.text).toprettyxml())\n", "Responsen fra hent gjeldende ser slik ut, gjeldende dokumentrerefanse er SKI:138:41694\n\n<?xml version=\"1.0\" ?>\n<skattemeldingOgNaeringsspesifikasjonforespoerselResponse 
xmlns=\"no:skatteetaten:fastsetting:formueinntekt:skattemeldingognaeringsspesifikasjon:forespoersel:response:v2\">\n\t<dokumenter>\n\t\t<skattemeldingdokument>\n\t\t\t<id>SKI:138:41694</id>\n\t\t\t<encoding>utf-8</encoding>\n\t\t\t<content>PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c2thdHRlbWVsZGluZyB4bWxucz0idXJuOm5vOnNrYXR0ZWV0YXRlbjpmYXN0c2V0dGluZzpmb3JtdWVpbm50ZWt0OnNrYXR0ZW1lbGRpbmc6ZWtzdGVybjp2OSI+PHBhcnRzcmVmZXJhbnNlPjIyMjU3NjY2PC9wYXJ0c3JlZmVyYW5zZT48aW5udGVrdHNhYXI+MjAyMTwvaW5udGVrdHNhYXI+PGJhbmtMYWFuT2dGb3JzaWtyaW5nPjxrb250bz48aWQ+NTg0OGRjYjE1Y2I1YzkyMGNiMWFhMDc0Yzg2NjA5OWZlNTg2MTY0YjwvaWQ+PGJhbmtlbnNOYXZuPjx0ZWtzdD5TT0ZJRU1ZUiBPRyBCUkVWSUsgUkVWSVNKT048L3Rla3N0PjwvYmFua2Vuc05hdm4+PG9yZ2FuaXNhc2pvbnNudW1tZXI+PG9yZ2FuaXNhc2pvbnNudW1tZXI+OTEwOTMxNDE1PC9vcmdhbmlzYXNqb25zbnVtbWVyPjwvb3JnYW5pc2Fzam9uc251bW1lcj48a29udG9udW1tZXI+PHRla3N0Pjg4MDg4MTY1MTIyPC90ZWtzdD48L2tvbnRvbnVtbWVyPjxpbm5za3VkZD48YmVsb2VwPjxiZWxvZXBJTm9rPjxiZWxvZXBTb21IZWx0YWxsPjY5NTcwMTwvYmVsb2VwU29tSGVsdGFsbD48L2JlbG9lcElOb2s+PGJlbG9lcElWYWx1dGE+PGJlbG9lcD42OTU3MDE8L2JlbG9lcD48L2JlbG9lcElWYWx1dGE+PHZhbHV0YWtvZGU+PHZhbHV0YWtvZGU+Tk9LPC92YWx1dGFrb2RlPjwvdmFsdXRha29kZT48dmFsdXRha3Vycz48dmFsdXRha3Vycz4xPC92YWx1dGFrdXJzPjwvdmFsdXRha3Vycz48L2JlbG9lcD48L2lubnNrdWRkPjxvcHB0amVudGVSZW50ZXI+PGJlbG9lcD48YmVsb2VwSU5vaz48YmVsb2VwU29tSGVsdGFsbD45Njk2PC9iZWxvZXBTb21IZWx0YWxsPjwvYmVsb2VwSU5vaz48YmVsb2VwSVZhbHV0YT48YmVsb2VwPjk2OTY8L2JlbG9lcD48L2JlbG9lcElWYWx1dGE+PHZhbHV0YWtvZGU+PHZhbHV0YWtvZGU+Tk9LPC92YWx1dGFrb2RlPjwvdmFsdXRha29kZT48dmFsdXRha3Vycz48dmFsdXRha3Vycz4xPC92YWx1dGFrdXJzPjwvdmFsdXRha3Vycz48L2JlbG9lcD48L29wcHRqZW50ZVJlbnRlcj48L2tvbnRvPjwvYmFua0xhYW5PZ0ZvcnNpa3Jpbmc+PGFyYmVpZFRyeWdkT2dQZW5zam9uPjxsb2Vubk9nVGlsc3ZhcmVuZGVZdGVsc2VyPjxhcmJlaWRzZ2l2ZXI+PGlkPjAwZWU3MWU1YjFkMTRmYWVjZmMxNzM1Y2ExMTBkYjdjMjcwMTdkN2E8L2lkPjxuYXZuPjxvcmdhbmlzYXNqb25zbmF2bj5UUkVOR0VSRUlEIE9HIEFTSyBSRVZJU0pPTjwvb3JnYW5pc2Fzam9uc25hdm4+PC9uYXZuPjxzYW1sZWRlWXRlbHNlckZyYUFyYmVpZHNnaXZlclBlckJlaGFuZGxpbmdzYXJ0PjxpZD44Y2E5MzJlM2MwMTBkOTdhNmVmMmU1YzhkYmVlZmMyOTIzOWRiZDQ0PC9pZD48YmVsb2VwPjxiZWxvZXA+PGJlbG9lcElOb2s+PGJlbG9lcFNvbUhlbHRhbGw+NTMzNDQ4PC9iZWxvZXBTb21IZWx0YWxsPjwvYmVsb2VwSU5vaz48YmVsb2VwSVZhbHV0YT48YmVsb2VwPjUzMzQ0ODwvYmVsb2VwPjwvYmVsb2VwSVZhbHV0YT48dmFsdXRha29kZT48dmFsdXRha29kZT5OT0s8L3ZhbHV0YWtvZGU+PC92YWx1dGFrb2RlPjx2YWx1dGFrdXJzPjx2YWx1dGFrdXJzPjE8L3ZhbHV0YWt1cnM+PC92YWx1dGFrdXJzPjwvYmVsb2VwPjwvYmVsb2VwPjxiZWhhbmRsaW5nc2FydD48dGVrc3Q+TE9OTjwvdGVrc3Q+PC9iZWhhbmRsaW5nc2FydD48L3NhbWxlZGVZdGVsc2VyRnJhQXJiZWlkc2dpdmVyUGVyQmVoYW5kbGluZ3NhcnQ+PG9yZ2FuaXNhc2pvbnNudW1tZXI+PG9yZ2FuaXNhc2pvbnNudW1tZXI+OTEwOTE5NjYwPC9vcmdhbmlzYXNqb25zbnVtbWVyPjwvb3JnYW5pc2Fzam9uc251bW1lcj48L2FyYmVpZHNnaXZlcj48L2xvZW5uT2dUaWxzdmFyZW5kZVl0ZWxzZXI+PG1pbnN0ZWZyYWRyYWdPZ0tvc3RuYWRlcj48aWQ+TUlOU1RFRlJBRFJBR19PR19LT1NUTkFERVJfS05ZVFRFVF9USUxfQVJCRUlEX09HX0FOTkVOX0lOTlRFS1Q8L2lkPjxtaW5zdGVmcmFkcmFnSUlubnRla3Q+PGZyYWRyYWdzYmVyZXR0aWdldEJlbG9lcD48YmVsb2VwPjxiZWxvZXBTb21IZWx0YWxsPjEwNjc1MDwvYmVsb2VwU29tSGVsdGFsbD48L2JlbG9lcD48L2ZyYWRyYWdzYmVyZXR0aWdldEJlbG9lcD48YmVsb2VwVXRlbkhlbnN5blRpbFZhbGd0UHJpb3JpdGVydEZyYWRyYWdzdHlwZT48YmVsb2VwPjxiZWxvZXBTb21IZWx0YWxsPjEwNjc1MDwvYmVsb2VwU29tSGVsdGFsbD48L2JlbG9lcD48L2JlbG9lcFV0ZW5IZW5zeW5UaWxWYWxndFByaW9yaXRlcnRGcmFkcmFnc3R5cGU+PC9taW5zdGVmcmFkcmFnSUlubnRla3Q+PC9taW5zdGVmcmFkcmFnT2dLb3N0bmFkZXI+PC9hcmJlaWRUcnlnZE9nUGVuc2pvbj48c2thdHRlbWVsZGluZ09wcHJldHRldD48YnJ1a2VyaWRlbnRpZmlrYXRvcj5pa2tlLWltcGxlbWVudGVydDwvYnJ1a2VyaWRlbnRpZmlrYXRvcj48YnJ1a2VyaWRlbnRpZmlrYXRvcnR5cGU+c3lzdGVtaWRlbnRpZmlrYXRvcjwvYnJ1a2VyaWRlbnRpZmlrYXRvcnR5cGU+PG9wcHJldHRldERhdG8
+MjAyMS0xMS0zMFQwNzozNzoxNi4zOTE4MjhaPC9vcHByZXR0ZXREYXRvPjwvc2thdHRlbWVsZGluZ09wcHJldHRldD48L3NrYXR0ZW1lbGRpbmc+</content>\n\t\t\t<type>skattemeldingPersonligUtkast</type>\n\t\t</skattemeldingdokument>\n\t</dokumenter>\n</skattemeldingOgNaeringsspesifikasjonforespoerselResponse>\n\n" ], [ "with open(\"../../../src/resources/eksempler/v2/Naeringspesifikasjon-enk-v2_etterBeregning.xml\", 'r') as f:\n naering_enk_xml = f.read()\n \ninnsendingstype = \"ikkeKomplett\"\nnaeringsspesifikasjoner_enk_b64 = base64.b64encode(naering_enk_xml.encode(\"utf-8\"))\nnaeringsspesifikasjoner_enk_b64 = str(naeringsspesifikasjoner_enk_b64.decode(\"utf-8\"))\nskattemeldingPersonligSkattepliktig_base64=sme_base64 #bruker utkastet uten noen endringer\nnaeringsspesifikasjoner_base64=naeringsspesifikasjoner_enk_b64\ndok_ref=dokref\n\nvalider_konvlutt_v2 = \"\"\"\n<?xml version=\"1.0\" encoding=\"utf-8\" ?>\n<skattemeldingOgNaeringsspesifikasjonRequest xmlns=\"no:skatteetaten:fastsetting:formueinntekt:skattemeldingognaeringsspesifikasjon:request:v2\">\n <dokumenter>\n <dokument>\n <type>skattemeldingPersonlig</type>\n <encoding>utf-8</encoding>\n <content>{sme_base64}</content>\n </dokument>\n <dokument>\n <type>naeringsspesifikasjon</type>\n <encoding>utf-8</encoding>\n <content>{naeringsspeifikasjon_base64}</content>\n </dokument>\n </dokumenter>\n <dokumentreferanseTilGjeldendeDokument>\n <dokumenttype>skattemeldingPersonlig</dokumenttype>\n <dokumentidentifikator>{dok_ref}</dokumentidentifikator>\n </dokumentreferanseTilGjeldendeDokument>\n <inntektsaar>2021</inntektsaar>\n <innsendingsinformasjon>\n <innsendingstype>{innsendingstype}</innsendingstype>\n <opprettetAv>TurboSkatt</opprettetAv>\n </innsendingsinformasjon>\n</skattemeldingOgNaeringsspesifikasjonRequest>\n\"\"\".replace(\"\\n\",\"\")\n\n\nnaering_enk = valider_konvlutt_v2.format(\n sme_base64=skattemeldingPersonligSkattepliktig_base64,\n naeringsspeifikasjon_base64=naeringsspesifikasjoner_base64,\n dok_ref=dok_ref,\n innsendingstype=innsendingstype)", "_____no_output_____" ] ], [ [ "# Valider utkast sme med næringsopplysninger", "_____no_output_____" ] ], [ [ "def valider_sme(payload):\n url_valider = f'https://mp-test.sits.no/api/skattemelding/v2/valider/2021/{fnr}'\n header = dict(idporten_header)\n header[\"Content-Type\"] = \"application/xml\"\n return s.post(url_valider, headers=header, data=payload)\n\n\nvalider_respons = valider_sme(naering_enk)\nresultatAvValidering = xmltodict.parse(valider_respons.text)[\"skattemeldingOgNaeringsspesifikasjonResponse\"][\"resultatAvValidering\"]\n\nif valider_respons:\n print(resultatAvValidering)\n print()\n print(xml.dom.minidom.parseString(valider_respons.text).toprettyxml())\nelse:\n print(valider_respons.status_code, valider_respons.headers, valider_respons.text)", "validertMedFeil\n\n<?xml version=\"1.0\" ?>\n<skattemeldingOgNaeringsspesifikasjonResponse xmlns=\"no:skatteetaten:fastsetting:formueinntekt:skattemeldingognaeringsspesifikasjon:response:v2\">\n\t<avvikVedValidering>\n\t\t<avvik>\n\t\t\t<avvikstype>xmlValideringsfeilPaaNaeringsopplysningene</avvikstype>\n\t\t</avvik>\n\t</avvikVedValidering>\n\t<resultatAvValidering>validertMedFeil</resultatAvValidering>\n\t<aarsakTilValidertMedFeil>xmlValideringsfeilPaaNaeringsopplysningene</aarsakTilValidertMedFeil>\n</skattemeldingOgNaeringsspesifikasjonResponse>\n\n" ] ], [ [ "# Altinn 3", "_____no_output_____" ], [ "1. Hent Altinn Token\n2. Oppretter en ny instans av skjemaet\n3. Last opp vedlegg til skattemeldingen\n4. 
Oppdater skattemelding xml med referanse til vedlegg_id fra altinn3.\n5. Laster opp skattemeldingen og næringsopplysninger som et vedlegg", "_____no_output_____" ] ], [ [ "#1\naltinn3_applikasjon = \"skd/formueinntekt-skattemelding-v2\"\naltinn_header = hent_altinn_token(idporten_header)\n#2\ninstans_data = opprett_ny_instans_med_inntektsaar(altinn_header, fnr, \"2021\", appnavn=altinn3_applikasjon)", "{'Authorization': 'Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IjI3RTAyRTk4M0FCMUEwQzZEQzFBRjAyN0YyMUZFMUVFNENEQjRGRjEiLCJ4NXQiOiJKLUF1bURxeG9NYmNHdkFuOGhfaDdremJUX0UiLCJ0eXAiOiJKV1QifQ.eyJuYW1laWQiOiI4NTMzNyIsInVybjphbHRpbm46dXNlcmlkIjoiODUzMzciLCJ1cm46YWx0aW5uOnVzZXJuYW1lIjoibXVuaGplbSIsInVybjphbHRpbm46cGFydHlpZCI6NTAxMTA0OTUsInVybjphbHRpbm46YXV0aGVudGljYXRlbWV0aG9kIjoiTm90RGVmaW5lZCIsInVybjphbHRpbm46YXV0aGxldmVsIjozLCJjbGllbnRfYW1yIjoibm9uZSIsInBpZCI6IjI5MTE0NTAxMzE4IiwidG9rZW5fdHlwZSI6IkJlYXJlciIsImNsaWVudF9pZCI6IjhkN2FkYWQ3LWI0OTctNDBkMC04ODk3LTlhOWQ4NmM5NTMwNiIsImFjciI6IkxldmVsMyIsInNjb3BlIjoib3BlbmlkIHNrYXR0ZWV0YXRlbjpmb3JtdWVpbm50ZWt0L3NrYXR0ZW1lbGRpbmciLCJleHAiOjE2MzgzNDQ0NTUsImlhdCI6MTYzODI2NTExMywiY2xpZW50X29yZ25vIjoiOTc0NzYxMDc2IiwiY29uc3VtZXIiOnsiYXV0aG9yaXR5IjoiaXNvNjUyMy1hY3RvcmlkLXVwaXMiLCJJRCI6IjAxOTI6OTc0NzYxMDc2In0sImlzcyI6Imh0dHBzOi8vcGxhdGZvcm0udHQwMi5hbHRpbm4ubm8vYXV0aGVudGljYXRpb24vYXBpL3YxL29wZW5pZC8iLCJuYmYiOjE2MzgyNjUxMTN9.BYvu4hWxhFDTQSXrsxXA5EKBRUpt1v71AP22YkVCOhfoxhMqbes0x9QpKw6PQ6Xm8PtokJpWB-HeuPkG8nHPgQGMY4HV1_zlfxKjXQjYqYlPVT8tCwVJUaNUOcRHaA7zrEytMPUcohuIfRrBPMAyXF3fnETSm26YhLlHNqAWz5N5g6_GIiixDVzydp8WY3IWSb5U0u3zPEUgoSqqJr3DA9pUzhJrevusU386P9D57_Zm2ZRS3QZ4hvRSAmDjkfntTt0prnXmHFG1Qqv0BVdgmNRAzlgHVyH0KJVCrsFUU8_CxyKK6j4lvuDc4ELvvscypWdvTc1I_KFuXoGhQbY7cQ'}\n" ] ], [ [ "## Last opp skattemelding\n### Last først opp vedlegg som hører til skattemeldingen\nEksemplet nedenfor gjelder kun generelle vedlegg for skattemeldingen, \n\n ```xml\n <vedlegg>\n <id>En unik id levert av altinn når du laster opp vedleggsfilen</id>\n <vedleggsfil>\n <opprinneligFilnavn><tekst>vedlegg_eksempel_sirius_stjerne.jpg</tekst></opprinneligFilnavn>\n <opprinneligFiltype><tekst>jpg</tekst></opprinneligFiltype>\n </vedleggsfil>\n <vedleggstype>dokumentertMarkedsverdi</vedleggstype>\n </vedlegg>\n```\n\nmen samme prinsippet gjelder for andre kort som kan ha vedlegg. 
Husk at rekkefølgen på xml elementene har noe å si for å få validert xml'n", "_____no_output_____" ] ], [ [ "vedleggfil = \"vedlegg_eksempel_sirius_stjerne.jpg\"\nopplasting_respons = last_opp_vedlegg(instans_data, \n altinn_header, \n vedleggfil, \n content_type=\"image/jpeg\", \n data_type=\"skattemelding-vedlegg\",\n appnavn=altinn3_applikasjon)\nvedlegg_id = opplasting_respons.json()[\"id\"]\n\n\n# Så må vi modifisere skattemeldingen slik at vi får med vedlegg idn inn skattemelding xml'n\nwith open(\"../../../src/resources/eksempler/v2/personligSkattemeldingV9EksempelVedlegg.xml\") as f:\n filnavn = Path(vedleggfil).name\n filtype = \"jpg\"\n partsnummer = xmltodict.parse(decoded_sme_xml[\"content\"])[\"skattemelding\"][\"partsreferanse\"]\n \n sme_xml = f.read().format(partsnummer=partsnummer, vedlegg_id=vedlegg_id, filnavn=filnavn, filtype=filtype)\n sme_xml_b64 = base64.b64encode(sme_xml.encode(\"utf-8\"))\n sme_xml_b64 = str(sme_xml_b64.decode(\"utf-8\"))\n \n#La oss validere at skattemeldingen fortsatt validerer mot valideringstjenesten\nnaering_enk_med_vedlegg = valider_konvlutt_v2.format(sme_base64=sme_xml_b64,\n naeringsspeifikasjon_base64=naeringsspesifikasjoner_base64,\n dok_ref=dok_ref,\n innsendingstype=innsendingstype)\n\nvalider_respons = valider_sme(naering_enk_med_vedlegg)\nresultat_av_validering_med_vedlegg = xmltodict.parse(valider_respons.text)[\"skattemeldingOgNaeringsspesifikasjonResponse\"][\"resultatAvValidering\"]\nresultat_av_validering_med_vedlegg", "_____no_output_____" ], [ "#Last opp skattemeldingen\nreq_send_inn = last_opp_skattedata(instans_data, altinn_header, \n xml=naering_enk_med_vedlegg, \n data_type=\"skattemeldingOgNaeringsspesifikasjon\",\n appnavn=altinn3_applikasjon)\nreq_send_inn", "_____no_output_____" ] ], [ [ "### Sett statusen klar til henting av skatteetaten. ", "_____no_output_____" ] ], [ [ "req_bekreftelse = endre_prosess_status(instans_data, altinn_header, \"next\", appnavn=altinn3_applikasjon)\nreq_bekreftelse = endre_prosess_status(instans_data, altinn_header, \"next\", appnavn=altinn3_applikasjon)\nreq_bekreftelse", "_____no_output_____" ] ], [ [ "### Sjekk status på altinn3 instansen om skatteetaten har hentet instansen.\nDenne statusen vil til å begynne med ha verdien \"none\". Oppdatering skjer når skatteetaten har behandlet innsendingen.\n- Ved **komplett**-innsending vil status oppdateres til Godkjent/Avvist når innsendingen er behandlet.\n- Ved **ikkeKomplett**-innsending vil status oppdateres til Tilgjengelig når innsendingen er behandlet. Etter innsending via SME vil den oppdateres til Godkjent/Avvist etter behandling.", "_____no_output_____" ] ], [ [ "instans_etter_bekreftelse = hent_instans(instans_data, altinn_header, appnavn=altinn3_applikasjon)\nresponse_data = instans_etter_bekreftelse.json()\nprint(f\"Instans-status: {response_data['status']['substatus']}\")", "Instans-status: None\n" ] ], [ [ "### Se innsending i Altinn\n\nTa en slurk av kaffen og klapp deg selv på ryggen, du har nå sendt inn, la byråkratiet gjøre sin ting... og det tar litt tid. Pt så sjekker skatteeaten hos Altinn3 hvert 30 sek om det har kommet noen nye innsendinger. Skulle det gå mer enn et par minutter så har det mest sansynlig feilet. \n\nFør dere feilmelder noe til skatteetaten så må dere minimum ha med enten en korrelasjons-id eller instans-id for at vi skal kunne feilsøke", "_____no_output_____" ], [ "# Ikke komplett skattemelding\n1. 
Når du har fått svar i altinn innboksen, så kan du gå til \n https://skatt-sbstest.sits.no/web/skattemeldingen/2021\n2. Her vil du se næringsinntekter overført fra skattemeldingen\n3. Når du har sendt inn i SME så vil du kunne se i altinn instansen at den har blitt avsluttet\n4. Kjør cellen nedenfor for å se at du har fått en ny fastsatt skattemelding og næringsopplysninger\n", "_____no_output_____" ] ], [ [ "print(\"Resultat av hent fastsatt før fastsetting\")\nprint(r_fastsatt.text)\nprint(\"Resultat av hent fastsatt etter fastsetting\")\n\nr_fastsatt2 = s.get(url_fastsatt)\nr_fastsatt2.text\n#r_fastsatt.elapsed.total_seconds()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
e7fe46c24e7713bd38ce1c1b945c2a8cd4c1124c
7,270
ipynb
Jupyter Notebook
notebooks/Full Run.ipynb
joelmpiper/ga_project
2caa34de771970429112149479539eda1d796e57
[ "MIT" ]
null
null
null
notebooks/Full Run.ipynb
joelmpiper/ga_project
2caa34de771970429112149479539eda1d796e57
[ "MIT" ]
null
null
null
notebooks/Full Run.ipynb
joelmpiper/ga_project
2caa34de771970429112149479539eda1d796e57
[ "MIT" ]
1
2019-06-02T03:55:59.000Z
2019-06-02T03:55:59.000Z
28.735178
334
0.573728
[ [ [ "## Full Run", "_____no_output_____" ], [ "### In order to run the scripts, we need to be in the base directory. This will move us out of the notebooks directory and into the base directory", "_____no_output_____" ] ], [ [ "import os\nos.chdir('..')", "_____no_output_____" ] ], [ [ "### Define where each of the datasets are stored", "_____no_output_____" ] ], [ [ "Xtrain_dir = 'solar/data/kaggle_solar/train/'\nXtest_dir = 'solar/data/kaggle_solar/test'\nytrain_file = 'solar/data/kaggle_solar/train.csv'\nstation_file = 'solar/data/kaggle_solar/station_info.csv'\nimport numpy as np", "_____no_output_____" ] ], [ [ "### Define the parameters needed to run the analysis script.", "_____no_output_____" ] ], [ [ "# Choose up to 98 stations; not specifying a station means to use all that fall within the given lats and longs. If the\n# parameter 'all' is given, then it will use all stations no matter the provided lats and longs\nstation = ['all']\n\n# Determine which dates will be used to train the model. No specified date means use the entire set from 1994-01-01 \n# until 2007-12-31.\ntrain_dates = ['1994-01-01', '2007-12-31']\n\n#2008-01-01 until 2012-11-30\ntest_dates = ['2008-01-01', '2012-11-30']\n\nstation_layout = True\n\n# Use all variables\nvar = ['all']\n\n# Keep model 0 (the default model) as a column for each of the variables (aggregated over other dimensions)\nmodel = [0]\n\n# Aggregate over all times\ntimes = ['all']\n\ndefault_grid = {'type':'relative', 'axes':{'var':var, 'models':model, 'times':times,\n 'station':station}}\n# This just uses the station_names as another feature\nstat_names = {'type':'station_names'}\n\nfrac_dist = {'type':'frac_dist'}\n\ndays_solstice = {'type':'days_from_solstice'}\ndays_cold = {'type':'days_from_coldest'}\n\nall_feats = [stat_names, default_grid, frac_dist, days_solstice, days_cold]\n#all_feats = [stat_names, days_solstice, days_cold]", "_____no_output_____" ] ], [ [ "### Define the directories that contain the code needed to run the analysis", "_____no_output_____" ] ], [ [ "import solar.report.submission\nimport solar.wrangle.wrangle\nimport solar.wrangle.subset\nimport solar.wrangle.engineer\nimport solar.analyze.model", "_____no_output_____" ] ], [ [ "### Reload the modules to load in any code changes since the last run. Load in all of the data needed for the run and store in a pickle file. The 'external' flag determines whether to look to save the pickle file in a connected hard drive or to store locally. 
The information in pink shows what has been written to the log file.", "_____no_output_____" ] ], [ [ "# test combination of station names and grid\nreload(solar.wrangle.wrangle)\nreload(solar.wrangle.subset)\nreload(solar.wrangle.engineer)\nfrom solar.wrangle.wrangle import SolarData\n\n#external = True\n\ninput_data = SolarData.load(Xtrain_dir, ytrain_file, Xtest_dir, station_file, \\\n train_dates, test_dates, station, \\\n station_layout, all_feats, 'extern')", "INFO:solar.wrangle.wrangle:Started building test and training data\nINFO:solar.wrangle.wrangle:Features: [{'type': 'station_names'}, {'axes': {'var': ['all'], 'models': [0], 'station': ['all'], 'times': ['all']}, 'type': 'relative'}, {'type': 'frac_dist'}, {'type': 'days_from_solstice'}, {'type': 'days_from_coldest'}]\nINFO:solar.wrangle.wrangle:Train dates: ['1994-01-01', '2007-12-31']\nINFO:solar.wrangle.wrangle:Test dates: ['2008-01-01', '2012-11-30']\nINFO:solar.wrangle.wrangle:Stations: ['all']\n" ], [ "reload(solar.analyze.model)\nimport numpy as np\nfrom solar.analyze.model import Model\n\nfrom sklearn.linear_model import Ridge\nfrom sklearn import metrics\n\nerror_formula = 'mean_absolute_error'\n\nmodel = Model.model(input_data, Ridge, {'alpha':np.logspace(-3,1,10,base=10)}, 10, \n error_formula, 4, 'extern', normalize=True)", "_____no_output_____" ], [ "reload(solar.analyze.model)\nimport numpy as np\nfrom solar.analyze.model import Model\n\nfrom sklearn.ensemble import GradientBoostingRegressor\nfrom sklearn import metrics\n\nerror_formula = 'mean_absolute_error'\n\nmodel = Model.model_from_pickle('input_2016-02-06-18-17-28.p', GradientBoostingRegressor, {'n_estimators':range(100,500, 100), 'learning_rate':np.logspace(-3,1,5,base=10)}, 10, \n error_formula, loss='ls', max_depth=1, random_state=0)", "_____no_output_____" ], [ "reload(solar.report.submission)\nfrom solar.report.submission import Submission\n\npreds = Submission.submit_from_pickle('model_2016-02-06-18-21-41.p', 'input_2016-02-06-18-17-28.p', True)\n\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
e7fe4dbdce7d1032e12bf7fc7be5a22f24c69374
2,730
ipynb
Jupyter Notebook
_notebooks/2020-03-02-jupyter-notebook.ipynb
fabge/snippets
eba330edc270990c5a4eb7bc5e96ae4f748b0026
[ "Apache-2.0" ]
null
null
null
_notebooks/2020-03-02-jupyter-notebook.ipynb
fabge/snippets
eba330edc270990c5a4eb7bc5e96ae4f748b0026
[ "Apache-2.0" ]
null
null
null
_notebooks/2020-03-02-jupyter-notebook.ipynb
fabge/snippets
eba330edc270990c5a4eb7bc5e96ae4f748b0026
[ "Apache-2.0" ]
null
null
null
17.388535
81
0.50696
[ [ [ "# \"Jupyter notebook\"\n> \"Setup and snippets for a smooth jupyter notebook experience\"\n- toc: False\n- branch: master\n- categories: [code snippets, jupyter, python]", "_____no_output_____" ], [ "# Start jupyter notebook on boot", "_____no_output_____" ], [ "Edit the crontab for your user.", "_____no_output_____" ] ], [ [ "crontab -e", "_____no_output_____" ] ], [ [ "Add the following line.", "_____no_output_____" ] ], [ [ "@reboot source ~/.venv/venv/bin/activate; ~/.venv/venv/bin/jupyter-notebook", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "# Magic Commands", "_____no_output_____" ], [ "Autoreload imports when file changes were made.", "_____no_output_____" ] ], [ [ "%load_ext autoreload\n%autoreload 2", "_____no_output_____" ] ], [ [ "Show matplotlib plots inside the notebook.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ] ], [ [ "Measure excecution time of a cell.", "_____no_output_____" ] ], [ [ "%%time", "_____no_output_____" ] ], [ [ "`pip install` from jupyter notebook.", "_____no_output_____" ] ], [ [ "import sys\n!{sys.executable} -m pip install numpy", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7fe616e3645d2f0a82b542a465cba174c66858e
37,378
ipynb
Jupyter Notebook
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
c24247cbf643a5a7c037013e4a122389aa523ac3
[ "MIT" ]
null
null
null
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
c24247cbf643a5a7c037013e4a122389aa523ac3
[ "MIT" ]
null
null
null
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
c24247cbf643a5a7c037013e4a122389aa523ac3
[ "MIT" ]
null
null
null
112.924471
14,188
0.854861
[ [ [ "# <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 10</font>\n\n## Download: http://github.com/dsacademybr", "_____no_output_____" ] ], [ [ "# Versão da Linguagem Python\nfrom platform import python_version\nprint('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())", "Versão da Linguagem Python Usada Neste Jupyter Notebook: 3.7.6\n" ] ], [ [ "# Lab 4 - Construindo um Modelo de Regressão Linear com TensorFlow", "_____no_output_____" ], [ "Use como referência o Deep Learning Book: http://www.deeplearningbook.com.br/", "_____no_output_____" ], [ "Obs: Embora a versão 2.x do TensorFlow já esteja disponível, este Jupyter Notebook usar a versão 1.15, que também é mantida pela equipe do Google.\n\nCaso queira aprender TensorFlow 2.0, esta versão já está disponível nos cursos da Formação IA, aqui na DSA.\n\nExecute a célula abaixo para instalar o TensorFlow na sua máquina.", "_____no_output_____" ] ], [ [ "# Versão do TensorFlow a ser usada \n!pip install -q tensorflow==1.15.2", "_____no_output_____" ], [ "# Imports\nimport tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ] ], [ [ "## Definindo os hyperparâmetros do modelo", "_____no_output_____" ] ], [ [ "# Hyperparâmetros do modelo\nlearning_rate = 0.01\ntraining_epochs = 2000\ndisplay_step = 200", "_____no_output_____" ] ], [ [ "## Definindo os datasets de treino e de teste\n\n## Considere X como o tamanho de uma casa e y o preço de uma casa", "_____no_output_____" ] ], [ [ "# Dataset de treino\ntrain_X = np.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,7.042,10.791,5.313,7.997,5.654,9.27,3.1])\ntrain_y = np.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,2.827,3.465,1.65,2.904,2.42,2.94,1.3])\nn_samples = train_X.shape[0]\n \n# Dataset de teste\ntest_X = np.asarray([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])\ntest_y = np.asarray([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])", "_____no_output_____" ] ], [ [ "## Placeholders e variáveis", "_____no_output_____" ] ], [ [ "# Placeholders para as variáveis preditoras (x) e para variável target (y)\nX = tf.placeholder(tf.float32)\ny = tf.placeholder(tf.float32)\n \n# Pesos e bias do modelo\nW = tf.Variable(np.random.randn(), name=\"weight\")\nb = tf.Variable(np.random.randn(), name=\"bias\")", "_____no_output_____" ] ], [ [ "## Construindo o modelo", "_____no_output_____" ] ], [ [ "# Construindo o modelo linear\n# Fórmula do modelo linear: y = W*X + b\nlinear_model = W*X + b\n \n# Mean squared error (erro quadrado médio)\ncost = tf.reduce_sum(tf.square(linear_model - y)) / (2*n_samples)\n \n# Otimização com Gradient descent\noptimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)", "_____no_output_____" ] ], [ [ "## Executando o grafo computacional, treinando e testando o modelo", "_____no_output_____" ] ], [ [ "# Definindo a inicialização das variáveis\ninit = tf.global_variables_initializer()\n \n# Iniciando a sessão\nwith tf.Session() as sess:\n # Iniciando as variáveis\n sess.run(init)\n \n # Treinamento do modelo\n for epoch in range(training_epochs):\n \n # Otimização com Gradient Descent\n sess.run(optimizer, feed_dict={X: train_X, y: train_y})\n \n # Display de cada epoch\n if (epoch+1) % display_step == 0:\n c = sess.run(cost, feed_dict={X: train_X, y: train_y})\n print(\"Epoch:{0:6} \\t Custo (Erro):{1:10.4} \\t W:{2:6.4} \\t b:{3:6.4}\".format(epoch+1, c, sess.run(W), sess.run(b)))\n \n # Imprimindo os parâmetros 
finais do modelo\n print(\"\\nOtimização Concluída!\")\n training_cost = sess.run(cost, feed_dict={X: train_X, y: train_y})\n print(\"Custo Final de Treinamento:\", training_cost, \" - W Final:\", sess.run(W), \" - b Final:\", sess.run(b), '\\n')\n \n # Visualizando o resultado\n plt.plot(train_X, train_y, 'ro', label='Dados Originais')\n plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Linha de Regressão')\n plt.legend()\n plt.show()\n \n # Testando o modelo\n testing_cost = sess.run(tf.reduce_sum(tf.square(linear_model - y)) / (2 * test_X.shape[0]), \n feed_dict={X: test_X, y: test_y})\n \n print(\"Custo Final em Teste:\", testing_cost)\n print(\"Diferença Média Quadrada Absoluta:\", abs(training_cost - testing_cost))\n \n # Display em Teste\n plt.plot(test_X, test_y, 'bo', label='Dados de Teste')\n plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Linha de Regressão')\n plt.legend()\n plt.show()\n \nsess.close()", "Epoch: 200 \t Custo (Erro): 0.2628 \t W:0.4961 \t b:-0.934\nEpoch: 400 \t Custo (Erro): 0.1913 \t W:0.4433 \t b:-0.5603\nEpoch: 600 \t Custo (Erro): 0.1473 \t W: 0.402 \t b:-0.2672\nEpoch: 800 \t Custo (Erro): 0.1202 \t W:0.3696 \t b:-0.03732\nEpoch: 1000 \t Custo (Erro): 0.1036 \t W:0.3441 \t b: 0.143\nEpoch: 1200 \t Custo (Erro): 0.09331 \t W:0.3242 \t b:0.2844\nEpoch: 1400 \t Custo (Erro): 0.087 \t W:0.3085 \t b:0.3954\nEpoch: 1600 \t Custo (Erro): 0.08313 \t W:0.2963 \t b:0.4824\nEpoch: 1800 \t Custo (Erro): 0.08074 \t W:0.2866 \t b:0.5506\nEpoch: 2000 \t Custo (Erro): 0.07927 \t W:0.2791 \t b:0.6041\n\nOtimização Concluída!\nCusto Final de Treinamento: 0.07927451 - W Final: 0.2790933 - b Final: 0.60413384 \n\n" ] ], [ [ "Conheça a Formação Inteligência Artificial, um programa completo, 100% online e 100% em português, com 402 horas em 9 cursos de nível intermediário/avançado, que vão ajudá-lo a se tornar um dos profissionais mais cobiçados do mercado de tecnologia. Clique no link abaixo, faça sua inscrição, comece hoje mesmo e aumente sua empregabilidade:\n\nhttps://www.datascienceacademy.com.br/pages/formacao-inteligencia-artificial", "_____no_output_____" ], [ "# Fim", "_____no_output_____" ], [ "### Obrigado - Data Science Academy - <a href=\"http://facebook.com/dsacademybr\">facebook.com/dsacademybr</a>", "_____no_output_____" ] ] ]
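Como verificação ilustrativa (um esboço adicional, que não faz parte do notebook original), é possível comparar o W e o b aprendidos via gradiente descendente com a solução de mínimos quadrados do NumPy, assumindo os arrays `train_X` e `train_y` definidos nas células acima:

```python
import numpy as np

# Ajuste linear em forma fechada: np.polyfit retorna [inclinação (W), intercepto (b)]
W_np, b_np = np.polyfit(train_X, train_y, 1)
print("W (numpy):", W_np, " b (numpy):", b_np)
# Compare com os valores finais de W e b impressos pelo TensorFlow acima.
```

Se o treinamento por gradiente descendente tiver convergido, os dois pares de valores tendem a ficar próximos.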
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
e7fe6b95ebc9680a8a8b4da8ccfe3075942b1543
182,454
ipynb
Jupyter Notebook
Superposition/Superposition.ipynb
djjacqmin/QuantumKatas
4ebaaa5705aacfd2ad443411eee26b6d5d1c3472
[ "MIT" ]
null
null
null
Superposition/Superposition.ipynb
djjacqmin/QuantumKatas
4ebaaa5705aacfd2ad443411eee26b6d5d1c3472
[ "MIT" ]
null
null
null
Superposition/Superposition.ipynb
djjacqmin/QuantumKatas
4ebaaa5705aacfd2ad443411eee26b6d5d1c3472
[ "MIT" ]
null
null
null
45.981351
507
0.296667
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e7fe76d9f4fa37c6c098223b73c135e31c5f53d3
184,963
ipynb
Jupyter Notebook
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
8a047abe3770fbd2865c078ecaa121ce096189c2
[ "MIT" ]
null
null
null
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
8a047abe3770fbd2865c078ecaa121ce096189c2
[ "MIT" ]
null
null
null
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
8a047abe3770fbd2865c078ecaa121ce096189c2
[ "MIT" ]
null
null
null
146.796032
14,038
0.841893
[ [ [ "# Bay Area Bike Share Analysis\n\n## Introduction\n\n> **Tip**: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.\n\n[Bay Area Bike Share](http://www.bayareabikeshare.com/) is a company that provides on-demand bike rentals for customers in San Francisco, Redwood City, Palo Alto, Mountain View, and San Jose. Users can unlock bikes from a variety of stations throughout each city, and return them to any station within the same city. Users pay for the service either through a yearly subscription or by purchasing 3-day or 24-hour passes. Users can make an unlimited number of trips, with trips under thirty minutes in length having no additional charge; longer trips will incur overtime fees.\n\nIn this project, you will put yourself in the shoes of a data analyst performing an exploratory analysis on the data. You will take a look at two of the major parts of the data analysis process: data wrangling and exploratory data analysis. But before you even start looking at data, think about some questions you might want to understand about the bike share data. Consider, for example, if you were working for Bay Area Bike Share: what kinds of information would you want to know about in order to make smarter business decisions? Or you might think about if you were a user of the bike share service. What factors might influence how you would want to use the service?\n\n**Question 1**: Write at least two questions you think could be answered by data.\n\n**Answer**: What trajectory has more users and what time of the day that happens.", "_____no_output_____" ], [ "## Using Visualizations to Communicate Findings in Data\n\nAs a data analyst, the ability to effectively communicate findings is a key part of the job. After all, your best analysis is only as good as your ability to communicate it.\n\nIn 2014, Bay Area Bike Share held an [Open Data Challenge](http://www.bayareabikeshare.com/datachallenge-2014) to encourage data analysts to create visualizations based on their open data set. You’ll create your own visualizations in this project, but first, take a look at the [submission winner for Best Analysis](http://thfield.github.io/babs/index.html) from Tyler Field. Read through the entire report to answer the following question:\n\n**Question 2**: What visualizations do you think provide the most interesting insights? Are you able to answer either of the questions you identified above based on Tyler’s analysis? Why or why not?\n\n**Answer**: I was able to answer one question because there is a specific graphs for it. The most interesting graph was definitely the interactive one, that crossed information with games, rainy days and temperature. Another very useful graph is the one with the heatmap diagram of systemwide rides.\n\nThis heatmap provides the answer to \"what trajectory has the most users\", where Harry Bridges Plaza to Embarcadero at Sansome is the winner with 1330 rides.\n\nI could not find in what time of day does this trajectory has the most users, unfortunately.", "_____no_output_____" ], [ "## Data Wrangling\n\nNow it's time to explore the data for yourself. Year 1 and Year 2 data from the Bay Area Bike Share's [Open Data](http://www.bayareabikeshare.com/open-data) page have already been provided with the project materials; you don't need to download anything extra. 
The data comes in three parts: the first half of Year 1 (files starting `201402`), the second half of Year 1 (files starting `201408`), and all of Year 2 (files starting `201508`). There are three main datafiles associated with each part: trip data showing information about each trip taken in the system (`*_trip_data.csv`), information about the stations in the system (`*_station_data.csv`), and daily weather data for each city in the system (`*_weather_data.csv`).\n\nWhen dealing with a lot of data, it can be useful to start by working with only a sample of the data. This way, it will be much easier to check that our data wrangling steps are working since our code will take less time to complete. Once we are satisfied with the way things are working, we can then set things up to work on the dataset as a whole.\n\nSince the bulk of the data is contained in the trip information, we should target looking at a subset of the trip data to help us get our bearings. You'll start by looking at only the first month of the bike trip data, from 2013-08-29 to 2013-09-30. The code below will take the data from the first half of the first year, then write the first month's worth of data to an output file. This code exploits the fact that the data is sorted by date (though it should be noted that the first two days are sorted by trip time, rather than being completely chronological).\n\nFirst, load all of the packages and functions that you'll be using in your analysis by running the first code cell below. Then, run the second code cell to read a subset of the first trip data file, and write a new file containing just the subset we are initially interested in.\n\n> **Tip**: You can run a code cell like you formatted Markdown cells by clicking on the cell and using the keyboard shortcut **Shift** + **Enter** or **Shift** + **Return**. Alternatively, a code cell can be executed using the **Play** button in the toolbar after selecting it. While the cell is running, you will see an asterisk in the message to the left of the cell, i.e. `In [*]:`. The asterisk will change into a number to show that execution has completed, e.g. `In [1]`. If there is output, it will show up as `Out [1]:`, with an appropriate number to match the \"In\" number.", "_____no_output_____" ] ], [ [ "# import all necessary packages and functions.\nimport csv\nfrom datetime import datetime\nimport numpy as np\nimport pandas as pd\nfrom babs_datacheck import question_3\nfrom babs_visualizations import usage_stats, usage_plot\nfrom IPython.display import display\n%matplotlib inline", "_____no_output_____" ], [ "# file locations\nfile_in = '201402_trip_data.csv'\nfile_out = '201309_trip_data.csv'\n\nwith open(file_out, 'w') as f_out, open(file_in, 'r') as f_in:\n # set up csv reader and writer objects\n in_reader = csv.reader(f_in)\n out_writer = csv.writer(f_out)\n\n # write rows from in-file to out-file until specified date reached\n while True:\n datarow = next(in_reader)\n # trip start dates in 3rd column, m/d/yyyy HH:MM formats\n if datarow[2][:9] == '10/1/2013':\n break\n out_writer.writerow(datarow)", "_____no_output_____" ] ], [ [ "### Condensing the Trip Data\n\nThe first step is to look at the structure of the dataset to see if there's any data wrangling we should perform. 
The below cell will read in the sampled data file that you created in the previous cell, and print out the first few rows of the table.", "_____no_output_____" ] ], [ [ "sample_data = pd.read_csv('201309_trip_data.csv')\n\ndisplay(sample_data.head())", "_____no_output_____" ] ], [ [ "In this exploration, we're going to concentrate on factors in the trip data that affect the number of trips that are taken. Let's focus down on a few selected columns: the trip duration, start time, start terminal, end terminal, and subscription type. Start time will be divided into year, month, and hour components. We will also add a column for the day of the week and abstract the start and end terminal to be the start and end _city_.\n\nLet's tackle the lattermost part of the wrangling process first. Run the below code cell to see how the station information is structured, then observe how the code will create the station-city mapping. Note that the station mapping is set up as a function, `create_station_mapping()`. Since it is possible that more stations are added or dropped over time, this function will allow us to combine the station information across all three parts of our data when we are ready to explore everything.", "_____no_output_____" ] ], [ [ "# Display the first few rows of the station data file.\nstation_info = pd.read_csv('201402_station_data.csv')\ndisplay(station_info.head())\n\n# This function will be called by another function later on to create the mapping.\ndef create_station_mapping(station_data):\n \"\"\"\n Create a mapping from station IDs to cities, returning the\n result as a dictionary.\n \"\"\"\n station_map = {}\n for data_file in station_data:\n with open(data_file, 'r') as f_in:\n # set up csv reader object - note that we are using DictReader, which\n # takes the first row of the file as a header row for each row's\n # dictionary keys\n weather_reader = csv.DictReader(f_in)\n\n for row in weather_reader:\n station_map[row['station_id']] = row['landmark']\n return station_map", "_____no_output_____" ] ], [ [ "You can now use the mapping to condense the trip data to the selected columns noted above. This will be performed in the `summarise_data()` function below. As part of this function, the `datetime` module is used to **p**arse the timestamp strings from the original data file as datetime objects (`strptime`), which can then be output in a different string **f**ormat (`strftime`). The parsed objects also have a variety of attributes and methods to quickly obtain\n\nThere are two tasks that you will need to complete to finish the `summarise_data()` function. First, you should perform an operation to convert the trip durations from being in terms of seconds to being in terms of minutes. (There are 60 seconds in a minute.) Secondly, you will need to create the columns for the year, month, hour, and day of the week. Take a look at the [documentation for datetime objects in the datetime module](https://docs.python.org/2/library/datetime.html#datetime-objects). **Find the appropriate attributes and method to complete the below code.**", "_____no_output_____" ] ], [ [ "def summarise_data(trip_in, station_data, trip_out):\n \"\"\"\n This function takes trip and station information and outputs a new\n data file with a condensed summary of major trip information. 
The\n trip_in and station_data arguments will be lists of data files for\n the trip and station information, respectively, while trip_out\n specifies the location to which the summarized data will be written.\n \"\"\"\n # generate dictionary of station - city mapping\n station_map = create_station_mapping(station_data)\n \n with open(trip_out, 'w') as f_out:\n # set up csv writer object \n out_colnames = ['duration', 'start_date', 'start_year',\n 'start_month', 'start_hour', 'weekday',\n 'start_city', 'end_city', 'subscription_type'] \n trip_writer = csv.DictWriter(f_out, fieldnames = out_colnames)\n trip_writer.writeheader()\n \n for data_file in trip_in:\n with open(data_file, 'r') as f_in:\n # set up csv reader object\n trip_reader = csv.DictReader(f_in)\n\n # collect data from and process each row\n for row in trip_reader:\n new_point = {}\n \n # convert duration units from seconds to minutes\n ### Question 3a: Add a mathematical operation below ###\n ### to convert durations from seconds to minutes. ###\n new_point['duration'] = float(row['Duration'])/60\n \n # reformat datestrings into multiple columns\n ### Question 3b: Fill in the blanks below to generate ###\n ### the expected time values. ###\n trip_date = datetime.strptime(row['Start Date'], '%m/%d/%Y %H:%M')\n new_point['start_date'] = trip_date.strftime('%Y-%m-%d')\n new_point['start_year'] = trip_date.strftime('%Y')\n new_point['start_month'] = trip_date.strftime('%m')\n new_point['start_hour'] = trip_date.strftime('%H')\n new_point['weekday'] = trip_date.strftime('%A')\n \n # remap start and end terminal with start and end city\n new_point['start_city'] = station_map[row['Start Terminal']]\n new_point['end_city'] = station_map[row['End Terminal']]\n # two different column names for subscribers depending on file\n if 'Subscription Type' in row:\n new_point['subscription_type'] = row['Subscription Type']\n else:\n new_point['subscription_type'] = row['Subscriber Type']\n\n # write the processed information to the output file.\n trip_writer.writerow(new_point)", "_____no_output_____" ] ], [ [ "**Question 3**: Run the below code block to call the `summarise_data()` function you finished in the above cell. It will take the data contained in the files listed in the `trip_in` and `station_data` variables, and write a new file at the location specified in the `trip_out` variable. If you've performed the data wrangling correctly, the below code block will print out the first few lines of the dataframe and a message verifying that the data point counts are correct.", "_____no_output_____" ] ], [ [ "# Process the data by running the function we wrote above.\nstation_data = ['201402_station_data.csv']\ntrip_in = ['201309_trip_data.csv']\ntrip_out = '201309_trip_summary.csv'\nsummarise_data(trip_in, station_data, trip_out)\n\n# Load in the data file and print out the first few rows\nsample_data = pd.read_csv(trip_out)\ndisplay(sample_data.head())\n\n# Verify the dataframe by counting data points matching each of the time features.\nquestion_3(sample_data)", "_____no_output_____" ] ], [ [ "> **Tip**: If you save a jupyter Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. 
Make sure that you run all of the necessary code blocks from your previous session to reestablish variables and functions before picking up where you last left off.\n\n## Exploratory Data Analysis\n\nNow that you have some data saved to a file, let's look at some initial trends in the data. Some code has already been written for you in the `babs_visualizations.py` script to help summarize and visualize the data; this has been imported as the functions `usage_stats()` and `usage_plot()`. In this section we'll walk through some of the things you can do with the functions, and you'll use the functions for yourself in the last part of the project. First, run the following cell to load the data, then use the `usage_stats()` function to see the total number of trips made in the first month of operations, along with some statistics regarding how long trips took.", "_____no_output_____" ] ], [ [ "trip_data = pd.read_csv('201309_trip_summary.csv')\n\nusage_stats(trip_data)", "There are 27345 data points in the dataset.\nThe average duration of trips is 27.60 minutes.\nThe median trip duration is 10.72 minutes.\n25% of trips are shorter than 6.82 minutes.\n25% of trips are longer than 17.28 minutes.\n" ] ], [ [ "You should see that there are over 27,000 trips in the first month, and that the average trip duration is larger than the median trip duration (the point where 50% of trips are shorter, and 50% are longer). In fact, the mean is larger than the 75% shortest durations. This will be interesting to look at later on.\n\nLet's start looking at how those trips are divided by subscription type. One easy way to build an intuition about the data is to plot it. We'll use the `usage_plot()` function for this. The second argument of the function allows us to count up the trips across a selected variable, displaying the information in a plot. The expression below will show how many customer and how many subscriber trips were made. Try it out!", "_____no_output_____" ] ], [ [ "usage_plot(trip_data, 'subscription_type')", "_____no_output_____" ] ], [ [ "Seems like there's about 50% more trips made by subscribers in the first month than customers. Let's try a different variable now. What does the distribution of trip durations look like?", "_____no_output_____" ] ], [ [ "usage_plot(trip_data, 'duration')", "_____no_output_____" ] ], [ [ "Looks pretty strange, doesn't it? Take a look at the duration values on the x-axis. Most rides are expected to be 30 minutes or less, since there are overage charges for taking extra time in a single trip. The first bar spans durations up to about 1000 minutes, or over 16 hours. Based on the statistics we got out of `usage_stats()`, we should have expected some trips with very long durations that bring the average to be so much higher than the median: the plot shows this in a dramatic, but unhelpful way.\n\nWhen exploring the data, you will often need to work with visualization function parameters in order to make the data easier to understand. Here's where the third argument of the `usage_plot()` function comes in. Filters can be set for data points as a list of conditions. Let's start by limiting things to trips of less than 60 minutes.", "_____no_output_____" ] ], [ [ "usage_plot(trip_data, 'duration', ['duration < 60'])", "_____no_output_____" ] ], [ [ "This is looking better! You can see that most trips are indeed less than 30 minutes in length, but there's more that you can do to improve the presentation. 
Since the minimum duration is not 0, the left hand bar is slighly above 0. We want to be able to tell where there is a clear boundary at 30 minutes, so it will look nicer if we have bin sizes and bin boundaries that correspond to some number of minutes. Fortunately, you can use the optional \"boundary\" and \"bin_width\" parameters to adjust the plot. By setting \"boundary\" to 0, one of the bin edges (in this case the left-most bin) will start at 0 rather than the minimum trip duration. And by setting \"bin_width\" to 5, each bar will count up data points in five-minute intervals.", "_____no_output_____" ] ], [ [ "usage_plot(trip_data, 'duration', ['duration < 60'], boundary = 0, bin_width = 5)", "_____no_output_____" ] ], [ [ "**Question 4**: Which five-minute trip duration shows the most number of trips? Approximately how many trips were made in this range?\n\n**Answer**: 5 to 10 minutes trip, with approximately 9.000 trips.", "_____no_output_____" ], [ "Visual adjustments like this might be small, but they can go a long way in helping you understand the data and convey your findings to others.\n\n## Performing Your Own Analysis\n\nNow that you've done some exploration on a small sample of the dataset, it's time to go ahead and put together all of the data in a single file and see what trends you can find. The code below will use the same `summarise_data()` function as before to process data. After running the cell below, you'll have processed all the data into a single data file. Note that the function will not display any output while it runs, and this can take a while to complete since you have much more data than the sample you worked with above.", "_____no_output_____" ] ], [ [ "station_data = ['201402_station_data.csv',\n '201408_station_data.csv',\n '201508_station_data.csv' ]\ntrip_in = ['201402_trip_data.csv',\n '201408_trip_data.csv',\n '201508_trip_data.csv' ]\ntrip_out = 'babs_y1_y2_summary.csv'\n\n# This function will take in the station data and trip data and\n# write out a new data file to the name listed above in trip_out.\nsummarise_data(trip_in, station_data, trip_out)", "_____no_output_____" ] ], [ [ "Since the `summarise_data()` function has created a standalone file, the above cell will not need to be run a second time, even if you close the notebook and start a new session. You can just load in the dataset and then explore things from there.", "_____no_output_____" ] ], [ [ "trip_data = pd.read_csv('babs_y1_y2_summary.csv')\ndisplay(trip_data.head())", "_____no_output_____" ] ], [ [ "#### Now it's your turn to explore the new dataset with `usage_stats()` and `usage_plot()` and report your findings! Here's a refresher on how to use the `usage_plot()` function:\n- first argument (required): loaded dataframe from which data will be analyzed.\n- second argument (required): variable on which trip counts will be divided.\n- third argument (optional): data filters limiting the data points that will be counted. Filters should be given as a list of conditions, each element should be a string in the following format: `'<field> <op> <value>'` using one of the following operations: >, <, >=, <=, ==, !=. Data points must satisfy all conditions to be counted or visualized. 
For example, `[\"duration < 15\", \"start_city == 'San Francisco'\"]` retains only trips that originated in San Francisco and are less than 15 minutes long.\n\nIf data is being split on a numeric variable (thus creating a histogram), some additional parameters may be set by keyword.\n- \"n_bins\" specifies the number of bars in the resultant plot (default is 10).\n- \"bin_width\" specifies the width of each bar (default divides the range of the data by number of bins). \"n_bins\" and \"bin_width\" cannot be used simultaneously.\n- \"boundary\" specifies where one of the bar edges will be placed; other bar edges will be placed around that value (this may result in an additional bar being plotted). This argument may be used alongside the \"n_bins\" and \"bin_width\" arguments.\n\nYou can also add some customization to the `usage_stats()` function as well. The second argument of the function can be used to set up filter conditions, just like how they are set up in `usage_plot()`.", "_____no_output_____" ] ], [ [ "usage_stats(trip_data)", "There are 669959 data points in the dataset.\nThe average duration of trips is 18.47 minutes.\nThe median trip duration is 8.62 minutes.\n25% of trips are shorter than 5.73 minutes.\n25% of trips are longer than 12.58 minutes.\n" ], [ "usage_plot(trip_data,'start_hour',['subscription_type == Subscriber'])", "_____no_output_____" ] ], [ [ "Explore some different variables using the functions above and take note of some trends you find. Feel free to create additional cells if you want to explore the dataset in other ways or multiple ways.\n\n> **Tip**: In order to add additional cells to a notebook, you can use the \"Insert Cell Above\" and \"Insert Cell Below\" options from the menu bar above. There is also an icon in the toolbar for adding new cells, with additional icons for moving the cells up and down the document. By default, new cells are of the code type; you can also specify the cell type (e.g. Code or Markdown) of selected cells from the Cell menu or the dropdown in the toolbar.\n\nOne you're done with your explorations, copy the two visualizations you found most interesting into the cells below, then answer the following questions with a few sentences describing what you found and why you selected the figures. Make sure that you adjust the number of bins or the bin limits so that they effectively convey data findings. Feel free to supplement this with any additional numbers generated from `usage_stats()` or place multiple visualizations to support your observations.", "_____no_output_____" ] ], [ [ "# Final Plot 1\nusage_plot(trip_data,'start_hour',['subscription_type == Subscriber'],bin_width=1)\nusage_plot(trip_data,'weekday',['subscription_type == Subscriber'])", "_____no_output_____" ] ], [ [ "**Question 5a**: What is interesting about the above visualization? Why did you select it?\n\n**Answer**: Both graphs show that most Subscribers use the service to go to work, since vast majority happened on weekdays, and between 7-9 AM and 4-5 PM.", "_____no_output_____" ] ], [ [ "# Final Plot 2\nusage_plot(trip_data,'start_month',['subscription_type == Customer'], boundary = 1)\nusage_plot(trip_data,'start_month',['subscription_type == Customer'], boundary = 1, n_bins=12)\nusage_plot(trip_data,'weekday',['subscription_type == Customer','start_month > 6'],bin_width=30)\nusage_plot(trip_data,'start_city',['subscription_type == Customer'])", "_____no_output_____" ] ], [ [ "**Question 5b**: What is interesting about the above visualization? 
Why did you select it?\n\n**Answer**: The majority of customer trips happened in the second half of the year, took place in San Francisco, and occurred during the weekends. Note: I corrected the display by removing bin_width.\n\nNote for the evaluator: the version of the plot using n_bins does not look any better; I tried it before and got this unfortunate result.", "_____no_output_____" ], [ "## Conclusions\n\nCongratulations on completing the project! This is only a sampling of the data analysis process: from generating questions, to wrangling the data, to exploring the data. Normally, at this point in the data analysis process, you might want to draw conclusions about our data by performing a statistical test or fitting the data to a model for making predictions. There are also a lot of potential analyses that could be performed on the data which are not possible with only the code given. Instead of just looking at the number of trips on the outcome axis, you could see what features affect things like trip duration. We also haven't looked at how the weather data ties into bike usage.\n\n**Question 6**: Think of a topic or field of interest where you would like to be able to apply the techniques of data science. What would you like to be able to learn from your chosen subject?\n\n**Answer**: A big concern in Brazil nowadays is political corruption and the misuse of taxpayers' money. One topic that would really interest me is using open data made available by cities to check whether politicians are spending public money properly, and not for personal gain and enrichment.\n\n", "_____no_output_____" ] ] ]
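A quick aside on the `summarise_data()` completion above: the whole of Question 3 reduces to two small transformations, dividing the raw `Duration` (in seconds) by 60 and re-emitting the parsed `Start Date` through `strftime`. Below is a minimal standalone sketch of just that step; the timestamp and duration are made-up illustrative values, not rows taken from the trip files.

```python
from datetime import datetime

# Illustrative raw trip values in the same shape as the CSV ('m/d/yyyy HH:MM', seconds)
raw_start_date = "8/29/2013 14:13"
raw_duration_s = 765.0

# Question 3a: convert seconds to minutes
duration_min = raw_duration_s / 60.0

# Question 3b: parse once with strptime, then derive each time column with strftime
trip_date = datetime.strptime(raw_start_date, "%m/%d/%Y %H:%M")
summary_row = {
    "duration": duration_min,                       # 12.75 minutes
    "start_date": trip_date.strftime("%Y-%m-%d"),   # '2013-08-29'
    "start_year": trip_date.strftime("%Y"),
    "start_month": trip_date.strftime("%m"),
    "start_hour": trip_date.strftime("%H"),
    "weekday": trip_date.strftime("%A"),            # 'Thursday'
}
print(summary_row)
```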
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
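Since `usage_stats()` and `usage_plot()` come from the course's `babs_visualizations.py` rather than a public library, any further exploration sketch necessarily assumes that script and the summary CSV produced above are on hand. With that caveat, the filter syntax described in the refresher composes naturally; a small sketch combining the duration cut and the subscriber condition, which the notebook only applies separately, might look like this (the second, filter argument to `usage_stats()` is used here as the notebook's own description of that function suggests):

```python
import pandas as pd
from babs_visualizations import usage_stats, usage_plot  # course-provided helpers, assumed importable

trip_data = pd.read_csv('babs_y1_y2_summary.csv')

# Multiple filter strings are ANDed together, per the refresher above
filters = ['duration < 60', 'subscription_type == Subscriber']
usage_stats(trip_data, filters)
usage_plot(trip_data, 'duration', filters, boundary=0, bin_width=5)
```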
e7fe7ae8f0fa3d7880d849ee86507e94306714a8
46,552
ipynb
Jupyter Notebook
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
daf211f5c059d4b11dab0b57b05ec05918f430b2
[ "MIT" ]
null
null
null
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
daf211f5c059d4b11dab0b57b05ec05918f430b2
[ "MIT" ]
null
null
null
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
daf211f5c059d4b11dab0b57b05ec05918f430b2
[ "MIT" ]
null
null
null
117.853165
33,234
0.85543
[ [ [ "# Apsidal Motion Age for HD 144548\n\nHere, I am attempting to derive an age for the triply eclipsing hierarchical triple HD 144548 (Upper Scorpius member) based on the observed orbital precession (apsidal motion) of the inner binary system's orbit about the tertiary companion (star A). A value for the orbital precession is reported by Alonso et al. ([2015, arXiv: 1510.03773](http://adsabs.harvard.edu/abs/2015arXiv151003773A)) as $\\dot{\\omega} = 0.0235 \\pm 0.002\\ {\\rm deg\\, cycle}^{-1}$, obtained from photo-dynamical modeling of a _Kepler_/K2 lightcurve. The technique of determining an age from apsidal motion observed in young binary systems is detailed by Feiden & Dotter ([2013, ApJ, 765, 86](http://adsabs.harvard.edu/abs/2013ApJ...765...86F)). Their technique relies heavily on the analytical framework for the classical theory of orbital precession due to tidal and rotational distortions of gravitational potentials by Kopal ([1978, ASSL, 68](http://adsabs.harvard.edu/abs/1978ASSL...68.....K)) and the inclusion of general relativistic orbital precession by Giménez ([1985, ApJ, 297, 405](http://adsabs.harvard.edu/abs/1985ApJ...297..405G)). \n\nIn brief, the technique outlined by Feiden & Dotter (2013) relies on the fact that young stars are contracting quasi-hydrostatically as they approach the main sequence. As they contract, the mean density of the star increases (assuming the star has constant mass), thereby altering the distribution of mass with respect to the mean density. This alters the interior structure parameter, which is related to the deviation from sphericity of the star and its resulting gravitational potential. A non-symmetric potential induces a precession of the point of periastron in a companion star's orbit, provided the orbit is eccentric. \n\nSince the internal density structure of a young star is changing as it contracts, the inferred interior structure parameter, and thus the induced perturbation on the star's gravitational potential, also changes. Therefore, the rate at which the precession of a binary companion's point of periastron occurs changes as a function of time. By measuring the rate of precession, one can then estimate the age of the system by inferring the required density distribution to induce that rate of precession, subject to the constraint that the orbital and stellar fundamental properties must be well determined - hence the reason why Feiden & Dotter (2013) focused exclusively on eclipsing binary systems.\n\nWhile a rate of orbital precession was measured by Alonso et al. (2015) for HD 144548, and the properties of all three stars were determined with reasonable precision, there is a fundamental difficulty: it's a triple system. The method outlined by Feiden & Dotter (2013) was intended for binary systems, with no discussion of the influence of a tertiary companion.\n\nFortunately, the measured orbital precession is for the orbit of the inner binary (Ba/Bb) about the tertiary star (A). Below, I focus on modeling the inner binary as a single object, with a mass equal to the sum of the component masses (thus more massive than component A), orbiting the tertiary star. \n\nThe first big hurdle is to figure out how to treat the Ba/Bb component as a single star.
For an initial attempt, we can assume that the B component is a \"single\" star with a mass equal to the total mass of the binary system with an interior structure constant equal to the weighted mean of the two individual interior structure constants.\n\nTo compute the mean interior structure constants, we first need to compute the individual weights $c_{2, i}$. For $e = 0$, we have $f(e) = g(e) = 1$. ", "_____no_output_____" ] ], [ [ "def c2(masses, radii, e, a, rotation=None):\n f = (1.0 - e**2)**-2\n g = (8.0 + 12.0*e**2 + e**4)*f**(5.0/2.0) / 8.0\n if rotation == None:\n omega_ratio_sq = 0.0\n elif rotation == 'synchronized':\n omega_ratio_sq = (1.0 + e)/(1.0 - e)**3 \n else:\n omega_ratio_sq = 0.0\n \n c2_0 = (omega_ratio_sq*(1.0 + masses[1]/masses[0])*f + 15.0*g*masses[1]/masses[0])*(radii[0]/a)**5\n c2_1 = (omega_ratio_sq*(1.0 + masses[0]/masses[1])*f + 15.0*g*masses[0]/masses[1])*(radii[1]/a)**5\n \n return c2_0, c2_1", "_____no_output_____" ], [ "# parameters for the orbit of Ba/Bb\ne = 0.0015\na = 7.249776\nmasses = [0.984, 0.944]", "_____no_output_____" ], [ "# c2_B = c2(masses, radii, e, a)", "_____no_output_____" ] ], [ [ "What complicates the issue is that the interior structure constants for the B components also vary as a function of age, so we need to compute a mean mass track using the $c_2$ coefficients and the individual $k_2$ values.", "_____no_output_____" ] ], [ [ "import numpy as np", "_____no_output_____" ], [ "trk_Ba = np.genfromtxt('/Users/grefe950/evolve/dmestar/trk/gs98/p000/a0/amlt1884/m0980_GS98_p000_p0_y28_mlt1.884.trk')\ntrk_Bb = np.genfromtxt('/Users/grefe950/evolve/dmestar/trk/gs98/p000/a0/amlt1884/m0940_GS98_p000_p0_y28_mlt1.884.trk')", "_____no_output_____" ] ], [ [ "Create tracks with equally spaced time steps.", "_____no_output_____" ] ], [ [ "from scipy.interpolate import interp1d", "_____no_output_____" ], [ "log10_age = np.arange(6.0, 8.0, 0.01) # log10(age/yr)\nages = 10**log10_age\n\nicurve = interp1d(trk_Ba[:,0], trk_Ba, kind='linear', axis=0)\nnew_trk_Ba = icurve(ages)\n\nicurve = interp1d(trk_Bb[:,0], trk_Bb, kind='linear', axis=0)\nnew_trk_Bb = icurve(ages)", "_____no_output_____" ] ], [ [ "Now, compute the $c_2$ coefficients for each age.", "_____no_output_____" ] ], [ [ "mean_trk_B = np.empty((len(ages), 3))\nfor i, age in enumerate(ages):\n c2s = c2(masses, [10**new_trk_Ba[i, 4], 10**new_trk_Bb[i, 4]], e, a, \n rotation='synchronized')\n avg_k2 = (c2s[0]*new_trk_Ba[i, 10] + c2s[1]*new_trk_Bb[i, 10])/(sum(c2s))\n \n mean_trk_B[i] = np.array([age, 10**new_trk_Ba[i, 4] + 10**new_trk_Bb[i, 4], avg_k2])", "_____no_output_____" ] ], [ [ "With that, we have an estimate for the mean B component properties as a function of age. One complicating factor is the \"radius\" of the average B component. 
If we are modeling the potential created by the Ba/Bb components as that of a single star, we need to assume that the A component never enters into any region of the combined potential that is dominated by either component.\n\nUnfortunately, it is very likely that the ratio of the Ba/Bb binary \"radius\" to the semi-major axis of the A/B orbit is going to be a dominant contributor to the apsidal motion.\n\n## Attempt 1: Semi-major axis + radius of B component\n\nLet's define orbtial properties of the (A, B) system.", "_____no_output_____" ] ], [ [ "e2 = 0.2652\na2 = 66.2319\nmasses_2 = [1.44, 1.928]", "_____no_output_____" ], [ "trk_A = np.genfromtxt('/Users/grefe950/evolve/dmestar/trk/gs98/p000/a0/amlt1884/m1450_GS98_p000_p0_y28_mlt1.884.trk', \n usecols=(0,1,2,3,4,5,6,7,8,9,10))", "_____no_output_____" ], [ "icurve = interp1d(trk_A[:,0], trk_A, kind='linear', axis=0)\nnew_trk_A = icurve(ages)", "_____no_output_____" ] ], [ [ "We are now in a position to compute the classical apsidal motion rate from the combined A/B tracks.", "_____no_output_____" ] ], [ [ "cl_apsidal_motion_rate = np.empty((len(ages), 2))\nfor i, age in enumerate(ages):\n c2_AB = c2(masses_2, [10**new_trk_A[i, 4], a + 0.5*mean_trk_B[i, 1]], e2, a2)\n cl_apsidal_motion_rate[i] = np.array([age, 360.0*(c2_AB[0]*new_trk_A[i, 10] + c2_AB[1]*mean_trk_B[i, 2])])", "_____no_output_____" ], [ "GR_apsidal_motion_rate = 5.45e-4*(sum(masses)/33.945)**(2./3.) / (1.0 - e2**2) # Giménez (1985)", "_____no_output_____" ], [ "GR_apsidal_motion_rate", "_____no_output_____" ] ], [ [ "One can see from this that the general relativistic component is a very small contribution to the total apsidal motion of the system. \n\nLet's look at the evolution of the apsidal motion for the A/B binary system.", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "fig, ax = plt.subplots(1, 1, figsize=(8., 8.), sharex=True)\n\nax.grid(True)\nax.tick_params(which='major', axis='both', length=15., labelsize=18.)\nax.set_xlabel('Age (yr)', fontsize=20., family='serif')\nax.set_ylabel('Apsidal Motion Rate (deg / cycle)', fontsize=20., family='serif')\nax.plot([1.0e6, 1.0e8], [0.0215, 0.0215], '--', lw=1, c='#555555')\nax.plot([1.0e6, 1.0e8], [0.0255, 0.0255], '--', lw=1, c='#555555')\nax.semilogx(cl_apsidal_motion_rate[:, 0], cl_apsidal_motion_rate[:, 1], '-', lw=2, c='#b22222')", "_____no_output_____" ] ], [ [ "How sensitive is this to the properties of the A component, which are fairly uncertain?", "_____no_output_____" ] ], [ [ "icurve = interp1d(cl_apsidal_motion_rate[:,1], cl_apsidal_motion_rate[:,0], kind='linear')\nprint icurve(0.0235)/1.0e6, icurve(0.0255)/1.0e6, icurve(0.0215)/1.0e6", "11.2030404132 9.66153795127 12.8365039818\n" ] ], [ [ "From the classical apsidal motion rate, we might estimate the age of the system to be $11.2 \\pm 1.6\\ {\\rm Myr}$.", "_____no_output_____" ] ] ]
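Restating the two computations in the apsidal-motion cells above as equations may help keep the bookkeeping straight. The weighted-mean interior structure constant assigned to the combined B component in the loop (the `avg_k2` variable) is

$$\bar{k}_{2,B} = \frac{c_{2,Ba}\,k_{2,Ba} + c_{2,Bb}\,k_{2,Bb}}{c_{2,Ba} + c_{2,Bb}},$$

with the $c_{2,i}$ weights returned by `c2()` and the $k_{2,i}$ values read from the individual mass tracks. The classical apsidal motion rate evaluated for the A/B pair is then

$$\dot{\omega}_{\rm cl} = 360^{\circ}\,\bigl(c_{2,A}\,k_{2,A} + c_{2,B}\,\bar{k}_{2,B}\bigr)\ \text{per orbital cycle},$$

which is exactly the `360.0*(c2_AB[0]*... + c2_AB[1]*...)` expression computed at each age step.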
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
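For reference, the general relativistic contribution computed in the notebook above follows the Giménez (1985) expression cited in its comment,

$$\dot{\omega}_{\rm GR} = 5.45\times10^{-4}\,\frac{1}{1-e^{2}}\left(\frac{m_{1}+m_{2}}{P}\right)^{2/3}\ {\rm deg\ cycle^{-1}},$$

with the masses in solar units and $P$ in days; the 33.945 appearing in the code is then presumably the A/B orbital period in days. As a hedged aside, the code evaluates this with `sum(masses)` (the Ba+Bb total, about 1.93 solar masses) rather than the full A+B mass; since the GR term comes out roughly two orders of magnitude below the classical rate either way, this choice has no practical effect on the quoted age estimate.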
e7fe7d2c51fa1259359ec1a38f67c1a6b2fb9cb4
869,440
ipynb
Jupyter Notebook
covid_regression.ipynb
faisal004/covid-predicton-model
03b08ace8bb250178ca09b55b9bd37c972b84dce
[ "MIT" ]
null
null
null
covid_regression.ipynb
faisal004/covid-predicton-model
03b08ace8bb250178ca09b55b9bd37c972b84dce
[ "MIT" ]
null
null
null
covid_regression.ipynb
faisal004/covid-predicton-model
03b08ace8bb250178ca09b55b9bd37c972b84dce
[ "MIT" ]
null
null
null
229.706737
255,964
0.864134
[ [ [ "#this project is created by faisal husian and faizal khan in covid pandemic, linear regression fits best for our data set rather than decision tree.\n\nimport pandas as pd", "_____no_output_____" ], [ "covid = pd.read_csv('mydata.csv')", "_____no_output_____" ], [ "#Having a glance at some of the records\ncovid.head()", "_____no_output_____" ], [ "#Looking at the shape\ncovid.shape", "_____no_output_____" ], [ "covid.columns\n#summing up all the features in our dataset", "_____no_output_____" ], [ "#Looking at the different countries\ncovid[\"location\"].value_counts()", "_____no_output_____" ], [ "#Checking if columns have null values\ncovid.isna().any()", "_____no_output_____" ], [ "#Getting the sum of null values across each column\ncovid.isna().sum()", "_____no_output_____" ], [ "#Getting the cases in India\nindia_case=covid[covid[\"location\"]==\"India\"] ", "_____no_output_____" ], [ "india_case.head()\nindia_case.shape\n#getting insight of our indian case data", "_____no_output_____" ], [ "india_case.tail()", "_____no_output_____" ], [ "import seaborn as sns\nfrom matplotlib import pyplot as plt", "_____no_output_____" ], [ "#Total cases per day\nsns.set(rc={'figure.figsize':(15,10)})\nsns.lineplot(x=\"date\",y=\"total_cases\",data=india_case)\nplt.show()", "_____no_output_____" ], [ "#Making a dataframe for last 5 days\nindia_last_5_days=india_case.tail()", "_____no_output_____" ], [ "#Total cases in last 5 days\nsns.set(rc={'figure.figsize':(3,3)})\nsns.lineplot(x=\"date\",y=\"total_cases\",data=india_last_5_days)\nplt.show()", "_____no_output_____" ], [ "#Total tests per day\nsns.set(rc={'figure.figsize':(15,10)})\nsns.lineplot(x=\"date\",y=\"total_tests\",data=india_case)\nplt.show()", "_____no_output_____" ], [ "#Total tests in last 5 days\nsns.set(rc={'figure.figsize':(15,10)})\nsns.lineplot(x=\"date\",y=\"total_tests\",data=india_last_5_days)\nplt.show()", "_____no_output_____" ], [ "#Brazil Case\nbrazil_case=covid[covid[\"location\"]==\"Brazil\"] \nbrazil_case.head()", "_____no_output_____" ], [ "brazil_case.tail()", "_____no_output_____" ], [ "#Making a dataframe for brazil for last 5 days\nbrazil_last_5_days=brazil_case.tail()", "_____no_output_____" ], [ "#Total cases in last 5 days\nsns.set(rc={'figure.figsize':(15,10)})\nsns.lineplot(x=\"date\",y=\"total_cases\",data=brazil_last_5_days)\nplt.show()", "_____no_output_____" ], [ "#Understanding cases of India, China and Japan\nindia_japan_china=covid[(covid[\"location\"] ==\"India\") | (covid[\"location\"] ==\"China\") | (covid[\"location\"]==\"Japan\")]", "_____no_output_____" ], [ "#Plotting growth of cases across China, India and Japan\nsns.set(rc={'figure.figsize':(15,10)})\nsns.barplot(x=\"location\",y=\"total_cases\",data=india_japan_china,hue=\"date\")\nplt.show()", "_____no_output_____" ], [ "#Understanding cases of germany and spain\ngermany_spain=covid[(covid[\"location\"] ==\"Germany\") | (covid[\"location\"] ==\"Spain\")]", "_____no_output_____" ], [ "#Plotting growth of cases across Germany and Spain\nsns.set(rc={'figure.figsize':(15,10)})\nsns.barplot(x=\"location\",y=\"total_cases\",data=germany_spain,hue=\"date\")\nplt.show()", "_____no_output_____" ], [ "#Getting latest data\nlast_day_cases=covid[covid[\"date\"]==\"2020-05-24\"]\nlast_day_cases", "_____no_output_____" ], [ "#Sorting data w.r.t total_cases\nmax_cases_country=last_day_cases.sort_values(by=\"total_cases\",ascending=False)\nmax_cases_country", "_____no_output_____" ], [ "#Top 5 countries with maximum cases\nmax_cases_country[1:6]", 
"_____no_output_____" ], [ "#Making bar-plot for countries with top cases\nsns.barplot(x=\"location\",y=\"total_cases\",data=max_cases_country[1:6],hue=\"location\")\nplt.show()", "_____no_output_____" ], [ "india_case.head()", "_____no_output_____" ], [ "#Linear regression\nfrom sklearn.model_selection import train_test_split", "_____no_output_____" ], [ "#converting string date to date-time\nimport datetime as dt\nindia_case['date'] = pd.to_datetime(india_case['date']) \nindia_case.head()", "<ipython-input-38-41c360152603>:3: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n india_case['date'] = pd.to_datetime(india_case['date'])\n" ], [ "india_case.head()", "_____no_output_____" ], [ "#converting date-time to ordinal\nindia_case['date']=india_case['date'].map(dt.datetime.toordinal)\nindia_case.head()", "<ipython-input-40-9eb3bcdebcda>:2: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n india_case['date']=india_case['date'].map(dt.datetime.toordinal)\n" ], [ "#getting dependent variable and inpedent variable\nx=india_case['date']\ny=india_case['total_cases']", "_____no_output_____" ], [ "x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.3)", "_____no_output_____" ], [ "from sklearn.linear_model import LinearRegression", "_____no_output_____" ], [ "lr = LinearRegression()", "_____no_output_____" ], [ "import numpy as np\nlr.fit(np.array(x_train).reshape(-1,1),np.array(y_train).reshape(-1,1))", "_____no_output_____" ], [ "india_case.tail()", "_____no_output_____" ], [ "y_pred=lr.predict(np.array(x_test).reshape(-1,1))", "_____no_output_____" ], [ "from sklearn.metrics import mean_squared_error", "_____no_output_____" ], [ "mean_squared_error(x_test,y_pred)", "_____no_output_____" ], [ "lr.predict(np.array([[737573]]))", "_____no_output_____" ] ] ]
[ "code" ]
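Two small points about the regression cells in the covid notebook above are worth spelling out. First, because the date column was mapped through `toordinal`, the value 737573 passed to `lr.predict` is a proleptic-Gregorian day count rather than a calendar date, and converting it back makes the prediction interpretable. Second, the error is computed as `mean_squared_error(x_test, y_pred)`, which compares the date feature with predicted case counts; the conventional check would be `mean_squared_error(y_test, y_pred)`. A short, standard-library-only sketch of the date conversion:

```python
import datetime as dt

# The model's date feature is an ordinal: days since 0001-01-01 (proleptic Gregorian calendar)
ordinal = 737573
print(dt.date.fromordinal(ordinal))    # 2020-05-28, four days after the last row in the data (2020-05-24)

# Reverse direction, for building new inputs to lr.predict
print(dt.date(2020, 6, 1).toordinal())
```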
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
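One more note on the covid_regression outputs above: the SettingWithCopyWarning messages come from assigning new column values into a slice of the original frame. A common remedy, sketched here under the assumption that the same `mydata.csv` file and column names are available, is to take an explicit copy before mutating:

```python
import pandas as pd
import datetime as dt

covid = pd.read_csv('mydata.csv')

# .copy() makes india_case an independent DataFrame, so the later column
# assignments no longer trigger pandas' SettingWithCopyWarning
india_case = covid[covid['location'] == 'India'].copy()
india_case['date'] = pd.to_datetime(india_case['date'])
india_case['date'] = india_case['date'].map(dt.datetime.toordinal)
```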
e7fe87b8997c7a5ba3e33a7662ae43f4b98323ea
706,504
ipynb
Jupyter Notebook
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
a158886544b9919f3d23fd803c47721f81cf683a
[ "MIT" ]
null
null
null
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
a158886544b9919f3d23fd803c47721f81cf683a
[ "MIT" ]
null
null
null
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
a158886544b9919f3d23fd803c47721f81cf683a
[ "MIT" ]
null
null
null
80.623531
33,336
0.641835
[ [ [ "# CORD19 Analysis", "_____no_output_____" ] ], [ [ "%matplotlib inline", "_____no_output_____" ], [ "# import nltk\n# nltk.download('stopwords')\n# nltk.download('punkt')\n# nltk.download('averaged_perceptron_tagger')", "_____no_output_____" ], [ "import json\nimport yaml\nimport os\nimport nltk\nimport matplotlib.pyplot as plt\nimport re\nimport pandas as pd\nfrom nltk.corpus import stopwords \n#import plotly.graph_objects as go\nimport networkx as nx", "_____no_output_____" ] ], [ [ "# Configurations", "_____no_output_____" ] ], [ [ "# import configurations\nwith open('config.yaml','r') as ymlfile:\n cfg = yaml.load(ymlfile)", "C:\\Users\\david\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:3: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.\n This is separate from the ipykernel package so we can avoid doing imports until\n" ] ], [ [ "## General Functions", "_____no_output_____" ] ], [ [ "def get_papers(path):\n # get list of papers .json from path\n papers = []\n for file_name in os.listdir(path):\n papers.append(file_name)\n return papers\n\ndef extract_authors(authors_list):\n '''\n Function to extract authors metadata from list of authors \n '''\n authors = []\n for curr_author in authors_list:\n author = {}\n author['first'] = curr_author['first']\n author['middle'] = curr_author['middle']\n author['last'] = curr_author['last']\n authors.append(author)\n return authors\n\ndef extract_abstract(abstract):\n # use regex to remove 'word count: 194', get word count ?\n # use regex to remove Text word count: 5168', get text word count ?\n # remove 1,2 digit numbers that don't have text attached ? \n stop_sentences = ['All rights reserved.','No reuse allowed without permission.','Abstract','author/funder']\n abstract_text = ''\n for section in abstract:\n abstract_text = abstract_text + ' ' + section['text']\n abstract_text = abstract_text.strip(\" \")\n for s in stop_sentences:\n abstract_text = abstract_text.replace(s,\"\")\n return abstract_text\n\ndef extract_references(bib_entries):\n refs = []\n for r in bib_entries:\n ref = {}\n ref['id'] = bib_entries[r]['ref_id']\n ref['title'] = bib_entries[r]['title']\n ref['authors'] = bib_entries[r]['authors']\n ref['year'] = bib_entries[r]['year']\n refs.append(ref)\n return refs\n\ndef extract_paper_metadata(paper):\n paper_metadata = {}\n paper_metadata['id'] = paper['paper_id']\n paper_metadata['title'] = paper['metadata']['title']\n paper_metadata['authors'] = extract_authors(paper['authors'])\n paper_metadata['abstract'] = extract_abstract(paper['abstract'])\n paper_metadata['refs'] = extract_references(paper['bib_entries'])\n return paper_metadata\n\ndef get_paper_data(path, paper_id):\n file_path = os.path.join(path, paper_id)\n with open(file_path, 'r') as f:\n paper_info = json.load(f)\n return paper_info", "_____no_output_____" ] ], [ [ "## Objects", "_____no_output_____" ] ], [ [ "class Author:\n def __init__(self, firstname, middlename, lastname):\n self.firstName = firstname\n self.middleName = middlename\n self.lastName = lastname\n \n def __str__(self):\n return '{} {} {}'.format(self.firstName, self.middleName, self.lastName)", "_____no_output_____" ], [ "class Paper:\n def __init__(self, sha, title='', authors=None, date=None):\n self.id = sha\n self.title = title\n self.authors = authors\n self.date = date\n self.url = ''\n \n def __str__(self):\n s = ''\n s += 'Paper ID: {}\\n'.format(self.id)\n s 
+= 'Title: {}\\n'.format(self.title)\n if self.authors:\n s += '# Authors: {}\\n'.format(len(self.authors))\n else:\n s += '# Authors: 0\\n'\n s += 'Date: {}\\n'.format(self.date)\n s += 'URL: {}'.format(self.url)\n return s", "_____no_output_____" ], [ "path = cfg['data-path'] + biorxiv\nprint(path)\n#paper = json.loads(path)", "C:\\Users\\david\\OneDrive\\Bureau\\CORD-19-research-challenge\\\\2020-03-13\\biorxiv_medrxiv\\biorxiv_medrxiv\n" ] ], [ [ "# Metadata", "_____no_output_____" ] ], [ [ "meta = '2020-03-13/all_sources_metadata_2020-03-13.csv'\ndf_meta = pd.read_csv(cfg['data-path'] + meta)\ndf_meta.head()", "_____no_output_____" ], [ "df_meta[df_meta['has_full_text']==True]", "_____no_output_____" ], [ "df_meta.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 29500 entries, 0 to 29499\nData columns (total 14 columns):\nsha 17420 non-null object\nsource_x 29500 non-null object\ntitle 29130 non-null object\ndoi 26357 non-null object\npmcid 27337 non-null object\npubmed_id 16730 non-null float64\nlicense 17692 non-null object\nabstract 26553 non-null object\npublish_time 18248 non-null object\nauthors 28554 non-null object\njournal 17791 non-null object\nMicrosoft Academic Paper ID 1134 non-null float64\nWHO #Covidence 1236 non-null object\nhas_full_text 17420 non-null object\ndtypes: float64(2), object(12)\nmemory usage: 3.2+ MB\n" ], [ "df_meta['source_x'].unique()", "_____no_output_____" ], [ "paper_ids = set(df_meta.iloc[:,0])\npaper_ids.pop()\npaper_ids", "_____no_output_____" ], [ "df_meta[df_meta['source_x']=='biorxiv'][['sha','doi']]", "_____no_output_____" ] ], [ [ "# biorxiv_medrxiv", "_____no_output_____" ] ], [ [ "biorxiv = '\\\\2020-03-13\\\\biorxiv_medrxiv\\\\biorxiv_medrxiv'\npath = cfg['data-path'] + biorxiv\npapers = get_papers(path)\ncnt = 0\n# check if paper are in metadata dataframe\nfor paper in papers:\n if paper[:-5] not in paper_ids:\n print(paper)\n else:\n cnt += 1\nprint('There are {}/{} papers present in the metadataset.'.format(cnt, len(papers)))\nprint('Examples:')\nfor paper in papers[:5]:\n print(paper)", "There are 803/803 papers present in the metadataset.\nExamples:\n0015023cc06b5362d332b3baf348d11567ca2fbb.json\n004f0f8bb66cf446678dc13cf2701feec4f36d76.json\n00d16927588fb04d4be0e6b269fc02f0d3c2aa7b.json\n013d9d1cba8a54d5d3718c229b812d7cf91b6c89.json\n01d162d7fae6aaba8e6e60e563ef4c2fca7b0e18.json\n" ], [ "paper_info = get_paper_data(cfg['data-path'] + biorxiv, papers[10])", "_____no_output_____" ], [ "paper_info", "_____no_output_____" ], [ "extract_paper_metadata(paper_info)", "_____no_output_____" ], [ "df_meta[df_meta['sha']==paper_info['id']]", "_____no_output_____" ] ], [ [ "# pmc_custom_license", "_____no_output_____" ] ], [ [ "pmc = '2020-03-13\\pmc_custom_license\\pmc_custom_license'\npath = cfg['data-path'] + pmc\npmc_papers = get_papers(path)\npmc_papers[:5]", "_____no_output_____" ], [ "cnt = 0\n# check if paper are in metadata dataframe\nfor paper in pmc_papers:\n if paper[:-5] not in paper_ids:\n print(paper)\n else:\n cnt += 1\nprint('There are {}/{} papers present in the metadataset.'.format(cnt, len(pmc_papers)))", "There are 1426/1426 papers present in the metadataset.\n" ], [ "paper_info = get_paper_data(path, pmc_papers[10])", "_____no_output_____" ], [ "paper_info", "_____no_output_____" ], [ "df_meta[df_meta['sha']=='0036e8891c93ae63611bde179ada1e03e8577dea']", "_____no_output_____" ] ], [ [ "# comm_use_subset", "_____no_output_____" ], [ "# noncomm_use_subset", "_____no_output_____" ] ], [ [ "# extract data from all 
papers\nall_papers_data = []\nfor paper_name in papers:\n file_path = os.path.join(path,paper_name)\n with open(file_path, 'r') as f:\n paper_info = extract_paper_metadata(json.load(f))\n all_papers_data.append(paper_info)", "_____no_output_____" ], [ "for i in range(10):\n print('- {}'.format(all_papers_data[i]['title']))", "- The RNA pseudoknots in foot-and-mouth disease virus are dispensable for genome replication but essential for the production of infectious virus. 2 3\n- Healthcare-resource-adjusted vulnerabilities towards the 2019-nCoV epidemic across China\n- Real-time, MinION-based, amplicon sequencing for lineage typing of infectious bronchitis virus from upper respiratory samples\n- Assessing spread risk of Wuhan novel coronavirus within and beyond China, January-April 2020: a travel network-based modelling study\n- TWIRLS, an automated topic-wise inference method based on massive literature, suggests a possible mechanism via ACE2 for the pathological changes in the human host after coronavirus infection\n- Title: Viruses are a dominant driver of protein adaptation in mammals\n- The impact of regular school closure on seasonal influenza epidemics: a data-driven spatial transmission model for Belgium\n- Carbon Nanocarriers Deliver siRNA to Intact Plant Cells for Efficient Gene\n- Protective Population Behavior Change in Outbreaks of Emerging Infectious Disease 1 2\n- A hidden gene in astroviruses encodes a cell-permeabilizing protein involved in virus release\n" ], [ "# get json data of current paper\nfile_path = os.path.join(path,papers[0])\nwith open(file_path, 'r') as f:\n paper = extract_paper_metadata(json.load(f))\nprint(paper['id'])", "0015023cc06b5362d332b3baf348d11567ca2fbb\n" ] ], [ [ "# Authors", "_____no_output_____" ] ], [ [ "def are_equal(author1, author2):\n if (author1['first'][0] == author2['first'][0]) and (author1['mid'] == author2['mid']) and (author1['last'] == author2['last']):\n return True", "_____no_output_____" ], [ "class Author:\n def __init__(self, firstname, middlename, lastname):\n self.firstName = firstname\n self.middleName = middlename\n self.lastName = lastname\n self.papers = []\n \n def __str__(self):\n return '{} {} {}'.format(self.firstName, self.middleName, self.lastName)", "_____no_output_____" ], [ "authors = []", "_____no_output_____" ] ], [ [ "### Co-Authors", "_____no_output_____" ] ], [ [ "from itertools import combinations", "_____no_output_____" ], [ "co_authors_net = nx.Graph()", "_____no_output_____" ], [ "# for each paper\nfor i in range(len(all_papers_data)):\n # get list of authors\n co_authors = []\n for author in all_papers_data[i]['authors']:\n author_full_name = ''\n \n # only keep authors with first and last names\n if author['first'] and author['last']:\n author_full_name += author['first']\n for initial in author['middle']:\n author_full_name += ' ' + initial\n author_full_name += ' ' + author['last']\n author_full_name.strip(' ')\n co_authors.append(author_full_name)\n #print(co_authors)\n for combo in combinations(co_authors,2):\n co_authors_net.add_edge(combo[0],combo[1])\n #print('-'*60)\n", "_____no_output_____" ], [ "for i in combinations([1,2,3],2):\n print(i)", "(1, 2)\n(1, 3)\n(2, 3)\n" ], [ "nx.draw(co_authors_net, node_color='blue',node_size=10)\nplt.savefig(\"graph.png\", dpi=1000)", "C:\\Users\\david\\Anaconda3\\lib\\site-packages\\networkx\\drawing\\nx_pylab.py:611: MatplotlibDeprecationWarning: isinstance(..., numbers.Number)\n if cb.is_numlike(alpha):\n" ] ], [ [ "### Reference Authors", 
"_____no_output_____" ] ], [ [ "for i in range(3):\n for author in all_papers_data[i]['authors']:\n print(author)\n # referenced authors\n for ref in all_papers_data[i]['refs']:\n for author in ref['authors']:\n print(author)\n print('-'*60)", "{'first': 'Joseph', 'middle': ['C'], 'last': 'Ward'}\n{'first': 'Lidia', 'middle': [], 'last': 'Lasecka-Dykes'}\n{'first': 'Chris', 'middle': [], 'last': 'Neil'}\n{'first': 'Oluwapelumi', 'middle': [], 'last': 'Adeyemi'}\n{'first': 'Sarah', 'middle': [], 'last': ''}\n{'first': '', 'middle': [], 'last': 'Gold'}\n{'first': 'Niall', 'middle': [], 'last': 'Mclean'}\n{'first': 'Caroline', 'middle': [], 'last': 'Wright'}\n{'first': 'Morgan', 'middle': ['R'], 'last': 'Herod'}\n{'first': 'David', 'middle': [], 'last': 'Kealy'}\n{'first': 'Emma', 'middle': [], 'last': ''}\n{'first': 'Warner', 'middle': [], 'last': ''}\n{'first': 'Donald', 'middle': ['P'], 'last': 'King'}\n{'first': 'Tobias', 'middle': ['J'], 'last': 'Tuthill'}\n{'first': 'David', 'middle': ['J'], 'last': 'Rowlands'}\n{'first': 'Nicola', 'middle': ['J'], 'last': ''}\n{'first': 'Stonehouse', 'middle': [], 'last': 'A#'}\n{'first': 'T', 'middle': [], 'last': 'Jackson', 'suffix': ''}\n{'first': 'T', 'middle': ['J'], 'last': 'Tuthill', 'suffix': ''}\n{'first': 'D', 'middle': ['J'], 'last': 'Rowlands', 'suffix': ''}\n{'first': 'N', 'middle': ['J'], 'last': 'Stonehouse', 'suffix': ''}\n{'first': 'N', 'middle': ['D'], 'last': 'Sanderson', 'suffix': ''}\n{'first': 'N', 'middle': ['J'], 'last': 'Knowles', 'suffix': ''}\n{'first': 'D', 'middle': ['P'], 'last': 'King', 'suffix': ''}\n{'first': 'E', 'middle': ['M'], 'last': 'Cottam', 'suffix': ''}\n{'first': 'A', 'middle': [], 'last': 'Acevedo', 'suffix': ''}\n{'first': 'R', 'middle': [], 'last': 'Andino', 'suffix': ''}\n{'first': 'Y', 'middle': [], 'last': 'Peng', 'suffix': ''}\n{'first': 'Hcm', 'middle': [], 'last': 'Leung', 'suffix': ''}\n{'first': 'S', 'middle': ['M'], 'last': 'Yiu', 'suffix': ''}\n{'first': 'Fyl', 'middle': [], 'last': 'Chin', 'suffix': ''}\n{'first': 'S', 'middle': ['F'], 'last': 'Altschul', 'suffix': ''}\n{'first': 'W', 'middle': [], 'last': 'Gish', 'suffix': ''}\n{'first': 'W', 'middle': [], 'last': 'Miller', 'suffix': ''}\n{'first': 'E', 'middle': ['W'], 'last': 'Myers', 'suffix': ''}\n{'first': 'D', 'middle': ['J'], 'last': 'Lipman', 'suffix': ''}\n{'first': 'E', 'middle': [], 'last': 'Rieder', 'suffix': ''}\n{'first': 'T', 'middle': [], 'last': 'Bunch', 'suffix': ''}\n{'first': 'F', 'middle': [], 'last': 'Brown', 'suffix': ''}\n{'first': 'P', 'middle': ['W'], 'last': 'Mason', 'suffix': ''}\n{'first': 'N', 'middle': ['J'], 'last': 'Stonehouse', 'suffix': ''}\n{'first': 'L', 'middle': [], 'last': 'Martin', 'suffix': ''}\n{'first': 'G', 'middle': [], 'last': 'Duke', 'suffix': ''}\n{'first': 'J', 'middle': [], 'last': 'Osorio', 'suffix': ''}\n{'first': 'D', 'middle': [], 'last': 'Hall', 'suffix': ''}\n{'first': 'A', 'middle': [], 'last': 'Palmenberg', 'suffix': ''}\n{'first': '3d', 'middle': [], 'last': 'Wt', 'suffix': ''}\n------------------------------------------------------------\n{'first': 'Hanchu', 'middle': [], 'last': 'Zhou'}\n{'first': 'Jiannan', 'middle': [], 'last': 'Yang'}\n{'first': 'Kaicheng', 'middle': [], 'last': 'Tang'}\n{'first': '†', 'middle': [], 'last': ''}\n{'first': 'Qingpeng', 'middle': [], 'last': 'Zhang'}\n{'first': 'Zhidong', 'middle': [], 'last': 'Cao'}\n{'first': 'Dirk', 'middle': [], 'last': 'Pfeiffer'}\n{'first': 'Daniel', 'middle': ['Dajun'], 'last': 'Zeng'}\n{'first': 'C', 'middle': [], 'last': 
'Wang', 'suffix': ''}\n{'first': 'P', 'middle': ['W'], 'last': 'Horby', 'suffix': ''}\n{'first': 'F', 'middle': ['G'], 'last': 'Hayden', 'suffix': ''}\n{'first': 'G', 'middle': ['F'], 'last': 'Gao', 'suffix': ''}\n------------------------------------------------------------\n{'first': 'Salman', 'middle': ['L'], 'last': 'Butt'}\n{'first': 'Eric', 'middle': ['C'], 'last': 'Erwood'}\n{'first': 'Jian', 'middle': [], 'last': 'Zhang'}\n{'first': 'Holly', 'middle': ['S'], 'last': 'Sellers'}\n{'first': 'Kelsey', 'middle': [], 'last': 'Young'}\n{'first': 'Kevin', 'middle': ['K'], 'last': 'Lahmers'}\n{'first': 'James', 'middle': ['B'], 'last': 'Stanton'}\n{'first': 'S', 'middle': ['H'], 'last': 'Abro', 'suffix': ''}\n{'first': 'Y', 'middle': ['A'], 'last': 'Bochkov', 'suffix': ''}\n{'first': 'S', 'middle': ['L'], 'last': 'Butt', 'suffix': ''}\n{'first': 'S', 'middle': ['A'], 'last': 'Callison', 'suffix': ''}\n{'first': 'P', 'middle': [], 'last': 'De Herdt', 'suffix': ''}\n{'first': 'S', 'middle': [], 'last': 'Escutenaire', 'suffix': ''}\n{'first': '', 'middle': [], 'last': 'Sybr', 'suffix': ''}\n{'first': 'H', 'middle': [], 'last': 'Ferreira', 'suffix': ''}\n{'first': 'T', 'middle': [], 'last': 'Hodgson', 'suffix': ''}\n{'first': 'M', 'middle': ['W'], 'last': 'Jackwood', 'suffix': ''}\n{'first': 'M', 'middle': ['W'], 'last': 'Jackwood', 'suffix': ''}\n{'first': 'M', 'middle': ['W'], 'last': 'Jackwood', 'suffix': ''}\n{'first': 'M', 'middle': [], 'last': 'Jain', 'suffix': ''}\n{'first': 'M', 'middle': ['A'], 'last': 'Johnson', 'suffix': ''}\n{'first': 'N', 'middle': ['M'], 'last': 'Kamble', 'suffix': ''}\n{'first': 'C', 'middle': ['L'], 'last': 'Keeler', 'suffix': ''}\n{'first': 'Jr', 'middle': [], 'last': '', 'suffix': ''}\n{'first': 'D', 'middle': [], 'last': 'Kim', 'suffix': ''}\n{'first': 'B', 'middle': [], 'last': 'Kingham', 'suffix': ''}\n{'first': 'E', 'middle': ['T'], 'last': 'Mckinley', 'suffix': ''}\n{'first': 'M', 'middle': ['M'], 'last': 'Naguib', 'suffix': ''}\n{'first': 'C', 'middle': ['H'], 'last': 'Okino', 'suffix': ''}\n{'first': 'T', 'middle': [], 'last': 'Pohuang', 'suffix': ''}\n{'first': 'J', 'middle': [], 'last': 'Quick', 'suffix': ''}\n{'first': 'J', 'middle': [], 'last': 'Quick', 'suffix': ''}\n{'first': 'H-J', 'middle': [], 'last': 'Roh', 'suffix': ''}\n{'first': 'P', 'middle': ['D'], 'last': 'Schloss', 'suffix': ''}\n{'first': 'W', 'middle': [], 'last': 'Spaan', 'suffix': ''}\n{'first': 'S', 'middle': ['J'], 'last': 'Spatz', 'suffix': ''}\n{'first': 'T', 'middle': [], 'last': 'Stenzel', 'suffix': ''}\n{'first': 'S', 'middle': [], 'last': 'Sutou', 'suffix': ''}\n{'first': 'K', 'middle': [], 'last': 'Tamura', 'suffix': ''}\n{'first': 'Z', 'middle': [], 'last': 'Tarnagda', 'suffix': ''}\n{'first': 'V', 'middle': [], 'last': 'Valastro', 'suffix': ''}\n{'first': 'C-H', 'middle': [], 'last': 'Wang', 'suffix': ''}\n{'first': 'J', 'middle': [], 'last': 'Wang', 'suffix': ''}\n{'first': 'S', 'middle': [], 'last': 'Wei', 'suffix': ''}\n{'first': 'I', 'middle': ['A'], 'last': 'Wickramasinghe', 'suffix': ''}\n{'first': 'A', 'middle': ['K'], 'last': 'Williams', 'suffix': ''}\n{'first': 'D', 'middle': ['E'], 'last': 'Wood', 'suffix': ''}\n{'first': 'J', 'middle': [], 'last': 'Ye', 'suffix': ''}\n------------------------------------------------------------\n" ] ], [ [ "## Extracting Key Words", "_____no_output_____" ] ], [ [ "paper_json['body_text']", "_____no_output_____" ], [ "stop_sentences = ['All rights reserved.','No reuse allowed without permission.','Abstract','author/funder','The 
copyright holder for this preprint (which was not peer-reviewed) is the']\nabstract_text = extract_abstract(paper_json['abstract'])\nbody_text = ''\nfor t in paper_json['body_text']:\n body_text += ' ' + t['text']\ntotal_text = abstract_text + ' ' + body_text\n\nfor s in stop_sentences:\n total_text = total_text.replace(s,\"\")\n\nprint(total_text)", "word count: 194 22 Text word count: 5168 23 24 25 . 27 The positive stranded RNA genomes of picornaviruses comprise a single large open reading 28 frame flanked by 5′ and 3′ untranslated regions (UTRs). Foot-and-mouth disease virus (FMDV) 29 has an unusually large 5′ UTR (1.3 kb) containing five structural domains. These include the 30 internal ribosome entry site (IRES), which facilitates initiation of translation, and the cis-acting 31 replication element (cre). Less well characterised structures are a 5′ terminal 360 nucleotide 32 stem-loop, a variable length poly-C-tract of approximately 100-200 nucleotides and a series of 33 two to four tandemly repeated pseudoknots (PKs). We investigated the structures of the PKs 34 by selective 2′ hydroxyl acetylation analysed by primer extension (SHAPE) analysis and 35 determined their contribution to genome replication by mutation and deletion experiments. 36 SHAPE and mutation experiments confirmed the importance of the previously predicted PK 37 structures for their function. Deletion experiments showed that although PKs are not essential 38 for replication, they provide genomes with a competitive advantage. However, although 39 replicons and full-length genomes lacking all PKs were replication competent, no infectious 40 virus was rescued from genomes containing less than one PK copy. This is consistent with our 41 earlier report describing the presence of putative packaging signals in the PK region. 42 43 . VP3, and VP0 (which is further processed to VP2 and VP4 during virus assembly) (6). The P2 64 and P3 regions encode the non-structural proteins 2B and 2C and 3A, 3B (1-3) (VPg), 3C pro and 4 structural protein-coding region is replaced by reporter genes, allow the study of genome 68 replication without the requirement for high containment (9, 10) ( figure 1A ). The FMDV 5′ UTR is the largest known picornavirus UTR, comprising approximately 1300 71 nucleotides and containing several highly structured regions. The first 360 nucleotides at the 5′ 72 end are predicted to fold into a single large stem loop termed the S-fragment, followed by a The PKs were originally predicted in 1987 and consist of two to four tandem repeats of a ~48 86 nucleotide region containing a small stem loop and downstream interaction site (figure 1B) 87 (12). Due to the sequence similarity between the PKs (figure 1C), it is speculated that they 88 were formed by duplication events during viral replication, probably involving recombination. 89 Between two and four PKs are present in different virus isolates but no strain has been 90 identified with less than two PKs, emphasising their potential importance in the viral life cycle 91 (19, 20) . The presence of PKs has been reported in the 5′ UTR of other picornaviruses such as 92 . can occur in the absence of PKs at least one is required for wild-type (wt) replication. 104 Furthermore, competition experiments showed that extra copies of PKs conferred a replicative 105 advantage to genomes. Although replicons and full-length genomes lacking PKs were 106 replication-competent, no infectious virus was rescued from genomes containing less than one 107 PK copy. 
This is consistent with our earlier report describing the presence of putative 108 packaging signals in the PK region (22). 109 110 . Plasmid construction. 117 The FMDV replicon plasmids, pRep-ptGFP, and the replication-defective polymerase mutant 118 control, 3D-GNN, have already been described (10). To introduce mutations into the PK region, the pRep-ptGFP replicon plasmid was digested 121 with SpeI and KpnI and the resulting fragment inserted into a sub-cloning vector (pBluescript) 122 to create the pBluescript PK. PKs 3 and 4 were removed by digestion with HindIII and AatII 123 before insertion of a synthetic DNA sequence with PK 3 and 4 deleted. PKs 2, 3 and 4 were 124 deleted by PCR amplification using ΔPK 234 Forward primer and FMDV 1331-1311 reverse 125 primer, the resultant product was digested with HindIII and AatII and ligated into the 126 pBluescript PK vector. Complete PK deletion was achieved by introduction of an AflII site at 127 the 3′ end of the poly-C tract by PCR mutagenesis to create the sub-cloning vector, pBluescript 128 C11, which was then used to remove all the PKs by PCR mutagenesis using ΔPK 1234 forward 129 primer and FMDV 1331-1311 reverse primer. The modified PK sequences were removed from 130 the sub-cloning vectors and inserted into the pRep-ptGFP plasmid using NheI-HF and KpnI-131 HF. 132 133 . . https://doi.org/10.1101/2020.01.10.901801 doi: bioRxiv preprint 7 Mutations to disrupt and reform PK structure were introduced using synthetic DNA by 134 digestion with AflII and AatII and ligation into a similarly digested pBluescript PK vector. Mutations were then introduced into the replicon plasmid as described above. To assess the effects of truncation of the poly-C-tract on replication the entire sequence was 137 removed. This was performed by PCR mutagenesis using primers C0 SpeI, and FMDV 1331- In vitro transcription. 143 In vitro transcription reactions for replicon assays were performed as described previously (28). Transcription reactions to produce large amounts of RNA for SHAPE analysis were performed 145 with purified linear DNA as described above, and 1 μg of linearised DNA was then used in a 146 HiScribe T7 synthesis kit (NEB), before DNase treatment and purification using a PureLink FastQ files were quality checked using FastQC with poor quality reads filtered using the 225 Sickle algorithm. Host cell reads were removed using FastQ Screen algorithm and FMDV 226 reads assembled de novo into contigs using IDBA-UD (35). Contigs that matched the FMDV 227 library (identified using Basic Local ALighnment Search Tool (BLAST)) were assembled 228 . . https://doi.org/10.1101/2020.01.10.901801 doi: bioRxiv preprint into consensus sequences using SeqMan Pro software in the DNA STAR Lasergene 13 229 package (DNA STAR) (36). The SHAPE data largely agreed with the predicted structures with the stems of PK 1, 2 and 3, interacting nucleotides showed little to no reactivity, suggesting NMIA could not interact with 300 . . https://doi.org/10.1101/2020.01.10.901801 doi: bioRxiv preprint 14 these nucleotides either due to the predicted base pairing or steric hindrance (figure 2B). The NMIA reactivity for the interacting nucleotides in the stem-loops with downstream residues of 302 PK 1, 2 and 3 again largely agreed with the predicted structure, although the SHAPE data 303 suggests that there might be fewer interactions than previously predicted. However, differences 304 here could be due to heterogeneity in the formation of PKs in this experiment. 
The evidence 305 for loop-downstream interaction was weaker for PK4. . https://doi.org/10.1101/2020.01.10.901801 doi: bioRxiv preprint orientation. 351 Since removal of all four PKs resulted in a significant decrease in replication, the minimal 352 requirements to maintain wt levels of replication were investigated. As near wt level of 353 replication was observed when only one PK was present, all further mutagenesis was 354 performed in a C11 replicon plasmid containing only PK 1. In addition, the orientation of PK 1 was reversed by \"flipping\" the nucleotide sequence to 367 potentially facilitate hybridisation of the loop with upstream rather than downstream sequences. Changing the orientation of the PK reduced replicon replication to a similar level seen in the replication decreased until at passage three there is a 2.5 fold reduction compared to that of 398 . . https://doi.org/10.1101/2020.01.10.901801 doi: bioRxiv preprint passage 0 (figure 5B). Therefore, it appears that replicons with a single PK are at a competitive 399 disadvantage compared to those with two or more. . https://doi.org/10.1101/2020.01.10.901801 doi: bioRxiv preprint 20 of infectious virus despite being able to replicate after transfection into cells, is consistent with 448 a requirement for RNA structure within the PK region being required for virus assembly. The 5′ UTR of FMDV is unique amongst picornaviruses due to its large size and the presence 454 of multiple RNA elements, some of which still have unknown function. One of these features 455 is a series of repeated PKs varying in number from 2-4, depending on virus strain. In this study, 456 we sequentially deleted or mutated the PKs to help understand their role in the viral life cycle. 457 We also confirmed the predicted PK structures by SHAPE mapping, although there may be Although all viruses isolated to date contain at least two PKs, replicons or viruses containing a 464 single PK were still replication competent. However, replicons with more than a single PK 465 were found to have a competitive advantage over replicons with a single PK when sequentially 466 passaged. Replicons lacking all PKs displayed poor passaging potential even when co-467 transfected with yeast tRNA, reinforcing the observation of a significant impact in replication. Moreover, viruses recovered from genomes with reduced numbers of PKs were slower growing 469 and produced smaller plaques. In addition, these differences were more pronounced in more PKs is functionally competent as no differences was seen between replicons congaing a single 472 . . https://doi.org/10.1101/2020.01.10.901801 doi: bioRxiv preprint 21 copy of PK1 or PK4. This observation is consistent with a previous report of deletion of PK1, 473 along with the Poly-C-tract, with no adverse effect in viral replication (37). This also supports 474 our findings that the truncation of the Poly-C-tract to create the C11 construct had no effect on 475 replicon replication in the cell lines tested. As has been described with Mengo virus, it is 476 possible that the role of the poly-C-tract is essential in other aspects of the viral lifecycle which 477 cannot be recapitulated in a standard tissue culture system (39). The presence of at least two PKs in all viral isolates sequenced so far suggests that multiple 480 PKs confer a competitive advantage in replication. 
Here we showed by sequential passage that 481 replicons containing at least two PKs were maintained at a level similar to wt, but replicons 482 containing only one PK showed a persistent decline. It is unclear why some viral isolates 483 contain two, three or four PKs is still unknown, but this may be stochastic variation or may 484 reflect subtle effects of host range or geographical localisation. . https://doi.org/10.1101/2020.01.10.901801 doi: bioRxiv preprint . . https://doi.org/10.1101/2020.01.10.901801 doi: bioRxiv preprint . https://doi.org/10.1101/2020.01.10.901801 doi: bioRxiv preprint Significance is shown comparing the replication of C11 PK disrupt and C11 PK restore (Aii). Significance shown is compared to wt replicon. Error bars are calculated by SEM, n = 3, * P 673 < 0.05, **** P < 0.0001. 674 . . https://doi.org/10.1101/2020.01.10.901801 doi: bioRxiv preprint 33 675 . . https://doi.org/10.1101/2020.01.10.901801 doi: bioRxiv preprint \n" ], [ "stop_words = set(stopwords.words('english'))", "_____no_output_____" ], [ "punctuation = [',','.',';',':','(',')','′','~']", "_____no_output_____" ], [ "#only keep nouns ...\nother_words = set(['two','one','three','bioRxiv','furthermore','word','count','text'])\nall_words = []\nfor word in total_text.split(\" \"):\n for p in punctuation:\n word = word.replace(p,\"\")\n word = word.strip(\" \")\n try:\n int(word)\n except:\n if (not word.lower() in stop_words) and (word) and (word[:4] != 'http') and (not word.lower() in other_words):\n print(word)\n all_words.append(word)", "positive\nstranded\nRNA\ngenomes\npicornaviruses\ncomprise\nsingle\nlarge\nopen\nreading\nframe\nflanked\nuntranslated\nregions\nUTRs\nFoot-and-mouth\ndisease\nvirus\nFMDV\nunusually\nlarge\nUTR\nkb\ncontaining\nfive\nstructural\ndomains\ninclude\ninternal\nribosome\nentry\nsite\nIRES\nfacilitates\ninitiation\ntranslation\ncis-acting\nreplication\nelement\ncre\nLess\nwell\ncharacterised\nstructures\nterminal\nnucleotide\nstem-loop\nvariable\nlength\npoly-C-tract\napproximately\n100-200\nnucleotides\nseries\nfour\ntandemly\nrepeated\npseudoknots\nPKs\ninvestigated\nstructures\nPKs\nselective\nhydroxyl\nacetylation\nanalysed\nprimer\nextension\nSHAPE\nanalysis\ndetermined\ncontribution\ngenome\nreplication\nmutation\ndeletion\nexperiments\nSHAPE\nmutation\nexperiments\nconfirmed\nimportance\npreviously\npredicted\nPK\nstructures\nfunction\nDeletion\nexperiments\nshowed\nalthough\nPKs\nessential\nreplication\nprovide\ngenomes\ncompetitive\nadvantage\nHowever\nalthough\nreplicons\nfull-length\ngenomes\nlacking\nPKs\nreplication\ncompetent\ninfectious\nvirus\nrescued\ngenomes\ncontaining\nless\nPK\ncopy\nconsistent\nearlier\nreport\ndescribing\npresence\nputative\npackaging\nsignals\nPK\nregion\nVP3\nVP0\nprocessed\nVP2\nVP4\nvirus\nassembly\nP2\nP3\nregions\nencode\nnon-structural\nproteins\n2B\n2C\n3A\n3B\n1-3\nVPg\n3C\npro\nstructural\nprotein-coding\nregion\nreplaced\nreporter\ngenes\nallow\nstudy\ngenome\nreplication\nwithout\nrequirement\nhigh\ncontainment\nfigure\n1A\nFMDV\nUTR\nlargest\nknown\npicornavirus\nUTR\ncomprising\napproximately\nnucleotides\ncontaining\nseveral\nhighly\nstructured\nregions\nfirst\nnucleotides\nend\npredicted\nfold\nsingle\nlarge\nstem\nloop\ntermed\nS-fragment\nfollowed\nPKs\noriginally\npredicted\nconsist\nfour\ntandem\nrepeats\nnucleotide\nregion\ncontaining\nsmall\nstem\nloop\ndownstream\ninteraction\nsite\nfigure\n1B\nDue\nsequence\nsimilarity\nPKs\nfigure\n1C\nspeculated\nformed\nduplication\nevents\nviral\nreplication\nprobably\ninv
olving\nrecombination\nfour\nPKs\npresent\ndifferent\nvirus\nisolates\nstrain\nidentified\nless\nPKs\nemphasising\npotential\nimportance\nviral\nlife\ncycle\npresence\nPKs\nreported\nUTR\npicornaviruses\noccur\nabsence\nPKs\nleast\nrequired\nwild-type\nwt\nreplication\ncompetition\nexperiments\nshowed\nextra\ncopies\nPKs\nconferred\nreplicative\nadvantage\ngenomes\nAlthough\nreplicons\nfull-length\ngenomes\nlacking\nPKs\nreplication-competent\ninfectious\nvirus\nrescued\ngenomes\ncontaining\nless\nPK\ncopy\nconsistent\nearlier\nreport\ndescribing\npresence\nputative\npackaging\nsignals\nPK\nregion\nPlasmid\nconstruction\nFMDV\nreplicon\nplasmids\npRep-ptGFP\nreplication-defective\npolymerase\nmutant\ncontrol\n3D-GNN\nalready\ndescribed\nintroduce\nmutations\nPK\nregion\npRep-ptGFP\nreplicon\nplasmid\ndigested\nSpeI\nKpnI\nresulting\nfragment\ninserted\nsub-cloning\nvector\npBluescript\ncreate\npBluescript\nPK\nPKs\nremoved\ndigestion\nHindIII\nAatII\ninsertion\nsynthetic\nDNA\nsequence\nPK\ndeleted\nPKs\ndeleted\nPCR\namplification\nusing\nΔPK\nForward\nprimer\nFMDV\n1331-1311\nreverse\nprimer\nresultant\nproduct\ndigested\nHindIII\nAatII\nligated\npBluescript\nPK\nvector\nComplete\nPK\ndeletion\nachieved\nintroduction\nAflII\nsite\nend\npoly-C\ntract\nPCR\nmutagenesis\ncreate\nsub-cloning\nvector\npBluescript\nC11\nused\nremove\nPKs\nPCR\nmutagenesis\nusing\nΔPK\nforward\nprimer\nFMDV\n1331-1311\nreverse\nprimer\nmodified\nPK\nsequences\nremoved\nsub-cloning\nvectors\ninserted\npRep-ptGFP\nplasmid\nusing\nNheI-HF\nKpnI-131\nHF\ndoi\nbioRxiv\npreprint\nMutations\ndisrupt\nreform\nPK\nstructure\nintroduced\nusing\nsynthetic\nDNA\ndigestion\nAflII\nAatII\nligation\nsimilarly\ndigested\npBluescript\nPK\nvector\nMutations\nintroduced\nreplicon\nplasmid\ndescribed\nassess\neffects\ntruncation\npoly-C-tract\nreplication\nentire\nsequence\nremoved\nperformed\nPCR\nmutagenesis\nusing\nprimers\nC0\nSpeI\nFMDV\n1331-\nvitro\ntranscription\nvitro\ntranscription\nreactions\nreplicon\nassays\nperformed\ndescribed\npreviously\nTranscription\nreactions\nproduce\nlarge\namounts\nRNA\nSHAPE\nanalysis\nperformed\npurified\nlinear\nDNA\ndescribed\nμg\nlinearised\nDNA\nused\nHiScribe\nT7\nsynthesis\nkit\nNEB\nDNase\ntreatment\npurification\nusing\nPureLink\nFastQ\nfiles\nquality\nchecked\nusing\nFastQC\npoor\nquality\nreads\nfiltered\nusing\nSickle\nalgorithm\nHost\ncell\nreads\nremoved\nusing\nFastQ\nScreen\nalgorithm\nFMDV\nreads\nassembled\nde\nnovo\ncontigs\nusing\nIDBA-UD\nContigs\nmatched\nFMDV\nlibrary\nidentified\nusing\nBasic\nLocal\nALighnment\nSearch\nTool\nBLAST\nassembled\ndoi\nbioRxiv\npreprint\nconsensus\nsequences\nusing\nSeqMan\nPro\nsoftware\nDNA\nSTAR\nLasergene\npackage\nDNA\nSTAR\nSHAPE\ndata\nlargely\nagreed\npredicted\nstructures\nstems\nPK\ninteracting\nnucleotides\nshowed\nlittle\nreactivity\nsuggesting\nNMIA\ncould\ninteract\ndoi\nbioRxiv\npreprint\nnucleotides\neither\ndue\npredicted\nbase\npairing\nsteric\nhindrance\nfigure\n2B\nNMIA\nreactivity\ninteracting\nnucleotides\nstem-loops\ndownstream\nresidues\nPK\nlargely\nagreed\npredicted\nstructure\nalthough\nSHAPE\ndata\nsuggests\nmight\nfewer\ninteractions\npreviously\npredicted\nHowever\ndifferences\ncould\ndue\nheterogeneity\nformation\nPKs\nexperiment\nevidence\nloop-downstream\ninteraction\nweaker\nPK4\ndoi\nbioRxiv\npreprint\norientation\nSince\nremoval\nfour\nPKs\nresulted\nsignificant\ndecrease\nreplication\nminimal\nrequirements\nmaintain\nwt\nlevels\nreplication\ninvestigated\nnear\nwt\nlevel\nreplication\nobserved\nPK\npre
sent\nmutagenesis\nperformed\nC11\nreplicon\nplasmid\ncontaining\nPK\naddition\norientation\nPK\nreversed\n\"flipping\"\nnucleotide\nsequence\npotentially\nfacilitate\nhybridisation\nloop\nupstream\nrather\ndownstream\nsequences\nChanging\norientation\nPK\nreduced\nreplicon\nreplication\nsimilar\nlevel\nseen\nreplication\ndecreased\npassage\nfold\nreduction\ncompared\ndoi\nbioRxiv\npreprint\npassage\nfigure\n5B\nTherefore\nappears\nreplicons\nsingle\nPK\ncompetitive\ndisadvantage\ncompared\ndoi\nbioRxiv\npreprint\ninfectious\nvirus\ndespite\nable\nreplicate\ntransfection\ncells\nconsistent\nrequirement\nRNA\nstructure\nwithin\nPK\nregion\nrequired\nvirus\nassembly\nUTR\nFMDV\nunique\namongst\npicornaviruses\ndue\nlarge\nsize\npresence\nmultiple\nRNA\nelements\nstill\nunknown\nfunction\nfeatures\nseries\nrepeated\nPKs\nvarying\nnumber\n2-4\ndepending\nvirus\nstrain\nstudy\nsequentially\ndeleted\nmutated\nPKs\nhelp\nunderstand\nrole\nviral\nlife\ncycle\nalso\nconfirmed\npredicted\nPK\nstructures\nSHAPE\nmapping\nalthough\nmay\nAlthough\nviruses\nisolated\ndate\ncontain\nleast\nPKs\nreplicons\nviruses\ncontaining\nsingle\nPK\nstill\nreplication\ncompetent\nHowever\nreplicons\nsingle\nPK\nfound\ncompetitive\nadvantage\nreplicons\nsingle\nPK\nsequentially\npassaged\nReplicons\nlacking\nPKs\ndisplayed\npoor\npassaging\npotential\neven\nco-467\ntransfected\nyeast\ntRNA\nreinforcing\nobservation\nsignificant\nimpact\nreplication\nMoreover\nviruses\nrecovered\ngenomes\nreduced\nnumbers\nPKs\nslower\ngrowing\nproduced\nsmaller\nplaques\naddition\ndifferences\npronounced\nPKs\nfunctionally\ncompetent\ndifferences\nseen\nreplicons\ncongaing\nsingle\ndoi\nbioRxiv\npreprint\ncopy\nPK1\nPK4\nobservation\nconsistent\nprevious\nreport\ndeletion\nPK1\nalong\nPoly-C-tract\nadverse\neffect\nviral\nreplication\nalso\nsupports\nfindings\ntruncation\nPoly-C-tract\ncreate\nC11\nconstruct\neffect\nreplicon\nreplication\ncell\nlines\ntested\ndescribed\nMengo\nvirus\npossible\nrole\npoly-C-tract\nessential\naspects\nviral\nlifecycle\ncannot\nrecapitulated\nstandard\ntissue\nculture\nsystem\npresence\nleast\nPKs\nviral\nisolates\nsequenced\nfar\nsuggests\nmultiple\nPKs\nconfer\ncompetitive\nadvantage\nreplication\nshowed\nsequential\npassage\nreplicons\ncontaining\nleast\nPKs\nmaintained\nlevel\nsimilar\nwt\nreplicons\ncontaining\nPK\nshowed\npersistent\ndecline\nunclear\nviral\nisolates\ncontain\nfour\nPKs\nstill\nunknown\nmay\nstochastic\nvariation\nmay\nreflect\nsubtle\neffects\nhost\nrange\ngeographical\nlocalisation\ndoi\nbioRxiv\npreprint\ndoi\nbioRxiv\npreprint\ndoi\nbioRxiv\npreprint\nSignificance\nshown\ncomparing\nreplication\nC11\nPK\ndisrupt\nC11\nPK\nrestore\nAii\nSignificance\nshown\ncompared\nwt\nreplicon\nError\nbars\ncalculated\nSEM\nn\n=\n*\nP\n<\n****\nP\n<\ndoi\nbioRxiv\npreprint\ndoi\nbioRxiv\npreprint\n" ], [ "'also'.lower() in stop_words", "_____no_output_____" ], [ "try:\n print(int('5'))\nexcept:\n print('5′ is text')", "5\n" ], [ "freq = nltk.FreqDist(all_words)", "_____no_output_____" ], [ "freq.plot(20, cumulative=False)", "_____no_output_____" ], [ "test_word = 'https//'\nprint(test_word[:4])", "http\n" ], [ "lines = 'lines is some string of words'\n# function to test if something is a noun\nis_noun = lambda pos: pos[:2] == 'NN'\n# do the nlp stuff\ntokenized = nltk.word_tokenize(total_text)\nnouns = [word for (word, pos) in nltk.pos_tag(tokenized) if is_noun(pos)] \n\nprint(nouns)", "['word', 'count', 'Text', 'word', 'count', 'RNA', 'genomes', 'picornaviruses', 'frame', 'regions', 
'UTRs', 'disease', 'virus', 'FMDV', 'UTR', 'kb', 'domains', 'ribosome', 'entry', 'site', 'IRES', 'initiation', 'translation', 'replication', 'element', 'cre', 'Less', 'structures', 'stem-loop', 'length', 'poly-C-tract', 'nucleotides', 'series', 'tandemly', 'pseudoknots', 'PKs', 'structures', 'PKs', 'acetylation', 'primer', 'extension', 'SHAPE', 'analysis', 'contribution', 'replication', 'mutation', 'deletion', 'experiments', 'SHAPE', 'mutation', 'experiments', 'importance', 'PK', 'structures', 'function', 'Deletion', 'experiments', 'PKs', 'replication', 'genomes', 'advantage', 'replicons', 'genomes', 'PKs', 'competent', 'virus', 'genomes', 'PK', 'copy', 'report', 'presence', 'packaging', 'signals', 'PK', 'region', 'VP3', 'VP0', 'VP2', 'VP4', 'virus', 'assembly', 'P2', 'P3', 'regions', 'proteins', 'VPg', 'pro', 'region', 'reporter', 'genes', 'study', 'replication', 'requirement', 'containment', 'FMDV', 'UTR', 'picornavirus', 'UTR', 'nucleotides', 'regions', 'nucleotides', 'end', 'stem', 'loop', 'S-fragment', 'PKs', 'tandem', 'repeats', 'region', 'stem', 'loop', 'downstream', 'interaction', 'site', 'sequence', 'similarity', 'PKs', 'figure', 'duplication', 'events', 'replication', 'recombination', 'PKs', 'virus', 'isolates', 'strain', 'PKs', 'importance', 'life', 'cycle', 'presence', 'PKs', 'UTR', 'picornaviruses', 'absence', 'PKs', 'wild-type', 'wt', 'replication', 'Furthermore', 'competition', 'experiments', 'copies', 'PKs', 'advantage', 'genomes', 'replicons', 'genomes', 'PKs', 'virus', 'genomes', 'PK', 'copy', 'report', 'presence', 'packaging', 'signals', 'PK', 'region', 'Plasmid', 'construction', 'FMDV', 'replicon', 'plasmids', 'pRep-ptGFP', 'polymerase', 'control', 'mutations', 'PK', 'region', 'replicon', 'plasmid', 'SpeI', 'KpnI', 'fragment', 'vector', 'pBluescript', 'pBluescript', 'PK', 'PKs', 'digestion', 'HindIII', 'AatII', 'insertion', 'DNA', 'sequence', 'PK', 'PKs', 'PCR', 'amplification', 'Forward', 'primer', 'FMDV', 'reverse', 'primer', 'product', 'HindIII', 'AatII', 'pBluescript', 'PK', 'vector', 'PK', 'deletion', 'introduction', 'AflII', 'site', 'end', 'tract', 'PCR', 'mutagenesis', 'vector', 'pBluescript', 'C11', 'PKs', 'PCR', 'mutagenesis', 'primer', 'FMDV', 'reverse', 'primer', 'PK', 'sequences', 'vectors', 'plasmid', 'NheI-HF', 'KpnI-131', 'HF', 'https', 'doi', 'bioRxiv', 'preprint', 'Mutations', 'PK', 'structure', 'DNA', 'digestion', 'AflII', 'AatII', 'ligation', 'pBluescript', 'PK', 'vector', 'Mutations', 'replicon', 'plasmid', 'effects', 'truncation', 'poly-C-tract', 'replication', 'sequence', 'PCR', 'mutagenesis', 'primers', 'C0', 'SpeI', 'FMDV', 'transcription', 'transcription', 'reactions', 'replicon', 'assays', 'Transcription', 'reactions', 'amounts', 'RNA', 'SHAPE', 'analysis', 'DNA', 'μg', 'DNA', 'HiScribe', 'T7', 'synthesis', 'kit', 'NEB', 'DNase', 'treatment', 'purification', 'PureLink', 'FastQ', 'files', 'quality', 'FastQC', 'quality', 'reads', 'Sickle', 'algorithm', 'Host', 'cell', 'reads', 'FastQ', 'Screen', 'algorithm', 'FMDV', 'reads', 'contigs', 'IDBA-UD', 'Contigs', 'FMDV', 'library', 'Basic', 'Local', 'ALighnment', 'Search', 'Tool', 'BLAST', 'https', 'doi', 'bioRxiv', 'preprint', 'consensus', 'sequences', 'SeqMan', 'Pro', 'software', 'DNA', 'STAR', 'Lasergene', 'package', 'DNA', 'STAR', 'SHAPE', 'data', 'structures', 'stems', 'PK', 'nucleotides', 'reactivity', 'NMIA', 'https', 'doi', 'bioRxiv', 'preprint', 'nucleotides', 'base', 'pairing', 'hindrance', 'NMIA', 'reactivity', 'interacting', 'nucleotides', 'downstream', 'residues', 'PK', 'structure', 
'SHAPE', 'suggests', 'interactions', 'differences', 'heterogeneity', 'formation', 'PKs', 'experiment', 'evidence', 'interaction', 'PK4', 'https', 'doi', 'bioRxiv', 'preprint', 'orientation', 'removal', 'PKs', 'decrease', 'replication', 'requirements', 'levels', 'replication', 'level', 'replication', 'PK', 'mutagenesis', 'C11', 'replicon', 'plasmid', 'PK', 'addition', 'orientation', 'PK', 'sequence', 'hybridisation', 'loop', 'upstream', 'downstream', 'sequences', 'orientation', 'PK', 'replication', 'level', 'replication', 'passage', 'reduction', 'https', 'doi', 'bioRxiv', 'preprint', 'passage', 'figure', 'replicons', 'PK', 'disadvantage', 'https', 'doi', 'bioRxiv', 'preprint', 'virus', 'transfection', 'cells', 'requirement', 'RNA', 'structure', 'PK', 'region', 'virus', 'assembly', 'UTR', 'FMDV', 'picornaviruses', 'size', 'presence', 'RNA', 'elements', 'function', 'features', 'series', 'PKs', 'varying', 'number', 'virus', 'strain', 'study', 'PKs', 'role', 'life', 'cycle', 'PK', 'structures', 'SHAPE', 'mapping', 'viruses', 'date', 'contain', 'PKs', 'replicons', 'viruses', 'PK', 'competent', 'replicons', 'PK', 'advantage', 'replicons', 'PK', 'Replicons', 'PKs', 'co-467', 'yeast', 'tRNA', 'observation', 'impact', 'replication', 'viruses', 'genomes', 'numbers', 'PKs', 'plaques', 'addition', 'differences', 'PKs', 'differences', 'replicons', 'https', 'doi', 'bioRxiv', 'preprint', 'copy', 'PK1', 'PK4', 'observation', 'report', 'deletion', 'PK1', 'Poly-C-tract', 'effect', 'replication', 'findings', 'truncation', 'Poly-C-tract', 'C11', 'construct', 'effect', 'replicon', 'replication', 'cell', 'lines', 'Mengo', 'virus', 'role', 'poly-C-tract', 'aspects', 'lifecycle', 'tissue', 'culture', 'system', 'presence', 'PKs', 'isolates', 'PKs', 'advantage', 'replication', 'passage', 'replicons', 'PKs', 'level', 'replicons', 'PK', 'decline', 'isolates', 'contain', 'PKs', 'variation', 'effects', 'host', 'range', 'localisation', 'https', 'doi', 'bioRxiv', 'preprint', 'https', 'doi', 'bioRxiv', 'preprint', 'https', 'doi', 'bioRxiv', 'preprint', 'Significance', 'replication', 'C11', 'PK', 'disrupt', 'C11', 'PK', 'restore', 'Aii', 'Significance', 'replicon', 'Error', 'bars', 'SEM', '=', '*', 'P', '<', '****', 'P', 'https', 'doi', 'bioRxiv', 'preprint', 'https', 'doi', 'bioRxiv', 'preprint']\n" ], [ "remove = set(['=','<','*','http','https','doi','biorxiv','preprint','word','count','text'])\nwords = [noun.replace('PKs','pseudoknot').replace('PK','pseudoknot') for noun in nouns if not noun.lower() in remove]", "_____no_output_____" ], [ "freq = nltk.FreqDist(words)\nfreq.plot(20, cumulative=False)", "_____no_output_____" ], [ "freq", "_____no_output_____" ] ], [ [ "# TEST", "_____no_output_____" ] ], [ [ "paper_info = get_paper_data(cfg['data-path'] + biorxiv, papers[10])", "_____no_output_____" ], [ "paper_info", "_____no_output_____" ], [ "def get_sections_from_body(body):\n sections = {}\n for section in body:\n if section['section'].isupper():\n if section['section'] not in sections:\n sections[section['section']] = ''\n else:\n sections[section['section']] += section['text']\n return sections\n\nprint(sections.keys())", "dict_keys(['INTRODUCTION', 'RESULTS', 'DISCUSSION'])\n" ], [ "sections", "_____no_output_____" ], [ "txt = 'INTRODUCTION'\ntxt[0] + txt[1:].lower()", "_____no_output_____" ], [ "print('ID: {}'.format(paper_info['paper_id']))\nprint('\\nTitle: {}'.format(paper_info['metadata']['title']))\nprint('\\nAuthors: {}'.format(paper_info['metadata']['authors']))\nprint('\\nAbstract: 
{}'.format(paper_info['abstract']))\nsections = get_sections_from_body(paper_info['body_text'])\nfor section in sections.keys():\n print('\\n{}: {}'.format(section[0] + section[1:].lower(),sections[section]))", "ID: 0015023cc06b5362d332b3baf348d11567ca2fbb\n\nTitle: The RNA pseudoknots in foot-and-mouth disease virus are dispensable for genome replication but essential for the production of infectious virus. 2 3\n\nAuthors: [{'first': 'Joseph', 'middle': ['C'], 'last': 'Ward', 'suffix': '', 'affiliation': {}, 'email': ''}, {'first': 'Lidia', 'middle': [], 'last': 'Lasecka-Dykes', 'suffix': '', 'affiliation': {}, 'email': ''}, {'first': 'Chris', 'middle': [], 'last': 'Neil', 'suffix': '', 'affiliation': {}, 'email': ''}, {'first': 'Oluwapelumi', 'middle': [], 'last': 'Adeyemi', 'suffix': '', 'affiliation': {}, 'email': ''}, {'first': 'Sarah', 'middle': [], 'last': '', 'suffix': '', 'affiliation': {}, 'email': ''}, {'first': '', 'middle': [], 'last': 'Gold', 'suffix': '', 'affiliation': {}, 'email': ''}, {'first': 'Niall', 'middle': [], 'last': 'Mclean', 'suffix': '', 'affiliation': {}, 'email': ''}, {'first': 'Caroline', 'middle': [], 'last': 'Wright', 'suffix': '', 'affiliation': {}, 'email': ''}, {'first': 'Morgan', 'middle': ['R'], 'last': 'Herod', 'suffix': '', 'affiliation': {}, 'email': ''}, {'first': 'David', 'middle': [], 'last': 'Kealy', 'suffix': '', 'affiliation': {}, 'email': ''}, {'first': 'Emma', 'middle': [], 'last': '', 'suffix': '', 'affiliation': {}, 'email': ''}, {'first': 'Warner', 'middle': [], 'last': '', 'suffix': '', 'affiliation': {}, 'email': ''}, {'first': 'Donald', 'middle': ['P'], 'last': 'King', 'suffix': '', 'affiliation': {}, 'email': ''}, {'first': 'Tobias', 'middle': ['J'], 'last': 'Tuthill', 'suffix': '', 'affiliation': {}, 'email': ''}, {'first': 'David', 'middle': ['J'], 'last': 'Rowlands', 'suffix': '', 'affiliation': {}, 'email': ''}, {'first': 'Nicola', 'middle': ['J'], 'last': '', 'suffix': '', 'affiliation': {}, 'email': ''}, {'first': 'Stonehouse', 'middle': [], 'last': 'A#', 'suffix': '', 'affiliation': {}, 'email': ''}]\n\nAbstract: [{'text': 'word count: 194 22 Text word count: 5168 23 24 25 author/funder. All rights reserved. No reuse allowed without permission. Abstract 27 The positive stranded RNA genomes of picornaviruses comprise a single large open reading 28 frame flanked by 5′ and 3′ untranslated regions (UTRs). Foot-and-mouth disease virus (FMDV) 29 has an unusually large 5′ UTR (1.3 kb) containing five structural domains. These include the 30 internal ribosome entry site (IRES), which facilitates initiation of translation, and the cis-acting 31 replication element (cre). Less well characterised structures are a 5′ terminal 360 nucleotide 32 stem-loop, a variable length poly-C-tract of approximately 100-200 nucleotides and a series of 33 two to four tandemly repeated pseudoknots (PKs). We investigated the structures of the PKs 34 by selective 2′ hydroxyl acetylation analysed by primer extension (SHAPE) analysis and 35 determined their contribution to genome replication by mutation and deletion experiments. 36 SHAPE and mutation experiments confirmed the importance of the previously predicted PK 37 structures for their function. Deletion experiments showed that although PKs are not essential 38', 'cite_spans': [], 'ref_spans': [], 'section': 'Abstract'}, {'text': 'for replication, they provide genomes with a competitive advantage. 
However, although 39 replicons and full-length genomes lacking all PKs were replication competent, no infectious 40 virus was rescued from genomes containing less than one PK copy. This is consistent with our 41 earlier report describing the presence of putative packaging signals in the PK region. 42 43 author/funder. All rights reserved. No reuse allowed without permission.', 'cite_spans': [], 'ref_spans': [], 'section': 'Abstract'}]\n" ], [ "sections.keys()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
e7fe92d19970c867f5c590c7d3631bd0e4901d9e
20736
ipynb
Jupyter Notebook
Named_Entity_Recognition.ipynb
David-Woroniuk/Data_Driven_Investor
ae429fb29a26f584523e663bf30768a002dda326
[ "MIT" ]
14
2020-09-24T15:38:08.000Z
2022-02-03T17:43:54.000Z
Named_Entity_Recognition.ipynb
David-Woroniuk/Data_Driven_Investor
ae429fb29a26f584523e663bf30768a002dda326
[ "MIT" ]
2
2020-10-20T11:46:57.000Z
2021-02-25T15:30:39.000Z
Named_Entity_Recognition.ipynb
David-Woroniuk/Data_Driven_Investor
ae429fb29a26f584523e663bf30768a002dda326
[ "MIT" ]
6
2020-10-12T09:29:47.000Z
2021-07-28T16:59:39.000Z
47.342466
191
0.491802
[ [ [ "# install the FedTools package:\n!pip install FedTools\n\n# install chart studio (Plotly):\n!pip install chart-studio\n\n# import pandas and numpy for data wrangling:\nimport pandas as pd\nimport numpy as np\n\n# from FedTools, import the MonetrayPolicyCommittee module to download statements:\nfrom FedTools import MonetaryPolicyCommittee\n\n# import spacy and displaycy for visualisation:\nimport spacy\nimport en_core_web_sm\nnlp = en_core_web_sm.load()\nfrom spacy import displacy\n\n# import Counter for counting:\nfrom collections import Counter\n\n# import plotly for plotting:\nimport plotly.graph_objects as go", "Collecting FedTools\n Downloading https://files.pythonhosted.org/packages/bb/ed/538cb88d9eb08be4d7413a1f56243cfaeb261bb458c2a9587f3d137b61fd/FedTools-0.0.1-py3-none-any.whl\nRequirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from FedTools) (1.0.5)\nRequirement already satisfied: bs4 in /usr/local/lib/python3.6/dist-packages (from FedTools) (0.0.1)\nRequirement already satisfied: python-dateutil>=2.6.1 in /usr/local/lib/python3.6/dist-packages (from pandas->FedTools) (2.8.1)\nRequirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from pandas->FedTools) (1.18.5)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->FedTools) (2018.9)\nRequirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.6/dist-packages (from bs4->FedTools) (4.6.3)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.6.1->pandas->FedTools) (1.15.0)\nInstalling collected packages: FedTools\nSuccessfully installed FedTools-0.0.1\nCollecting chart-studio\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/ca/ce/330794a6b6ca4b9182c38fc69dd2a9cbff60fd49421cb8648ee5fee352dc/chart_studio-1.1.0-py3-none-any.whl (64kB)\n\u001b[K |████████████████████████████████| 71kB 3.6MB/s \n\u001b[?25hRequirement already satisfied: plotly in /usr/local/lib/python3.6/dist-packages (from chart-studio) (4.4.1)\nRequirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from chart-studio) (2.23.0)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from chart-studio) (1.15.0)\nRequirement already satisfied: retrying>=1.3.3 in /usr/local/lib/python3.6/dist-packages (from chart-studio) (1.3.3)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->chart-studio) (1.24.3)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->chart-studio) (2.10)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->chart-studio) (2020.6.20)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->chart-studio) (3.0.4)\nInstalling collected packages: chart-studio\nSuccessfully installed chart-studio-1.1.0\n" ], [ "def dataset_parsing():\n '''\n This function calls the MonetaryPolicyCommittee module of the FedTools package\n to collect FOMC Statements. 
These statements are parsed using SpaCy.\n\n Inputs: N/A.\n\n Outputs: dataset: a Pandas DataFrame which contains:\n\n 'FOMC_Statements' - original FOMC Statements downloaded by FedTools.\n 'tokenised_data' - tokenised FOMC Statements.\n 'lemmatised_data' - lematised FOMC Statements.\n 'part_of_speech' - part of speech tags from FOMC Statements.\n 'named_entities' - the named entities detected within the FOMC Statements.\n 'labels' - the corresponding labels associated with named_entities.\n 'number_of_labels' - a dictionary displaying the number of each label detected.\n 'items' - the number of times each item is detected within the FOMC Statements.\n\n '''\n\n # collect FOMC Statements into DataFrame called dataset:\n dataset = MonetaryPolicyCommittee().find_statements()\n\n # remove additional operators within the text:\n for i in range(len(dataset)):\n dataset.iloc[i,0] = dataset.iloc[i,0].replace('\\\\n','. ')\n dataset.iloc[i,0] = dataset.iloc[i,0].replace('\\n',' ')\n dataset.iloc[i,0] = dataset.iloc[i,0].replace('\\r',' ')\n dataset.iloc[i,0] = dataset.iloc[i,0].replace('\\xa0',' ')\n\n # initialise empty lists:\n tokens = []\n lemma = []\n pos = []\n ents = []\n labels = []\n count = []\n items = []\n\n # for each document in the pipeline:\n for doc in nlp.pipe(dataset['FOMC_Statements'].astype('unicode').values, batch_size=50, n_threads=10):\n # if the document is successfully parsed:\n if doc.is_parsed:\n # append various data to appropriate categories:\n tokens.append([n.text for n in doc])\n lemma.append([n.lemma_ for n in doc])\n pos.append([n.pos_ for n in doc])\n ents.append([n.text for n in doc.ents])\n labels.append([n.label_ for n in doc.ents])\n count.append(Counter([n.label_ for n in doc.ents]))\n items.append(Counter([n.text for n in doc.ents]))\n\n # if document parsing fails, return 'None' to maintain DataFrame dimensions:\n else:\n tokens.append(None)\n lemma.append(None)\n pos.append(None)\n ents.append(None)\n labels.append(None)\n count.append(None)\n items.append(None)\n\n # now assign the lists columns within the dataframe:\n dataset['tokenised_data'] = tokens\n dataset['lemmatised_data'] = lemma\n dataset['part_of_speech'] = pos\n dataset['named_entities'] = ents\n dataset['labels'] = labels\n dataset['number_of_labels'] = count\n dataset['items'] = items\n\n return dataset", "_____no_output_____" ], [ "def generate_additional_information():\n '''\n This function generates additional information from the parsed documents, quantifying\n the usage of specific named entities within FOMC Statements.\n\n Inputs: N/A.\n\n Outputs: dataset: a Pandas DataFrame which contains:\n\n 'person' - the number of times people are mentioned in each statement.\n 'date' - the number of times dates are mentioned within each statement.\n 'percent' - the number of times percentages are mentioned within each statement.\n 'time' - the number of times a time is mentioned within each statement.\n 'ordinal' - the number of times an 'ordinal' ie) \"first\" is mentioned within each statement.\n 'organisations' - the number of times an organisation is mentioned within each statement.\n 'money' - the number of times money is mentioned within each statement.\n 'event' - the number of times an event is mentioned within each statement.\n 'law' - the number of times a law is mentioned within each statement.\n 'quantity' - the number of times a quantity is mentioned within each statement.\n 'groups' - the number of times specific groups are mentioned within each statement.\n 
'information_content' - the number of named entities detected within each statement.\n\n '''\n # call the function defined above:\n dataset = dataset_parsing()\n\n # generate additional information through the detection of named entities:\n dataset['person'] = dataset['number_of_labels'].apply(lambda x: x.get('PERSON'))\n dataset['date'] = dataset['number_of_labels'].apply(lambda x: x.get('DATE'))\n dataset['percent'] = dataset['number_of_labels'].apply(lambda x: x.get('PERCENT'))\n dataset['product'] = dataset['number_of_labels'].apply(lambda x: x.get('PRODUCT'))\n dataset['time'] = dataset['number_of_labels'].apply(lambda x: x.get('TIME'))\n dataset['ordinal'] = dataset['number_of_labels'].apply(lambda x: x.get('ORDINAL'))\n dataset['organisations'] = dataset['number_of_labels'].apply(lambda x: x.get('ORG'))\n dataset['money'] = dataset['number_of_labels'].apply(lambda x: x.get('MONEY'))\n dataset['event'] = dataset['number_of_labels'].apply(lambda x: x.get('EVENT'))\n dataset['law'] = dataset['number_of_labels'].apply(lambda x: x.get('LAW'))\n dataset['quantity'] = dataset['number_of_labels'].apply(lambda x: x.get('QUANTITY'))\n dataset['groups'] = dataset['number_of_labels'].apply(lambda x: x.get('NORP'))\n\n # replace any 'NaN' values with 0, then calculate the 'information content',as defined\n # by the total number of named entities:\n dataset = dataset.replace(np.nan, 0)\n dataset['information_content'] = dataset.iloc[:,8:].sum(axis = 1)\n\n return dataset", "_____no_output_____" ], [ "def generate_chairperson(dataset):\n '''\n This function uses Named Entity Recognition in order to detect the presence of \n chairpeople within the FOMC statements. \n\n Inputs: dataset: a Pandas DataFrame as defined above.\n\n Outputs: dataset: a Pandas DataFrame which identifies the FOMC Chairperson.\n '''\n\n # try to detect specific names within 'items':\n dataset['Greenspan'] = dataset['items'].apply(lambda x: x.get('Alan Greenspan'))\n dataset['Bernanke'] = dataset['items'].apply(lambda x: x.get('Ben S. Bernanke'))\n dataset['Yellen'] = dataset['items'].apply(lambda x: x.get('Janet L. Yellen'))\n dataset['Powell'] = dataset['items'].apply(lambda x: x.get('Jerome H. Powell'))\n\n # replace all 'Nan' values with 0:\n dataset = dataset.replace(np.nan, 0)\n\n return dataset", "_____no_output_____" ], [ "def plot_figure():\n '''\n This function constructs a Plotly chart by calling the above functions to generate\n the dataset, and subsequently plotting relevant data. 
\n '''\n\n # define the dataset as a global variable, which can be used outside of the function:\n global dataset\n # call the above functions to generate the required data:\n dataset = generate_additional_information()\n dataset = generate_chairperson(dataset)\n\n # initialise figure:\n fig = go.Figure()\n\n # add figure traces:\n fig.add_trace(go.Scatter(x = dataset.index, y = dataset['information_content'],\n mode = 'lines',\n name = 'Information Content',\n connectgaps=True))\n \n fig.add_trace(go.Scatter(x = dataset.index, y = dataset['percent'],\n mode = 'lines',\n name = 'Number of times \"Percentage\" mentioned',\n connectgaps=True))\n \n fig.add_trace(go.Scatter(x = dataset.index, y = dataset['person'],\n mode = 'lines',\n name = 'Number of People mentioned',\n connectgaps=True))\n \n fig.add_trace(go.Scatter(x = dataset.index, y = dataset['money'],\n mode = 'lines',\n name = 'Number of times Money mentioned',\n connectgaps=True))\n \n fig.add_trace(go.Scatter(x = dataset.index, y = dataset['quantity'],\n mode = 'lines',\n name = 'Number of Quantities mentioned',\n connectgaps=True))\n \n fig.add_trace(go.Scatter(x = dataset.index, y = dataset['event'],\n mode = 'lines',\n name = 'Number of Events mentioned',\n connectgaps=True))\n \n fig.add_trace(go.Scatter(x = dataset.index, y = dataset['organisations'],\n mode = 'lines',\n name = 'Number of Organisations mentioned',\n connectgaps=True))\n\n # add a rangeslider and buttons:\n fig.update_xaxes(\n rangeslider_visible=True,\n rangeselector=dict(\n buttons=list([\n dict(count=1, label=\"YTD\", step=\"year\", stepmode=\"todate\"),\n dict(count=5, label=\"5 Years\", step=\"year\", stepmode=\"backward\"),\n dict(count=10, label=\"10 Years\", step=\"year\", stepmode=\"backward\"),\n dict(count=15, label=\"15 Years\", step=\"year\", stepmode=\"backward\"),\n dict(label=\"All\", step=\"all\")\n ]))) \n\n # add a chart title and axis title:\n fig.update_layout(\n title=\"FOMC Named Entity Recognition\",\n xaxis_title=\"Date\",\n yaxis_title=\"\",\n font=dict(\n family=\"Arial\",\n size=11,\n color=\"#7f7f7f\"\n ))\n \n # add toggle buttons for dataset display:\n fig.update_layout(\n updatemenus=[\n dict(\n buttons=list([\n dict(\n label = 'All',\n method = 'update',\n args = [{'visible': [True, True, True, True, True, True, True]}]\n ),\n\n dict(\n label = 'Information Content',\n method = 'update',\n args = [{'visible': [True, False, False, False, False, False, False]}]\n ),\n\n dict(\n label = 'Percentage mentions',\n method = 'update',\n args = [{'visible': [False, True, False, False, False, False, False,]}]\n ),\n\n dict(\n label = 'People mentions',\n method = 'update',\n args = [{'visible': [False, False, True, False, False, False, False,]}]\n ),\n\n dict(\n label = 'Money mentions',\n method = 'update',\n args = [{'visible': [False, False, False, True, False, False, False,]}]\n ),\n\n dict(\n label = 'Quantity mentions',\n method = 'update',\n args = [{'visible': [False, False, False, False, True, False, False,]}]\n ),\n\n dict(\n label = 'Event mentions',\n method = 'update',\n args = [{'visible': [False, False, False, False, False, True, False,]}]\n ),\n\n dict(\n label = 'Organisation mentions',\n method = 'update',\n args = [{'visible': [False, False, False, False, False, False, True]}]\n ),\n ]),\n direction=\"down\",\n pad={\"r\": 10, \"t\": 10},\n showactive=True,\n x=1.0,\n xanchor=\"right\",\n y=1.2,\n yanchor=\"top\"\n ),])\n \n return fig.show()", "_____no_output_____" ], [ "plot_figure()", "_____no_output_____" ], [ "# 
now visualise the named entities detected within specific FOMC Statements:\ndisplacy.render(nlp(dataset['FOMC_Statements'][103]), jupyter = True, style = 'ent')", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
e7fe9487ae8159203014b328afa4238b4c1d6084
123618
ipynb
Jupyter Notebook
DataOrganize_ver5.ipynb
SiyangJ/Epidemiology
f7115c08ade1f3b6525dea9b26beea0e9fadd9cd
[ "MIT" ]
null
null
null
DataOrganize_ver5.ipynb
SiyangJ/Epidemiology
f7115c08ade1f3b6525dea9b26beea0e9fadd9cd
[ "MIT" ]
null
null
null
DataOrganize_ver5.ipynb
SiyangJ/Epidemiology
f7115c08ade1f3b6525dea9b26beea0e9fadd9cd
[ "MIT" ]
null
null
null
35.098807
106
0.319573
[ [ [ "import pandas as pd\nimport numpy as np\nimport datetime\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport re", "_____no_output_____" ], [ "df = pd.read_csv('SZFZ_2.csv')", "_____no_output_____" ], [ "df['DateSymptom'] = pd.to_datetime(df['DateSymptom'])\ndf = df[['Class','DateSymptom',]]\ndf['DateSymptom'] = df['DateSymptom'].dt.to_period('M')\n\ndf.loc[df['Class']=='SZFZ','Class'] = 0\ndf.loc[:,'Class']=df.loc[:,'Class'].astype(float)", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "date_start = df.sort_values(by='DateSymptom').iloc[0]['DateSymptom']\ndate_end = df.sort_values(by='DateSymptom').iloc[-1]['DateSymptom']\ndate_index = pd.PeriodIndex(start=date_start,end=date_end,freq='m')\n\np_total = 1118\nc_total = 18\n\nPERIODS = [2.5,0.5,1,1.3,2.5,1,1,1.5,1.3,1.3,1.1,3,0.5,1.3]\n\nTYPES = ['I_new','I','I/population','A(not tp)','tp','A','I/(I+A)','I_total/population','period']\nCLASSES = df['Class'].unique()\nCLASSES.sort()\nCLASSES = CLASSES[::-1]\nCLASSES = np.append(CLASSES,[1,])\nCLASSES = CLASSES.astype(float)\n\nPOPULATION = [p_total/c_total,] * (len(CLASSES)-1)\nPOPULATION = np.append(POPULATION,[p_total,])\n\nassert(len(PERIODS)==len(CLASSES) and len(POPULATION)==len(CLASSES))\n\ntype_class = [(t,c)for t in TYPES for c in CLASSES]\n\ntype_index = pd.MultiIndex.from_tuples(type_class)\n\norg_df = pd.DataFrame(index=date_index,columns=type_index)\n\norg_df.loc[:,['I_new','I','I/population']] = 0\n\ndate_length = len(date_index)\nfor c in CLASSES[:-1]:\n class_df = df[df['Class']==c]\n t = 'I'\n n = 1\n cur_df = class_df[class_df['Type']==n]['DateSymptom'].value_counts().sort_index()\n cur_dates = cur_df.index.values\n cur_vals = cur_df.values\n l = 0\n for i in range(date_length):\n if i>0:\n org_df.loc[date_index[i],(t,c)] = org_df.loc[date_index[i-1],(t,c)] * 5/6\n if l<len(cur_dates) and date_index[i]==cur_dates[l]:\n org_df.loc[date_index[i],(t,c)] += cur_vals[l]\n org_df.loc[date_index[i],('I_new',c)] = cur_vals[l]\n l += 1\n\nfor t in ['I_new','I']:\n org_df.loc[:,(t,1)] = org_df.loc[:,t].iloc[:,:-1].sum(axis=1)\n \nfor (c,p,r) in zip(CLASSES,POPULATION,PERIODS):\n org_df.loc[:,('I/population',c)] = org_df.loc[:,('I',c)] / p\n org_df.loc[date_index[0],('period',c)] = r\n \nI_total = 0\nfor c,p in zip(CLASSES[:-1],POPULATION[:-1]):\n I = len(df.loc[np.logical_and(df['Class']==c,df['Type']==1)]) \n I_total += I\n A_ntp = len(df.loc[np.logical_and(df['Class']==c,df['Type']==2)]) \n tp = len(df.loc[np.logical_and(df['Class']==c,df['Type']==3)]) \n org_df.loc[date_index[0],('A(not tp)',c)] = A_ntp\n org_df.loc[date_index[0],('tp',c)] = tp\n org_df.loc[date_index[0],('A',c)] = A_ntp + tp\n org_df.loc[date_index[0],('I/(I+A)',c)] = I/(I+A_ntp + tp)\n org_df.loc[date_index[0],('I_total/population',c)] = I/p\n\nfor t in ['A(not tp)','tp','A']:\n org_df.loc[date_index[0],(t,1)] = org_df.loc[date_index[0],t].iloc[:-1].sum()\n \norg_df.loc[date_index[0],('I/(I+A)',1)] = I_total/(I_total+org_df.loc[date_index[0],('A',1)])\norg_df.loc[date_index[0],('I_total/population',1)] = I_total/POPULATION[-1]\n\norg_df=org_df.swaplevel(axis=1)\nv = pd.Categorical(org_df.columns.get_level_values(0), \n categories=CLASSES, \n ordered=True)\nv2 = pd.Categorical(org_df.columns.get_level_values(1), \n categories=TYPES,\n ordered=True)\norg_df.columns = pd.MultiIndex.from_arrays([v, v2]) \norg_df = org_df.sort_index(axis=1, level=[0, 1])\norg_df", "_____no_output_____" ], [ "org_df.to_csv('SZFZ_ver4.csv')", "_____no_output_____" ], [ "df = 
pd.read_csv('SYZX_2.csv')\ndf['DateSymptom'] = pd.to_datetime(df['DateSymptom'])\ndf['DateConfirm'] = pd.to_datetime(df['DateConfirm'])\n\ndf.drop('DateConfirm',axis=1,inplace=True)\ndf.drop(94,inplace=True)\n\ndf.reset_index(inplace=True)\ndf.drop('index',axis=1,inplace=True)\n\ndf['DateSymptom'] = pd.to_datetime(df['DateSymptom'])\ndf['DateSymptom'] = df['DateSymptom'].dt.to_period('M')\n\ndf.loc[df['Class'].isnull(),'Class'] = 0\n\ndate_start = df[df['Type']!=2].sort_values(by='DateSymptom').iloc[0]['DateSymptom']\ndate_end = df[df['Type']!=2].sort_values(by='DateSymptom').iloc[-1]['DateSymptom']\ndate_index = pd.PeriodIndex(start=date_start,end=date_end,freq='m')\n\ndf.loc[:,'Class']=df.loc[:,'Class'].astype(float)\n\np_total = 2760\nc_total = 30\n\nPERIODS = [0.6,0.6,0.6,0.6,0.6,0.6,0.6,0.9,0.4,0.6,0.6,0.6,0.6,0.6]\n\nTYPES = ['I_new','I','I/population','A(not tp)','tp','A','I/(I+A)','I_total/population','period']\nCLASSES = df['Class'].unique()\nCLASSES.sort()\nCLASSES = CLASSES[::-1]\nCLASSES = np.append(CLASSES,[1,])\nCLASSES = CLASSES.astype(float)\n\nPOPULATION = [p_total/c_total,] * (len(CLASSES)-1)\nPOPULATION = np.append(POPULATION,[p_total,])\n\nassert(len(PERIODS)==len(CLASSES) and len(POPULATION)==len(CLASSES))\n\ntype_class = [(t,c)for t in TYPES for c in CLASSES]\n\ntype_index = pd.MultiIndex.from_tuples(type_class)\n\norg_df = pd.DataFrame(index=date_index,columns=type_index)\n\norg_df.loc[:,['I_new','I','I/population']] = 0\n\ndate_length = len(date_index)\nfor c in CLASSES[:-1]:\n class_df = df[df['Class']==c]\n t = 'I'\n n = 1\n cur_df = class_df[class_df['Type']==n]['DateSymptom'].value_counts().sort_index()\n cur_dates = cur_df.index.values\n cur_vals = cur_df.values\n l = 0\n for i in range(date_length):\n if i>0:\n org_df.loc[date_index[i],(t,c)] = org_df.loc[date_index[i-1],(t,c)] * 5/6\n if l<len(cur_dates) and date_index[i]==cur_dates[l]:\n org_df.loc[date_index[i],(t,c)] += cur_vals[l]\n org_df.loc[date_index[i],('I_new',c)] = cur_vals[l]\n l += 1\n\nfor t in ['I_new','I']:\n org_df.loc[:,(t,1)] = org_df.loc[:,t].iloc[:,:-1].sum(axis=1)\n \nfor (c,p,r) in zip(CLASSES,POPULATION,PERIODS):\n org_df.loc[:,('I/population',c)] = org_df.loc[:,('I',c)] / p\n org_df.loc[date_index[0],('period',c)] = r\n \nI_total = 0\nfor c,p in zip(CLASSES[:-1],POPULATION[:-1]):\n I = len(df.loc[np.logical_and(df['Class']==c,df['Type']==1)]) \n I_total += I\n A_ntp = len(df.loc[np.logical_and(df['Class']==c,df['Type']==2)]) \n tp = len(df.loc[np.logical_and(df['Class']==c,df['Type']==3)]) \n org_df.loc[date_index[0],('A(not tp)',c)] = A_ntp\n org_df.loc[date_index[0],('tp',c)] = tp\n org_df.loc[date_index[0],('A',c)] = A_ntp + tp\n org_df.loc[date_index[0],('I/(I+A)',c)] = I/(I+A_ntp + tp)\n org_df.loc[date_index[0],('I_total/population',c)] = I/p\n\nfor t in ['A(not tp)','tp','A']:\n org_df.loc[date_index[0],(t,1)] = org_df.loc[date_index[0],t].iloc[:-1].sum()\n \norg_df.loc[date_index[0],('I/(I+A)',1)] = I_total/(I_total+org_df.loc[date_index[0],('A',1)])\norg_df.loc[date_index[0],('I_total/population',1)] = I_total/POPULATION[-1]\n\norg_df=org_df.swaplevel(axis=1)\nv = pd.Categorical(org_df.columns.get_level_values(0), \n categories=CLASSES, \n ordered=True)\nv2 = pd.Categorical(org_df.columns.get_level_values(1), \n categories=TYPES,\n ordered=True)\norg_df.columns = pd.MultiIndex.from_arrays([v, v2]) \norg_df = org_df.sort_index(axis=1, level=[0, 1])\norg_df", "_____no_output_____" ], [ "org_df.to_csv('SYZX_ver4.csv')", "_____no_output_____" ], [ "df = 
pd.read_csv('MBZX_2.csv')\ndf['DateSymptom'] = pd.to_datetime(df['DateSymptom'])\ndf['DateConfirm'] = pd.to_datetime(df['DateConfirm'])\n\ndf.drop('DateConfirm',axis=1,inplace=True)\ndf['DateSymptom'] = pd.to_datetime(df['DateSymptom'])\n\ndf['DateSymptom'] = df['DateSymptom'].dt.to_period('M')\n\ndate_start = df.sort_values(by='DateSymptom').iloc[0]['DateSymptom']\ndate_end = df.sort_values(by='DateSymptom').iloc[-1]['DateSymptom']\ndate_index = pd.PeriodIndex(start=date_start,end=date_end,freq='m')\n\ndf.loc[:,'Class']=df.loc[:,'Class'].astype(float)\n\nPERIODS = [5,5,5]\n\nTYPES = ['I_new','I','I/population','A(not tp)','tp','A','I/(I+A)','I_total/population','period']\nCLASSES = df['Class'].unique()\nCLASSES.sort()\nCLASSES = CLASSES[::-1]\nCLASSES = np.append(CLASSES,[1,])\nCLASSES = CLASSES.astype(float)\n\nPOPULATION = [51,68,303]\n\nassert(len(PERIODS)==len(CLASSES) and len(POPULATION)==len(CLASSES))\n\ntype_class = [(t,c)for t in TYPES for c in CLASSES]\n\ntype_index = pd.MultiIndex.from_tuples(type_class)\n\norg_df = pd.DataFrame(index=date_index,columns=type_index)\n\norg_df.loc[:,['I_new','I','I/population']] = 0\n\ndate_length = len(date_index)\nfor c in CLASSES[:-1]:\n class_df = df[df['Class']==c]\n t = 'I'\n n = 1\n cur_df = class_df[class_df['Type']==n]['DateSymptom'].value_counts().sort_index()\n cur_dates = cur_df.index.values\n cur_vals = cur_df.values\n l = 0\n for i in range(date_length):\n if i>0:\n org_df.loc[date_index[i],(t,c)] = org_df.loc[date_index[i-1],(t,c)] * 5/6\n if l<len(cur_dates) and date_index[i]==cur_dates[l]:\n org_df.loc[date_index[i],(t,c)] += cur_vals[l]\n org_df.loc[date_index[i],('I_new',c)] = cur_vals[l]\n l += 1\n\nfor t in ['I_new','I']:\n org_df.loc[:,(t,1)] = org_df.loc[:,t].iloc[:,:-1].sum(axis=1)\n \nfor (c,p,r) in zip(CLASSES,POPULATION,PERIODS):\n org_df.loc[:,('I/population',c)] = org_df.loc[:,('I',c)] / p\n org_df.loc[date_index[0],('period',c)] = r\n \nI_total = 0\nfor c,p in zip(CLASSES[:-1],POPULATION[:-1]):\n I = len(df.loc[np.logical_and(df['Class']==c,df['Type']==1)]) \n I_total += I\n A_ntp = len(df.loc[np.logical_and(df['Class']==c,df['Type']==2)]) \n tp = len(df.loc[np.logical_and(df['Class']==c,df['Type']==3)]) \n org_df.loc[date_index[0],('A(not tp)',c)] = A_ntp\n org_df.loc[date_index[0],('tp',c)] = tp\n org_df.loc[date_index[0],('A',c)] = A_ntp + tp\n org_df.loc[date_index[0],('I/(I+A)',c)] = I/(I+A_ntp + tp)\n org_df.loc[date_index[0],('I_total/population',c)] = I/p\n\nfor t in ['A(not tp)','tp','A']:\n org_df.loc[date_index[0],(t,1)] = org_df.loc[date_index[0],t].iloc[:-1].sum()\n \norg_df.loc[date_index[0],('I/(I+A)',1)] = I_total/(I_total+org_df.loc[date_index[0],('A',1)])\norg_df.loc[date_index[0],('I_total/population',1)] = I_total/POPULATION[-1]\n\norg_df=org_df.swaplevel(axis=1)\nv = pd.Categorical(org_df.columns.get_level_values(0), \n categories=CLASSES, \n ordered=True)\nv2 = pd.Categorical(org_df.columns.get_level_values(1), \n categories=TYPES,\n ordered=True)\norg_df.columns = pd.MultiIndex.from_arrays([v, v2]) \norg_df = org_df.sort_index(axis=1, level=[0, 1])\norg_df", "_____no_output_____" ], [ "org_df.to_csv('MBZX_ver4.csv')", "_____no_output_____" ], [ "df = pd.read_csv('QJYS_2.csv')\ndf['DateSymptom'] = pd.to_datetime(df['DateSymptom'])\ndf['DateConfirm'] = pd.to_datetime(df['DateConfirm'])\n\ndf.drop('DateConfirm',axis=1,inplace=True)\n\ndf.reset_index(inplace=True)\ndf.drop('index',axis=1,inplace=True)\n\ndf['DateSymptom'] = pd.to_datetime(df['DateSymptom'])\ndf['DateSymptom'] = 
df['DateSymptom'].dt.to_period('M')\n\ndate_start = df[df['Type']!=2].sort_values(by='DateSymptom').iloc[0]['DateSymptom']\ndate_end = df[df['Type']!=2].sort_values(by='DateSymptom').iloc[-1]['DateSymptom']\ndate_index = pd.PeriodIndex(start=date_start,end=date_end,freq='m')\n\ndf.loc[:,'Class']=df.loc[:,'Class'].astype(float)\n\np_total = 1958\nc_total = 26\n\nPERIODS = [6,6,6]\n\nTYPES = ['I_new','I','I/population','A(not tp)','tp','A','I/(I+A)','I_total/population','period']\nCLASSES = df['Class'].unique()\nCLASSES.sort()\nCLASSES = CLASSES[::-1]\nCLASSES = np.append(CLASSES,[1,])\nCLASSES = CLASSES.astype(float)\n\nPOPULATION = [p_total/c_total,] * (len(CLASSES)-1)\nPOPULATION = np.append(POPULATION,[p_total,])\n\nassert(len(PERIODS)==len(CLASSES) and len(POPULATION)==len(CLASSES))\n\ntype_class = [(t,c)for t in TYPES for c in CLASSES]\n\ntype_index = pd.MultiIndex.from_tuples(type_class)\n\norg_df = pd.DataFrame(index=date_index,columns=type_index)\n\norg_df.loc[:,['I_new','I','I/population']] = 0\n\ndate_length = len(date_index)\nfor c in CLASSES[:-1]:\n class_df = df[df['Class']==c]\n t = 'I'\n n = 1\n cur_df = class_df[class_df['Type']==n]['DateSymptom'].value_counts().sort_index()\n cur_dates = cur_df.index.values\n cur_vals = cur_df.values\n l = 0\n for i in range(date_length):\n if i>0:\n org_df.loc[date_index[i],(t,c)] = org_df.loc[date_index[i-1],(t,c)] * 5/6\n if l<len(cur_dates) and date_index[i]==cur_dates[l]:\n org_df.loc[date_index[i],(t,c)] += cur_vals[l]\n org_df.loc[date_index[i],('I_new',c)] = cur_vals[l]\n l += 1\n\nfor t in ['I_new','I']:\n org_df.loc[:,(t,1)] = org_df.loc[:,t].iloc[:,:-1].sum(axis=1)\n \nfor (c,p,r) in zip(CLASSES,POPULATION,PERIODS):\n org_df.loc[:,('I/population',c)] = org_df.loc[:,('I',c)] / p\n org_df.loc[date_index[0],('period',c)] = r\n \nI_total = 0\nfor c,p in zip(CLASSES[:-1],POPULATION[:-1]):\n I = len(df.loc[np.logical_and(df['Class']==c,df['Type']==1)]) \n I_total += I\n A_ntp = len(df.loc[np.logical_and(df['Class']==c,df['Type']==2)]) \n tp = len(df.loc[np.logical_and(df['Class']==c,df['Type']==3)]) \n org_df.loc[date_index[0],('A(not tp)',c)] = A_ntp\n org_df.loc[date_index[0],('tp',c)] = tp\n org_df.loc[date_index[0],('A',c)] = A_ntp + tp\n org_df.loc[date_index[0],('I/(I+A)',c)] = I/(I+A_ntp + tp)\n org_df.loc[date_index[0],('I_total/population',c)] = I/p\n\nfor t in ['A(not tp)','tp','A']:\n org_df.loc[date_index[0],(t,1)] = org_df.loc[date_index[0],t].iloc[:-1].sum()\n \norg_df.loc[date_index[0],('I/(I+A)',1)] = I_total/(I_total+org_df.loc[date_index[0],('A',1)])\norg_df.loc[date_index[0],('I_total/population',1)] = I_total/POPULATION[-1]\n\norg_df=org_df.swaplevel(axis=1)\nv = pd.Categorical(org_df.columns.get_level_values(0), \n categories=CLASSES, \n ordered=True)\nv2 = pd.Categorical(org_df.columns.get_level_values(1), \n categories=TYPES,\n ordered=True)\norg_df.columns = pd.MultiIndex.from_arrays([v, v2]) \norg_df = org_df.sort_index(axis=1, level=[0, 1])\norg_df", "_____no_output_____" ], [ "org_df.to_csv('QJYS_ver4.csv')", "_____no_output_____" ], [ "df = pd.read_csv('SYZX_monthly_ver2.csv',header=[0,1],index_col=0)", "_____no_output_____" ], [ "def organize3(name,population,periods):\n df = pd.read_csv(name+'_monthly_ver2.csv',header=[0,1],index_col=0)\n df.drop(['I_new','A(not tp)','tp','I/(I+A)','I/population'],axis=1,level=1)\n classes = df.columns.levels[0].values\n ", "_____no_output_____" ], [ "df.columns.levels[0].values", "_____no_output_____" ], [ "df.drop('I_new',axis=1,level=1)", "_____no_output_____" ], [ 
"df.drop()", "_____no_output_____" ], [ "df.columns.levels[0]", "_____no_output_____" ], [ "df=df.swaplevel(axis=1)", "_____no_output_____" ], [ "df[['I','A']]", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7fe9921716f6d81456f9a59fa8021207f2d4a92
97,107
ipynb
Jupyter Notebook
python/trajectory-following/plots.ipynb
be2rlab/exoskeleton_ros
deffd353ec17c1a31de31273b4fd3e92391a82de
[ "MIT" ]
null
null
null
python/trajectory-following/plots.ipynb
be2rlab/exoskeleton_ros
deffd353ec17c1a31de31273b4fd3e92391a82de
[ "MIT" ]
null
null
null
python/trajectory-following/plots.ipynb
be2rlab/exoskeleton_ros
deffd353ec17c1a31de31273b4fd3e92391a82de
[ "MIT" ]
null
null
null
416.76824
27,056
0.93605
[ [ [ "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nwith open('data.txt','r') as f:\n data = f.read().split()\ni = 0\nplt_x = []\nplt_y = []\nplt_ros_x = []\nplt_ros_y = []\nplt_f = []\nplt_r = []\nplt_ros_f = []\nplt_ros_r = []\nplt_t = []\nend = len(data)%9\nt0 = float(data[0])\nwhile i < (len(data)-end):\n plt_t.append(float(data[i])-t0)\n plt_x.append(float(data[i+1]))\n plt_y.append(float(data[i+2]))\n plt_ros_x.append(float(data[i+3]))\n plt_ros_y.append(float(data[i+4]))\n plt_f.append(float(data[i+5]))\n plt_r.append(float(data[i+6]))\n plt_ros_f.append(float(data[i+7]))\n plt_ros_r.append(float(data[i+8]))\n i = i + 9\n", "_____no_output_____" ], [ "fig, ax = plt.subplots()\nax.set_xlim([0,40])\nax.set_ylim([-150,150])\nax.plot(plt_t,plt_ros_f,'r',plt_t,plt_ros_r,'b',\n plt_t,plt_f,'r+',plt_t,plt_r,'b+')\nax.legend(['Front Angle PV (simulation)','Rear Angle PV (simulation)', 'Front Angle SP', 'Rear Angle SP']) \nax.set_xlabel('Time (in seconds)')\nax.set_ylabel('Joint Angle (in degrees)')", "_____no_output_____" ], [ "fig, ax = plt.subplots()\nplt_hw_f = []\nplt_hw_r = []\nwith open('hw_data.txt','r') as f:\n data = [line.split() for line in f]\nfor i in range(len(data)):\n plt_hw_f.append(float(data[i][0]))\n plt_hw_r.append(float(data[i][1]))\nax.set_xlim([0,40])\nax.set_ylim([-150,150])\nax.plot(plt_t,plt_hw_f,'g',plt_t,plt_hw_r,'y',#plt_t,plt_ros_f,'r',plt_t,plt_ros_r,'b',\n plt_t,plt_f,'r+',plt_t,plt_r,'b+')\nax.legend(['Front Angle PV (experimental)','Rear Angle PV (experimental)','Front Angle SP', 'Rear Angle SP']) \nax.set_xlabel('Time (in seconds)')\nax.set_ylabel('Joint Angle (in degrees)')\nplt.show()", "_____no_output_____" ], [ "f_length = 0.3035\nr_length = 0.3035\n\ndef direct_kinematics(fwd_angle, rear_angle, fwd_link_length, rear_link_length):\n fwd_angle = np.deg2rad(fwd_angle)\n rear_angle = np.deg2rad(rear_angle)\n return [rear_link_length*np.cos(rear_angle)+fwd_link_length*np.cos(fwd_angle+rear_angle), rear_link_length*np.sin(rear_angle)+fwd_link_length*np.sin(fwd_angle+rear_angle)]\n\nhw_xy = direct_kinematics(plt_hw_f,plt_hw_r,f_length,r_length)\nfig, ax = plt.subplots()\nax.set_xlim([-0.5,0.1])\nax.set_ylim([-0.7,-0.3])\nax.plot(plt_x,plt_y,'b+',plt_ros_x,plt_ros_y,'r+',-hw_xy[0],hw_xy[1],'g+')\nax.legend(['XY trajectory (desired)','XY trajectory (simulation)','XY trajectory (experimental)'])\nax.set_xlabel('X coordinate(in m)')\nax.set_ylabel('Y coordinate(in m)')", "_____no_output_____" ], [ "fig, ax = plt.subplots()\nax.set_xlim([0,40])\nax.set_ylim([-0.7,0.1])\nax.plot(plt_t,plt_x,'r+',plt_t,plt_y,'b+',plt_t,plt_ros_x,'r',plt_t,plt_ros_y,'b',plt_t,-hw_xy[0],'r*',plt_t,hw_xy[1],'b*')\nax.legend(['X desired','Y desired','X (simulation)','Y (simulation)','X (experimental)','Y (experimental)',])\nax.set_xlabel('Time (in seconds)')\nax.set_ylabel('Distance from origin (in m)')", "_____no_output_____" ], [ "#with open('hw_data.txt','w') as f:\n# for i in range(len(plt_hw_f)):\n# f.write(str(plt_hw_f[i])+' '+str(plt_hw_r[i])+'\\n')", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
e7fec074fa122809def31fb1c4681dc3aa50785b
551,547
ipynb
Jupyter Notebook
Lesson 3: Introduction to Neural Networks/4 GradientDescent.ipynb
makeithappenlois/Udacity-AI
c82c7abaf179730740fecae76d388b14cbb3917c
[ "MIT" ]
2
2020-01-03T17:36:15.000Z
2020-01-13T18:24:14.000Z
Lesson 3: Introduction to Neural Networks/4 GradientDescent.ipynb
makeithappenlois/Udacity-AI
c82c7abaf179730740fecae76d388b14cbb3917c
[ "MIT" ]
null
null
null
Lesson 3: Introduction to Neural Networks/4 GradientDescent.ipynb
makeithappenlois/Udacity-AI
c82c7abaf179730740fecae76d388b14cbb3917c
[ "MIT" ]
2
2020-01-03T16:30:16.000Z
2020-01-08T15:03:14.000Z
45.398551
104,184
0.659418
[ [ [ "# Implementing the Gradient Descent Algorithm\n\nIn this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\n#Some helper functions for plotting and drawing lines\n\ndef plot_points(X, y):\n admitted = X[np.argwhere(y==1)]\n rejected = X[np.argwhere(y==0)]\n plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')\n plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')\n\ndef display(m, b, color='g--'):\n plt.xlim(-0.05,1.05)\n plt.ylim(-0.05,1.05)\n x = np.arange(-10, 10, 0.1)\n plt.plot(x, m*x+b, color)", "_____no_output_____" ] ], [ [ "## Reading and plotting the data", "_____no_output_____" ] ], [ [ "data = pd.read_csv('data.csv', header=None)\nX = np.array(data[[0,1]])\ny = np.array(data[2])\nplot_points(X,y)\nplt.show()", "_____no_output_____" ] ], [ [ "## TODO: Implementing the basic functions\nHere is your turn to shine. Implement the following formulas, as explained in the text.\n- Sigmoid activation function\n\n$$\\sigma(x) = \\frac{1}{1+e^{-x}}$$\n\n- Output (prediction) formula\n\n$$\\hat{y} = \\sigma(w_1 x_1 + w_2 x_2 + b)$$\n\n- Error function\n\n$$Error(y, \\hat{y}) = - y \\log(\\hat{y}) - (1-y) \\log(1-\\hat{y})$$\n\n- The function that updates the weights\n\n$$ w_i \\longrightarrow w_i + \\alpha (y - \\hat{y}) x_i$$\n\n$$ b \\longrightarrow b + \\alpha (y - \\hat{y})$$", "_____no_output_____" ] ], [ [ "# Implement the following functions\n# Activation (sigmoid) function\ndef sigmoid(x):\n sigmoid_result = 1/(1 + np.exp(-x))\n return sigmoid_result\n\n# Output (prediction) formula\ndef output_formula(features, weights, bias):\n x = features.dot(weights) + bias\n print(x)\n output= sigmoid(x)\n return output\n\n# Error (log-loss) formula\ndef error_formula(y, output):\n error= -y*np.log(output) - (1-y)*np.log(1-output)\n return error\n\n# Gradient descent step\ndef update_weights(x, y, weights, bias, learnrate,output):\n weights = weights + learnrate*(y-output)*x\n bias = bias + learnrate*(y-output)\n return weights, bias", "_____no_output_____" ] ], [ [ "## Training function\nThis function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. 
It will also plot the data, and some of the boundary lines obtained as we run the algorithm.", "_____no_output_____" ] ], [ [ "np.random.seed(44)\n\nepochs = 100\nlearnrate = 0.01\n\ndef train(features, targets, epochs, learnrate, graph_lines=False):\n \n errors = []\n n_records, n_features = features.shape\n last_loss = None\n weights = np.random.normal(scale=1 / n_features**.5, size=n_features)\n bias = 0\n for e in range(epochs):\n del_w = np.zeros(weights.shape)\n for x, y in zip(features, targets):\n output = output_formula(x, weights, bias)\n error = error_formula(y, output)\n weights, bias = update_weights(x, y, weights, bias, learnrate, output)\n \n # Printing out the log-loss error on the training set\n out = output_formula(features, weights, bias)\n loss = np.mean(error_formula(targets, out))\n errors.append(loss)\n if e % (epochs / 10) == 0:\n print(\"\\n========== Epoch\", e,\"==========\")\n if last_loss and last_loss < loss:\n print(\"Train loss: \", loss, \" WARNING - Loss Increasing\")\n else:\n print(\"Train loss: \", loss)\n last_loss = loss\n predictions = out > 0.5\n accuracy = np.mean(predictions == targets)\n print(\"Accuracy: \", accuracy)\n if graph_lines and e % (epochs / 100) == 0:\n display(-weights[0]/weights[1], -bias/weights[1])\n \n\n # Plotting the solution boundary\n plt.title(\"Solution boundary\")\n display(-weights[0]/weights[1], -bias/weights[1], 'black')\n\n # Plotting the data\n plot_points(features, targets)\n plt.show()\n\n # Plotting the error\n plt.title(\"Error Plot\")\n plt.xlabel('Number of epochs')\n plt.ylabel('Error')\n plt.plot(errors)\n plt.show()", "_____no_output_____" ] ], [ [ "## Time to train the algorithm!\nWhen we run the function, we'll obtain the following:\n- 10 updates with the current training loss and accuracy\n- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.\n- A plot of the error function. 
Notice how it decreases as we go through more epochs.", "_____no_output_____" ] ], [ [ "train(X, y, epochs, learnrate, True)", "-0.473530635888\n0.125936869166\n-0.0361573775337\n0.256515978501\n0.0843418004565\n-0.0179345755974\n0.198924764205\n0.10068384058\n0.270812660579\n0.158779604536\n0.172757003252\n0.306288885566\n0.328800256305\n0.209874987271\n0.267438947578\n0.047722264903\n0.0292851983816\n0.156739365126\n0.309723821939\n0.299274606528\n0.276005815744\n0.429675232057\n0.2921510035\n0.364606600253\n0.12578919906\n0.641702400365\n0.599422022024\n0.310737562666\n0.0183354643532\n0.148554925344\n0.40003471251\n-0.0203031863132\n0.579938469388\n0.472752585787\n0.368941323533\n0.333232257115\n0.29846769527\n0.438832443683\n0.237952131937\n0.202982388536\n0.387709399587\n0.56040728227\n0.155229259389\n0.606795024705\n0.20609247896\n0.343010668522\n0.428619019042\n0.595505110161\n0.29566334023\n0.351850767654\n0.968547748808\n0.596693804223\n0.467408683354\n0.771046922899\n0.344475697379\n0.463224373188\n0.618007227011\n0.60095801005\n0.341550583778\n0.543606123802\n0.425673558661\n0.458713976336\n0.375257985976\n0.481305097256\n0.254997990576\n0.468541619801\n0.383089506323\n0.373376923178\n0.0186890053455\n0.385087806512\n0.601638033062\n0.268082477521\n0.555913968534\n0.314671465567\n0.223379432124\n0.199415164363\n0.14169840636\n0.41587011179\n0.0923171037297\n0.333822566489\n0.356745630609\n0.271178869636\n0.252357427055\n0.214304878022\n0.141604032251\n0.174584719612\n0.105140382844\n0.327171296831\n0.074388854521\n0.264789650244\n0.371034223554\n0.0569070316703\n-0.0052387740715\n0.313582897051\n0.110410491228\n0.0391078125133\n0.0664726601423\n0.127553402849\n0.0831172973566\n0.0969785354206\n[ -6.29819655e-01 -2.63025471e-02 -1.93737208e-01 7.46770741e-02\n -1.21470889e-01 -1.66141739e-01 1.43795244e-02 -1.08465866e-01\n 5.30446664e-02 -2.48106685e-02 3.17494051e-02 1.15548463e-01\n 1.01607381e-01 -4.76537775e-02 -1.37588773e-03 -1.85645495e-01\n -2.22802812e-01 -7.83346024e-02 2.46678539e-02 2.69746444e-02\n -1.52787841e-02 1.30838821e-01 -3.23374993e-02 1.18497425e-01\n -2.28512830e-01 2.90368731e-01 2.87541848e-01 4.29448844e-02\n -2.43245807e-01 -1.92591706e-01 9.56979268e-02 -3.05473310e-01\n 2.29936236e-01 6.79888444e-02 3.86629021e-02 1.55369122e-02\n -1.36492026e-01 4.90026900e-02 -1.60222840e-01 -1.78928397e-01\n -3.25472305e-03 8.36673503e-02 -1.84895779e-01 1.45780988e-01\n -8.97790553e-02 2.07935440e-03 -1.86998750e-02 1.59043121e-01\n -8.17021237e-02 -7.55938177e-02 3.66316870e-01 1.12773525e-01\n -9.79299953e-02 2.26280888e-01 -1.30001679e-01 -1.08975454e-01\n 1.00535989e-01 1.12326147e-01 -1.19591113e-01 7.55739965e-02\n 3.98754548e-02 7.09734117e-03 -2.71394773e-02 7.50260694e-02\n -1.34790175e-01 7.47713499e-02 3.54724500e-02 5.59102833e-02\n -2.97415333e-01 2.32670724e-02 2.39455842e-01 -3.58476379e-02\n 2.31095262e-01 3.12442782e-02 -3.80008483e-02 -8.92098732e-02\n -1.23435009e-01 1.66219811e-01 -1.38018776e-01 1.16204273e-01\n 1.49548838e-01 7.05827384e-02 5.77996283e-02 4.57429494e-02\n 6.04940733e-04 5.44302360e-03 -3.12561635e-02 1.79320701e-01\n -5.38368753e-02 1.42009954e-01 2.55577983e-01 -3.52309913e-02\n -8.69449845e-02 2.38504177e-01 4.72430872e-02 -1.29085986e-02\n 2.53634507e-02 1.01256890e-01 5.98984355e-02 8.82850390e-02]\n\n========== Epoch 0 ==========\nTrain loss: 0.713584519538\nAccuracy: 
0.4\n-0.629819654976\n-0.0184338122198\n-0.179313550436\n0.0951114405453\n-0.0930968080136\n-0.134932071696\n0.0528114248976\n-0.0618430452006\n0.105590092609\n0.0297985751817\n0.0857267880583\n0.179509149355\n0.174882696184\n0.0354849550038\n0.0890699025589\n-0.0944314591101\n-0.122780365474\n0.0232879023597\n0.139888980046\n0.144736913504\n0.111488518259\n0.263324115837\n0.111694531704\n0.248069527403\n-0.0666675846925\n0.454386195778\n0.445701356676\n0.194290715094\n-0.0885283379732\n-0.0114577566297\n0.269320121542\n-0.131432843086\n0.42420661068\n0.283979209145\n0.236043608697\n0.212675060952\n0.10331503597\n0.277917720841\n0.0769103721314\n0.0571662995772\n0.239019222566\n0.360325381436\n0.0474606551033\n0.424648960694\n0.13152917112\n0.24303154775\n0.266606784839\n0.442656467616\n0.183401749151\n0.212785375748\n0.726188688638\n0.419340763535\n0.236206732275\n0.542932863774\n0.156655749121\n0.20540157899\n0.386288664721\n0.380427735593\n0.134072789046\n0.322321216279\n0.257044647488\n0.234087003013\n0.180036870363\n0.274427933022\n0.0542751782784\n0.254723884086\n0.199550548429\n0.208105820206\n-0.15054165416\n0.164043420241\n0.367431672752\n0.0798034200086\n0.33427900724\n0.124791808029\n0.0474378935678\n-0.0182828356415\n-0.061421890029\n0.217659004942\n-0.0939121244667\n0.151024513957\n0.174033928519\n0.0830011625221\n0.05711336408\n0.0447589844371\n0.00520079709849\n-0.028324503337\n-0.0546317556456\n0.124125655066\n-0.110775953639\n0.0673608487121\n0.160344492086\n-0.121238648149\n-0.18077253326\n0.125672753419\n-0.0764708952619\n-0.145709671734\n-0.115449241037\n-0.0105596969488\n-0.124638556733\n-0.0514942262735\n[ -7.27336470e-01 -1.15551303e-01 -2.82074004e-01 -2.93023680e-02\n -2.38650477e-01 -2.34328937e-01 -7.81541814e-02 -2.14612509e-01\n -5.59735632e-02 -1.04400881e-01 -1.37579685e-02 3.70824097e-02\n 5.65862892e-04 -1.65702127e-01 -1.23263863e-01 -2.77880413e-01\n -3.23530920e-01 -1.64269215e-01 -9.18378864e-02 -7.73251570e-02\n -1.28202530e-01 1.57471733e-02 -1.59396280e-01 4.39706145e-02\n -3.66132612e-01 1.55734005e-01 1.80460849e-01 -3.29598882e-02\n -3.10967859e-01 -3.05211899e-01 6.84211599e-03 -3.77637286e-01\n 1.19300071e-01 -7.06605600e-02 -5.34116927e-02 -6.64125574e-02\n -2.81526738e-01 -6.79658987e-02 -2.77473869e-01 -2.83702695e-01\n -1.11083302e-01 -6.83981778e-02 -2.58586106e-01 7.97377301e-03\n -1.35888886e-01 -6.60334243e-02 -1.40077910e-01 4.48362867e-02\n -1.61219505e-01 -1.78388040e-01 1.73895740e-01 -2.43060753e-02\n -2.81784867e-01 4.36604986e-02 -2.78086478e-01 -3.18765313e-01\n -8.80406638e-02 -6.74784577e-02 -2.88586748e-01 -1.06704988e-01\n -9.66340010e-02 -1.79714288e-01 -1.88743364e-01 -9.78768730e-02\n -3.02871730e-01 -1.06130498e-01 -1.19029212e-01 -8.27764167e-02\n -4.40005533e-01 -1.67270095e-01 3.56889273e-02 -1.98607904e-01\n 3.67527410e-02 -1.34810186e-01 -1.91942159e-01 -2.82529321e-01\n -3.04338165e-01 -1.12063437e-02 -3.05028526e-01 -4.84991314e-02\n -1.58711258e-02 -1.00768370e-01 -1.21101717e-01 -1.09676575e-01\n -1.23929679e-01 -1.83007918e-01 -1.79457803e-01 -1.10856293e-02\n -2.28046619e-01 -4.49712841e-02 5.46944182e-02 -2.05582171e-01\n -2.55585135e-01 5.68901434e-02 -1.34293921e-01 -1.93366997e-01\n -1.53185158e-01 -3.47612656e-02 -1.45843507e-01 
-5.95725351e-02]\n-0.727336470424\n-0.107419991776\n-0.267106102046\n-0.00807938381459\n-0.209089605452\n-0.201732929898\n-0.0380856454825\n-0.165983799655\n-0.00111163959838\n-0.0473219640467\n0.0426285164914\n0.10379626832\n0.0769938473285\n-0.078924399174\n-0.0287575984773\n-0.182489106043\n-0.21895081778\n-0.0580211024574\n0.0286224722101\n0.0458722840811\n0.00445999600281\n0.154466683974\n-0.00850037923842\n0.179787012916\n-0.196516091821\n0.327768092439\n0.346495063949\n0.125924184604\n-0.148639660476\n-0.11523644814\n0.188959078292\n-0.195159071749\n0.322985322284\n0.155883084206\n0.153697192447\n0.140408191772\n-0.029921319282\n0.172347547953\n-0.0284980996905\n-0.0358090011679\n0.143313576585\n0.222211910791\n-0.0144985291737\n0.300995174879\n0.0965587931198\n0.18696590607\n0.159538470884\n0.342749908625\n0.117179055886\n0.124444892107\n0.552111642246\n0.299086811668\n0.0719132295905\n0.380542495834\n0.0279392491967\n0.0188952373649\n0.220824149859\n0.224070094599\n-0.0112461714023\n0.165187543352\n0.143319644216\n0.0741010203602\n0.0442759071833\n0.128771779745\n-0.0860665012545\n0.103107686726\n0.0729342376196\n0.0964760937676\n-0.264988270209\n0.00608232051312\n0.198113149283\n-0.0511323177412\n0.175167784569\n-0.00777819983771\n-0.0735688803634\n-0.173982258103\n-0.205105232446\n0.0774861336031\n-0.223775262987\n0.0235130761861\n0.0464398389445\n-0.049129530454\n-0.0809969828666\n-0.0720412951381\n-0.0840096252384\n-0.173084754714\n-0.163484142385\n-0.0212568263981\n-0.241006678678\n-0.0735297156616\n0.00807913780896\n-0.245986913698\n-0.303382851514\n-0.00789516214271\n-0.209013947284\n-0.276564622455\n-0.244064218438\n-0.102664470609\n-0.275212323151\n-0.152488565\n[-0.78459375 -0.16673364 -0.33224898 -0.09300351 -0.31315653 -0.26761876\n -0.1321544 -0.27987003 -0.12387592 -0.14747254 -0.0282828 -0.00524732\n -0.06070635 -0.24104431 -0.2018979 -0.33134134 -0.38407226 -0.21259279\n -0.16598858 -0.14122663 -0.1992869 -0.05736877 -0.24232898 0.00494546\n -0.45765339 0.06597262 0.11384768 -0.07303358 -0.34375583 -0.37578571\n -0.04418602 -0.41406662 0.04978373 -0.16347358 -0.10706128 -0.11152407\n -0.37940844 -0.14254045 -0.35198407 -0.34770593 -0.17790524 -0.17250286\n -0.29647132 -0.08424397 -0.15074459 -0.09950322 -0.21825864 -0.0275771\n -0.20414799 -0.24087703 0.03548639 -0.11586507 -0.41232968 -0.08631473\n -0.37853949 -0.47107866 -0.22283768 -0.19492427 -0.40661912 -0.23617621\n -0.18760902 -0.31289406 -0.3007002 -0.21947177 -0.4201144 -0.23444382\n -0.22511068 -0.17560335 -0.53560197 -0.3036025 -0.11206761 -0.3115235\n -0.10307559 -0.2505895 -0.29744705 -0.42103661 -0.43236688 -0.13673203\n -0.42136626 -0.16326607 -0.13128801 -0.22105338 -0.24771023 -0.21654382\n -0.20478966 -0.31756227 -0.2801513 -0.14753308 -0.35055343 -0.178488\n -0.09066345 -0.32487475 -0.37336597 -0.0722574 -0.26310107 -0.32118145\n -0.27945063 -0.12541208 -0.29500017 
-0.16014934]\n-0.784593747289\n-0.158452194424\n-0.31696994185\n-0.0713319199481\n-0.282905709167\n-0.234200276034\n-0.0911342740436\n-0.230074486375\n-0.0676560465367\n-0.0889350873662\n0.0295151380555\n0.0630497757555\n0.0175294379745\n-0.15216615243\n-0.105023391287\n-0.233493020694\n-0.276813522197\n-0.103625692341\n-0.0424510193138\n-0.0148195490302\n-0.0631338327645\n0.0850566142351\n-0.0873333606055\n0.14450496069\n-0.283383096188\n0.242837382015\n0.284656502824\n0.090419999968\n-0.176833268152\n-0.180483773645\n0.143051317061\n-0.226515971612\n0.259124747259\n0.0694299265508\n0.105929547961\n0.101142157523\n-0.120672475895\n0.10469251231\n-0.0958071629181\n-0.0926319462158\n0.0838723505714\n0.126625422769\n-0.0452129712835\n0.217442595858\n0.0885050014057\n0.160832699467\n0.0900830736329\n0.279070320769\n0.0823568394361\n0.0707683331729\n0.42495002496\n0.217920774085\n-0.046487989712\n0.263225507858\n-0.0603480038643\n-0.118696617558\n0.100747824269\n0.11163673207\n-0.114067969066\n0.0519387546261\n0.0670482228619\n-0.0416836981595\n-0.0508910600125\n0.024930606501\n-0.185197814266\n-0.00602749199625\n-0.0148719864412\n0.0213798821623\n-0.342133531714\n-0.108806045279\n0.0731192891355\n-0.143002529016\n0.0587199826905\n-0.101319522517\n-0.157197709436\n-0.28742128375\n-0.308305448847\n-0.0231552956273\n-0.315287914308\n-0.0663787359787\n-0.0436536282258\n-0.143125849065\n-0.180225464817\n-0.152996962821\n-0.141216900584\n-0.278266645553\n-0.237759018743\n-0.127387172976\n-0.33389067312\n-0.175952261635\n-0.104412672576\n-0.334459810517\n-0.390045656059\n-0.104514063714\n-0.304625252069\n-0.370749853177\n-0.336457788419\n-0.163585636226\n-0.386881095586\n-0.221282171229\n[-0.81467557 -0.19222454 -0.35667539 -0.12947917 -0.35879197 -0.27741475\n -0.16013823 -0.31748954 -0.16397506 -0.16592248 -0.02200911 -0.02321211\n -0.09510344 -0.28748944 -0.25125387 -0.35863017 -0.41746702 -0.2355424\n -0.2114811 -0.17782217 -0.24207159 -0.10208314 -0.29538246 -0.01015326\n -0.51794171 0.00662391 0.07460218 -0.08895712 -0.35302867 -0.41792609\n -0.0696813 -0.42643078 0.00808264 -0.22522003 -0.13476817 -0.13179098\n -0.44532435 -0.18842858 -0.39757919 -0.38415645 -0.21700242 -0.24407184\n -0.31023633 -0.14556171 -0.14462045 -0.10964615 -0.26720136 -0.07171339\n -0.22240911 -0.27612983 -0.06619034 -0.17657339 -0.5066513 -0.18050923\n -0.44669549 -0.58428977 -0.32107651 -0.28679327 -0.49005139 -0.32976226\n -0.24772763 -0.4096227 -0.37896176 -0.30621804 -0.50284373 -0.32702353\n -0.29834153 -0.2373482 -0.59935115 -0.40308695 -0.22171688 -0.39060771\n -0.20583119 -0.33223637 -0.37009396 -0.52228207 -0.52447481 -0.22699483\n -0.50330617 -0.24413017 -0.21275427 -0.30665721 -0.33879038 -0.29046861\n -0.25608069 -0.41548261 -0.34862803 -0.2472943 -0.43794548 -0.27566212\n -0.19824819 -0.40949691 -0.45661683 -0.16574707 -0.35607839 -0.41322861\n -0.37019615 -0.18531877 -0.40566169 
-0.22866024]\n-0.814675571419\n-0.183865476645\n-0.341238533383\n-0.107583605888\n-0.328181489145\n-0.243549072267\n-0.1186228609\n-0.267086968177\n-0.107037344073\n-0.106602977722\n0.0365322894414\n0.0458845906609\n-0.0159614972914\n-0.197543401873\n-0.153147533028\n-0.25948078353\n-0.308791832756\n-0.125136809097\n-0.0863187656643\n-0.0497018382835\n-0.104045708983\n0.0423459997676\n-0.138149603675\n0.131462829439\n-0.341119450933\n0.186170581691\n0.248090635986\n0.0770585714661\n-0.183550354109\n-0.219673133382\n0.120395253921\n-0.236081675614\n0.22053755834\n0.0112061497349\n0.0815003349374\n0.0841222948917\n-0.182618622788\n0.062689477892\n-0.137345988561\n-0.125031461865\n0.0489422473587\n0.0598906714958\n-0.0549001343987\n0.161065666046\n0.0984872476428\n0.154829042444\n0.0460746704772\n0.239886949081\n0.068677972282\n0.0404960738135\n0.329696406805\n0.163222927921\n-0.133722288652\n0.176521706695\n-0.121257347593\n-0.223037335659\n0.0114892694767\n0.0290046531353\n-0.188089006844\n-0.0315397083719\n0.0160950206865\n-0.127484149603\n-0.118571306664\n-0.0505741669904\n-0.256420987446\n-0.0863641803986\n-0.076421431692\n-0.0290325326658\n-0.394078328376\n-0.194483373391\n-0.0217881500546\n-0.208440460117\n-0.0287934591083\n-0.168451930523\n-0.21556087113\n-0.372217014852\n-0.384089957429\n-0.0970234449855\n-0.380857051411\n-0.130818225167\n-0.108374513432\n-0.211313215843\n-0.253126154952\n-0.209725624628\n-0.17685616188\n-0.356633135175\n-0.288679508059\n-0.206913857651\n-0.401499201259\n-0.252310243072\n-0.189928757522\n-0.398392273351\n-0.452393070674\n-0.176105404139\n-0.375227357188\n-0.440106037802\n-0.404326835156\n-0.2034685721\n-0.472155490003\n-0.268326100343\n[-0.82652074 -0.2004837 -0.36383729 -0.14765462 -0.38499382 -0.27150947\n -0.1706632 -0.33653012 -0.18537403 -0.16788281 -0.00189608 -0.02485971\n -0.11144232 -0.31447963 -0.28088863 -0.3683603 -0.43262851 -0.24148267\n -0.23768143 -0.19606364 -0.26581446 -0.127679 -0.32829891 -0.00923889\n -0.55716214 -0.03220448 0.05376426 -0.08871546 -0.3465886 -0.44093913\n -0.07804988 -0.42270297 -0.01490269 -0.26600157 -0.14506634 -0.13541199\n -0.48965983 -0.21500396 -0.42371145 -0.40209012 -0.23745617 -0.29365596\n -0.30786623 -0.18602691 -0.12453614 -0.10419677 -0.29645138 -0.0968164\n -0.2241513 -0.29308116 -0.14295875 -0.21646476 -0.57643704 -0.25046159\n -0.49304096 -0.67096796 -0.39453893 -0.35456627 -0.55007481 -0.39903987\n -0.2870283 -0.48165303 -0.43443846 -0.36937506 -0.56222503 -0.39539999\n -0.34937097 -0.27811786 -0.64160872 -0.47759711 -0.30550929 -0.44681296\n -0.28344802 -0.39079267 -0.42053737 -0.59827068 -0.59225773 -0.29337724\n -0.56197831 -0.30205953 -0.27125126 -0.36878815 -0.40581038 -0.34212787\n -0.28744863 -0.48857782 -0.39534588 -0.32218705 -0.50156889 -0.34820772\n -0.28020723 -0.47065796 -0.51650616 -0.23507932 -0.42478681 -0.48105273\n -0.43688868 -0.22448288 -0.49020457 
-0.27551203]\n-0.826520739\n-0.192094323252\n-0.348343744125\n-0.125683651867\n-0.354242879392\n-0.237445709351\n-0.128955750853\n-0.285892735059\n-0.128144268525\n-0.108231726066\n0.0569437612232\n0.044514576223\n-0.0319951623009\n-0.224154337971\n-0.18230864671\n-0.268681315322\n-0.323379095066\n-0.130492959655\n-0.111863602486\n-0.0672291814421\n-0.126996284276\n0.0176165153212\n-0.170074042878\n0.133305752918\n-0.379193579275\n0.148586918457\n0.228529269452\n0.0785173869914\n-0.175919981655\n-0.241326200802\n0.113337708522\n-0.23107835344\n0.198962965385\n-0.0279544892183\n0.0727333248441\n0.0820055947936\n-0.225104136635\n0.0379635706869\n-0.161531889622\n-0.141013555584\n0.0304992348588\n0.0126681743623\n-0.050528713793\n0.123039986429\n0.120452351766\n0.162270137763\n0.0192124307006\n0.217196253482\n0.0691500260201\n0.025950273452\n0.256107071063\n0.126390889989\n-0.19982696626\n0.11057624394\n-0.1636742865\n-0.304793515849\n-0.0568689464259\n-0.0334337675833\n-0.242620710172\n-0.0948436081271\n-0.017784445563\n-0.192960136678\n-0.167666092603\n-0.106895392356\n-0.308764870212\n-0.147190170307\n-0.120231198087\n-0.0627988620149\n-0.429027732885\n-0.260353797671\n-0.0962677584073\n-0.256008121042\n-0.0966789826375\n-0.217723974747\n-0.256861054743\n-0.437591896117\n-0.441307248218\n-0.152758468091\n-0.428880843203\n-0.178042637619\n-0.155932360607\n-0.262033086832\n-0.308194319482\n-0.250086974814\n-0.197985338319\n-0.41681908554\n-0.323836571486\n-0.268394533278\n-0.451995893777\n-0.310993451465\n-0.25712522567\n-0.445716926454\n-0.498288104733\n-0.230730268056\n-0.428879692298\n-0.492636756778\n-0.455578124643\n-0.22917046825\n-0.539493302104\n-0.300684234958\n[-0.82616615 -0.19722674 -0.35946593 -0.15356108 -0.39813868 -0.2551663\n -0.16951134 -0.34311265 -0.19422426 -0.15884742 0.02735597 -0.0156276\n -0.11568097 -0.32839525 -0.29726048 -0.36635058 -0.43557836 -0.2360646\n -0.25091883 -0.20200001 -0.27677139 -0.14043003 -0.34766193 0.00234201\n -0.58218316 -0.05719929 0.04527808 -0.07770323 -0.32970519 -0.45111245\n -0.07497173 -0.40826814 -0.02532215 -0.29264553 -0.14372199 -0.12792626\n -0.51943322 -0.22860111 -0.43676756 -0.40761172 -0.24540295 -0.32838665\n -0.29475465 -0.21243062 -0.09523234 -0.08838003 -0.31245908 -0.10913333\n -0.2148792 -0.29776785 -0.20281358 -0.24232074 -0.62958665 -0.30397246\n -0.52466276 -0.73960982 -0.45118956 -0.40600427 -0.59425326 -0.45183476\n -0.31229545 -0.53692962 -0.47450457 -0.41655376 -0.60580453 -0.44736754\n -0.38539682 -0.30474315 -0.66937213 -0.53515942 -0.37172732 -0.48754195\n -0.34399439 -0.43372193 -0.4559783 -0.65711723 -0.6435533 -0.34357396\n -0.60490506 -0.34446816 -0.31420214 -0.41502254 -0.45652243 -0.37873819\n -0.30541201 -0.54483041 -0.42737281 -0.38020071 -0.54909271 -0.4040438\n -0.34475344 -0.51593444 -0.56058255 -0.28802907 -0.47704125 -0.53245713\n -0.48727941 -0.24966418 -0.55699576 
-0.30773869]\n-0.826166148351\n-0.188838269006\n-0.343982799729\n-0.131613387357\n-0.367393138175\n-0.221070215534\n-0.127813577651\n-0.292488268295\n-0.136986007203\n-0.099164930263\n0.0861976835832\n0.0536766563969\n-0.0363288194511\n-0.238149554954\n-0.198712168776\n-0.266656319413\n-0.326316194542\n-0.125060325131\n-0.125091708325\n-0.073117763424\n-0.137881700984\n0.00497295491558\n-0.189276277289\n0.145062035342\n-0.404006750779\n0.123877302449\n0.220382059418\n0.0898477970665\n-0.158758677895\n-0.251202722867\n0.116705646413\n-0.216385568833\n0.188815985277\n-0.054248949401\n0.0744408397268\n0.0898301617149\n-0.254445177847\n0.0248543004003\n-0.174053074264\n-0.145988012538\n0.0231205981756\n-0.0213554384019\n-0.0368056280859\n0.0974013362779\n0.150313192548\n0.178641103988\n0.00388790299289\n0.205590621745\n0.0790509243957\n0.0219450456152\n0.19725837072\n0.10161512505\n-0.251580701585\n0.0587336761997\n-0.193595975625\n-0.371166152594\n-0.111020960969\n-0.0821618372331\n-0.283944370315\n-0.144446632008\n-0.0401501007204\n-0.244626953704\n-0.204177642271\n-0.150205145436\n-0.348316514007\n-0.194767969386\n-0.152042710095\n-0.0853367261531\n-0.452511125185\n-0.312757462297\n-0.156831491684\n-0.291475160719\n-0.151209192355\n-0.254895236204\n-0.286624211733\n-0.489759708755\n-0.485919021343\n-0.196182635231\n-0.465017067633\n-0.213602243804\n-0.191858614726\n-0.300905427225\n-0.351153586516\n-0.27937587042\n-0.209358404271\n-0.464642384187\n-0.348344033532\n-0.317595958354\n-0.490880850232\n-0.357655495115\n-0.311835538276\n-0.481777972473\n-0.533028342407\n-0.273821201248\n-0.471013036243\n-0.533735339238\n-0.495540291208\n-0.245312199451\n-0.59459576894\n-0.323117543979\n[-0.81767815 -0.18630443 -0.3474223 -0.15126258 -0.40252332 -0.23193042\n -0.16057834 -0.34136129 -0.1946709 -0.14251743 0.0625815 0.00082089\n -0.11183407 -0.33353582 -0.30472162 -0.35652114 -0.43037349 -0.22309525\n -0.25545842 -0.19970721 -0.27915787 -0.14456403 -0.3579082 0.02098749\n -0.59763337 -0.07286758 0.04506253 -0.05955468 -0.30592762 -0.45268258\n -0.06427383 -0.38675326 -0.02732024 -0.30975339 -0.13462018 -0.11306556\n -0.53937449 -0.23348878 -0.44105101 -0.40483444 -0.24497767 -0.35307069\n -0.27453465 -0.22934999 -0.05990154 -0.06571552 -0.31957129 -0.11287423\n -0.19830105 -0.29425743 -0.25114497 -0.25871202 -0.67142515 -0.34630068\n -0.5463371 -0.79594329 -0.49639769 -0.44633908 -0.62768498 -0.4934224\n -0.32810151 -0.58080801 -0.50413054 -0.45288461 -0.63866857 -0.48818043\n -0.41127055 -0.32182789 -0.68735688 -0.58118471 -0.4259551 -0.51778427\n -0.39290981 -0.466055 -0.4812702 -0.70429193 -0.68364456 -0.38277219\n -0.63715664 -0.37635366 -0.34661077 -0.45046747 -0.4961523 -0.40516376\n -0.31436368 -0.58962151 -0.4494725 -0.42672141 -0.58568631 -0.44850893\n -0.39742401 -0.55043332 -0.59393402 -0.32983783 -0.51810979 -0.57270194\n -0.52659337 -0.2654186 -0.61167605 
-0.33008122]\n-0.817678148271\n-0.17793767123\n-0.331994285564\n-0.129403926683\n-0.371880440748\n-0.197912557548\n-0.119024752397\n-0.290915093797\n-0.137613173617\n-0.0830036978165\n0.121227406085\n0.0698238009752\n-0.0328431752417\n-0.24367510201\n-0.206541433935\n-0.257154189556\n-0.321472616044\n-0.112455810455\n-0.130052478751\n-0.071221197318\n-0.140677075784\n0.00044048433473\n-0.199916121877\n0.163380533387\n-0.419874961947\n0.107854267415\n0.219879715856\n0.107714479425\n-0.135312031746\n-0.253185475435\n0.127011917248\n-0.195291553644\n0.18633096179\n-0.0718558080372\n0.0831260890181\n0.104251751666\n-0.274900383925\n0.0195454348999\n-0.178744109637\n-0.143601421708\n0.0231506469634\n-0.046437549812\n-0.0169026394825\n0.0801282258183\n0.185316594142\n0.200899761813\n-0.00367958458266\n0.201424785371\n0.0951988865726\n0.0249853791781\n0.14848105538\n0.0849801587394\n-0.293552745009\n0.0165075958758\n-0.215063321402\n-0.427008800476\n-0.155478430749\n-0.12154850183\n-0.316291946323\n-0.184710924743\n-0.0547471737954\n-0.286874412606\n-0.23214941689\n-0.184661646217\n-0.379176361157\n-0.233316886007\n-0.175723489005\n-0.100295171644\n-0.468252511107\n-0.355966212637\n-0.207867425332\n-0.318727991694\n-0.196610322504\n-0.283845247606\n-0.30857223939\n-0.532906553411\n-0.521941620591\n-0.231218894119\n-0.493077270255\n-0.241236643408\n-0.219880238921\n-0.331716872572\n-0.385860674352\n-0.301159758223\n-0.214177770782\n-0.504023769475\n-0.365647584637\n-0.358405092104\n-0.521860810891\n-0.396107341719\n-0.357992391089\n-0.510177406168\n-0.560184584643\n-0.309040886951\n-0.505288743384\n-0.56703811148\n-0.527806401175\n-0.255009021715\n-0.641308266758\n-0.33883647902\n[-0.80379715 -0.17031208 -0.33030843 -0.14349868 -0.40104441 -0.20419048\n -0.14648991 -0.33405607 -0.18950831 -0.12138699 0.10164831 0.02201699\n -0.10260781 -0.3327999 -0.30620628 -0.34151399 -0.41974837 -0.20514033\n -0.25417561 -0.19193277 -0.27581574 -0.14293132 -0.36202904 0.04427028\n -0.60663359 -0.08224856 0.0503663 -0.03671883 -0.27764715 -0.44850544\n -0.04853551 -0.36060207 -0.02369105 -0.32042793 -0.12037947 -0.09334479\n -0.55267306 -0.23254489 -0.43946298 -0.39653077 -0.23896772 -0.37094966\n -0.24965423 -0.23987143 -0.02069407 -0.03857478 -0.32071858 -0.11087741\n -0.17691577 -0.28529167 -0.2915891 -0.26872061 -0.70554423 -0.38099354\n -0.56128467 -0.84383237 -0.53378509 -0.47909961 -0.65380837 -0.52736111\n -0.33752964 -0.61690059 -0.52666846 -0.48182794 -0.66424744 -0.52138269\n -0.43026389 -0.33247642 -0.6987425 -0.61932277 -0.47195978 -0.54090497\n -0.43386369 -0.49118484 -0.4996861 -0.74348458 -0.71609482 -0.41447072\n -0.66215245 -0.40108662 -0.37185196 -0.47856749 -0.52822472 -0.42468492\n -0.31726535 -0.62658084 -0.46485729 -0.46538241 -0.61483623 -0.48520418\n -0.44195435 -0.57759898 -0.61999207 -0.36404116 -0.55154577 -0.60533507\n -0.55835477 -0.27481818 -0.65805067 
-0.34573674]\n-0.803797153197\n-0.161980984995\n-0.314965360409\n-0.121772893741\n-0.370569015555\n-0.170324534462\n-0.105170041881\n-0.283897980151\n-0.132756928406\n-0.0621753004289\n0.159966894408\n0.0905651791407\n-0.0241546296487\n-0.243526658433\n-0.208617472461\n-0.242701826994\n-0.311457168984\n-0.0951176793218\n-0.129476717063\n-0.0641379726017\n-0.138063053991\n0.00133846156514\n-0.20479916622\n0.186002241784\n-0.429709317206\n0.0976929845042\n0.22448025709\n0.129869139873\n-0.10776742739\n-0.249892758261\n0.141905423488\n-0.170012572245\n0.188968338606\n-0.0835936801607\n0.096431502857\n0.12301596163\n-0.289342369779\n0.0194632686615\n-0.178190945187\n-0.136312668535\n0.0281241504657\n-0.0654503826461\n0.0070424404666\n0.0685079362957\n0.223607781184\n0.226995851947\n-0.00603967746712\n0.202240520771\n0.115449343865\n0.0327148746\n0.106624293324\n0.0738454760607\n-0.328825452653\n-0.0191286363017\n-0.230801293641\n-0.47559596447\n-0.193284990185\n-0.154540824177\n-0.34251766162\n-0.218578985298\n-0.0641009973519\n-0.322663673721\n-0.254308431458\n-0.213069645661\n-0.404109978923\n-0.265683169188\n-0.193881900385\n-0.110134881466\n-0.478763083599\n-0.392862134072\n-0.252336509932\n-0.340388030813\n-0.235733910349\n-0.307190896214\n-0.325215465405\n-0.569856842018\n-0.552084896824\n-0.260514328962\n-0.515633050691\n-0.263469109194\n-0.242512048255\n-0.357022640856\n-0.414918347801\n-0.317845781583\n-0.214603605439\n-0.537609671326\n-0.378072319087\n-0.393445985662\n-0.547437827972\n-0.428921949547\n-0.398251384681\n-0.533346855153\n-0.582167527315\n-0.338862656554\n-0.534179248985\n-0.595000752988\n-0.554803109235\n-0.260364118625\n-0.682228757326\n-0.350009092425\n[-0.7863724 -0.1510009 -0.30988001 -0.1321184 -0.39565734 -0.17355753\n -0.12901789 -0.32307331 -0.18062263 -0.09713875 0.14311902 0.04629519\n -0.08982865 -0.32814423 -0.30369537 -0.32311191 -0.40554839 -0.18393094\n -0.24901148 -0.18053109 -0.26866329 -0.13745602 -0.36204405 0.07055297\n -0.6112911 -0.08739444 0.0593323 -0.01084778 -0.24647659 -0.44050873\n -0.02949724 -0.33146293 -0.01632064 -0.32676435 -0.10276684 -0.07046068\n -0.56148275 -0.22771217 -0.43396191 -0.38457194 -0.22925449 -0.38421282\n -0.2217647 -0.24607909 0.02094049 -0.00855733 -0.3178794 -0.10505888\n -0.15240922 -0.272721 -0.3266028 -0.27442767 -0.73437034 -0.41044753\n -0.57168049 -0.88588804 -0.56579871 -0.50666985 -0.67494596 -0.55605485\n -0.34266169 -0.6476479 -0.54438245 -0.50572127 -0.68485818 -0.54936855\n -0.44458651 -0.338785 -0.70567596 -0.65203948 -0.5122869 -0.55917691\n -0.4693353 -0.51140329 -0.5134365 -0.77718815 -0.74331126 -0.44103316\n -0.68220214 -0.42094375 -0.39220535 -0.50164939 -0.55512086 -0.43951717\n -0.31611659 -0.65816066 -0.47569656 -0.49863848 -0.63889766 -0.51656246\n -0.48086853 -0.59975804 -0.64107457 -0.39302763 -0.57974972 -0.63275333\n -0.58494437 -0.27993729 -0.6986909 
-0.35686438]\n-0.786372399042\n-0.142714837553\n-0.29464171822\n-0.110554567726\n-0.365392450877\n-0.139892467613\n-0.0879910438446\n-0.273276098699\n-0.124260906395\n-0.0383173894021\n0.201023322004\n0.114287790942\n-0.0120290823338\n-0.239591602733\n-0.206844931389\n-0.225004466383\n-0.298030628872\n-0.0746912092735\n-0.125208081543\n-0.053622093379\n-0.131849346302\n0.00585706865193\n-0.205819886138\n0.211403123683\n-0.435475802262\n0.0914856146148\n0.232467624725\n0.154794950635\n-0.0776002731285\n-0.243092162536\n0.159799562708\n-0.142043641315\n0.195013662347\n-0.0913661617176\n0.112766072814\n0.144601731824\n-0.299711314028\n0.0228701403166\n-0.174139476275\n-0.125781527574\n0.0363769208487\n-0.0803341715512\n0.0335874950601\n0.0607083487363\n0.263936527396\n0.255546564197\n-0.00491382446055\n0.206378275168\n0.138355543883\n0.0435431192152\n0.0695587282008\n0.0664280301432\n-0.359481392741\n-0.0502200314264\n-0.242650144566\n-0.51914062462\n-0.226497304685\n-0.183129999144\n-0.364549677404\n-0.248039022947\n-0.0699168636167\n-0.353995658767\n-0.27249686316\n-0.237324160721\n-0.424985779188\n-0.293790162766\n-0.208279953346\n-0.116517730666\n-0.485738900974\n-0.425393162644\n-0.292240390811\n-0.358226463969\n-0.27050748939\n-0.326700447288\n-0.33824989776\n-0.602519821244\n-0.578180526211\n-0.285858272715\n-0.53442246807\n-0.282005108771\n-0.261453848768\n-0.378550013098\n-0.440086269645\n-0.331060870827\n-0.212095172031\n-0.567189656641\n-0.387189698027\n-0.424493326968\n-0.569303753504\n-0.457839556465\n-0.434409070679\n-0.552930703261\n-0.600607452983\n-0.36495961098\n-0.559357077158\n-0.61928453116\n-0.578172129827\n-0.262799448174\n-0.719115661881\n-0.358101692806\n[-0.76665322 -0.12955374 -0.28732289 -0.11837114 -0.3876838 -0.14111907\n -0.10935918 -0.30968096 -0.16928851 -0.07090894 0.18602417 0.07253088\n -0.07473064 -0.32089142 -0.29852799 -0.30251918 -0.38902043 -0.16063627\n -0.24127803 -0.16675525 -0.258997 -0.12943862 -0.35931861 0.09873007\n -0.61303083 -0.08969287 0.07070554 0.01694299 -0.21350459 -0.42999531\n -0.00833459 -0.30044866 -0.00648364 -0.33017946 -0.08297591 -0.04555907\n -0.56726019 -0.2203038 -0.42587154 -0.37022237 -0.21710937 -0.39434098\n -0.19198089 -0.2493823 0.06402443 0.02325711 -0.31239114 -0.09671362\n -0.12591979 -0.25779574 -0.35784892 -0.27724046 -0.75954552 -0.43628446\n -0.5789956 -0.92387785 -0.5940946 -0.53066307 -0.69266913 -0.58113041\n -0.34490541 -0.67470162 -0.55880418 -0.52614623 -0.70206846 -0.5737582\n -0.4557331 -0.34217115 -0.70960921 -0.68100378 -0.5486596 -0.57413774\n -0.50100274 -0.52826091 -0.5240166 -0.80708993 -0.76692284 -0.46405892\n -0.69886835 -0.43746533 -0.4092132 -0.52128753 -0.57845203 -0.45115917\n -0.31226914 -0.68602062 -0.48345756 -0.52815102 -0.65946422 -0.54423032\n -0.51587507 -0.61848471 -0.65874976 -0.41841366 -0.60434612 -0.65657865\n -0.60797322 -0.28217859 -0.73533732 
-0.36492446]\n-0.766653219268\n-0.121318996814\n-0.272202455524\n-0.096988387658\n-0.35765769445\n-0.107687148883\n-0.0686642047691\n-0.260292240352\n-0.113371015826\n-0.012535775516\n0.243457377691\n0.139902561309\n0.00234038840155\n-0.233145618021\n-0.202511295537\n-0.205213934576\n-0.282382609869\n-0.0522875192656\n-0.118492680801\n-0.0408587150245\n-0.123259015824\n0.0127730937756\n-0.204259063404\n0.238554322857\n-0.438503843998\n0.0879420664434\n0.242682190451\n0.181467888842\n-0.0458063349927\n-0.23397825477\n0.179622788855\n-0.112393890105\n0.203308346489\n-0.0964605009762\n0.131055212479\n0.167982060521\n-0.307319364033\n0.0285915893214\n-0.167769730206\n-0.113129453179\n0.0467844225079\n-0.0924013533492\n0.0617591156044\n0.0554905469031\n0.305459965206\n0.285618784405\n-0.00146559860433\n0.212716519082\n0.162940834212\n0.0563956774703\n0.0358429818815\n0.0615222397514\n-0.386929949285\n-0.0781505906705\n-0.251854419412\n-0.559141082072\n-0.25650747397\n-0.208663610623\n-0.383692645229\n-0.27443670412\n-0.0733478605428\n-0.382224768152\n-0.287961093462\n-0.258707642424\n-0.443068095061\n-0.318939753516\n-0.220109709209\n-0.120567632726\n-0.490327105671\n-0.454878167248\n-0.328934678928\n-0.373441892953\n-0.302236322531\n-0.343570656432\n-0.348823106162\n-0.632188420397\n-0.601468703256\n-0.308462246356\n-0.550622084673\n-0.297999141551\n-0.277856348238\n-0.397468467889\n-0.46255607031\n-0.341906235292\n-0.207639394296\n-0.593975936635\n-0.394063289566\n-0.452749408899\n-0.588604413185\n-0.484039145493\n-0.467682976792\n-0.570042679676\n-0.616608549916\n-0.388465335871\n-0.581955450103\n-0.641015075055\n-0.599025865115\n-0.263277499411\n-0.753161228256\n-0.364107218972\n[-0.74548425 -0.10676999 -0.26343845 -0.10310178 -0.37801798 -0.10760934\n -0.08832275 -0.29473649 -0.15636805 -0.04346512 0.22971011 0.09996439\n -0.05814817 -0.31193632 -0.29161024 -0.28054979 -0.37100753 -0.13604634\n -0.23186284 -0.15145258 -0.24769389 -0.11975889 -0.35477675 0.12805496\n -0.61281745 -0.09008319 0.08363732 0.04590008 -0.17946616 -0.41784648\n 0.01415799 -0.26831074 0.00495779 -0.33163248 -0.06181344 -0.01941406\n -0.57099171 -0.21120815 -0.41608744 -0.35433712 -0.20339223 -0.40233702\n -0.16105573 -0.25073517 0.10789847 0.05613956 -0.30515873 -0.08671764\n -0.09821648 -0.24136135 -0.38645477 -0.27811169 -0.78218273 -0.45960356\n -0.58422617 -0.95900048 -0.61979534 -0.5521726 -0.70804258 -0.6036904\n -0.3452138 -0.6991813 -0.57097125 -0.54417457 -0.7169404 -0.59564974\n -0.464716 -0.34359456 -0.71152534 -0.70734699 -0.58224634 -0.58682911\n -0.53000367 -0.54280819 -0.53243913 -0.83433378 -0.78803355 -0.48463193\n -0.71320979 -0.4516949 -0.42392039 -0.53854867 -0.59931022 -0.46062597\n -0.30663777 -0.71128572 -0.48913389 -0.55504628 -0.6776158 -0.56932385\n -0.54813245 -0.63484561 -0.6740802 -0.44129478 -0.62643596 -0.67791027\n -0.62853313 -0.28249164 -0.76917025 -0.37090589]\n\n========== Epoch 10 ==========\nTrain loss: 0.622583521045\nAccuracy: 
0.59\n-0.745484254676\n-0.0985907559605\n-0.248444378069\n-0.081912442392\n-0.348248914638\n-0.0744314830735\n-0.0479846238235\n-0.24578680139\n-0.100929845284\n0.0144226324815\n0.286635851519\n0.166673979532\n0.0181472281333\n-0.225051664692\n-0.196487630019\n-0.184108622406\n-0.265317318967\n-0.0286572524339\n-0.110173404392\n-0.0266491195529\n-0.11311925328\n0.0212591815574\n-0.200983370307\n0.266761205464\n-0.439693351554\n0.0861891306983\n0.254339856408\n0.209196546276\n-0.0130576556681\n-0.223358919013\n0.20065117565\n-0.0817445159466\n0.213068940952\n-0.0997480467094\n0.150572863056\n0.192463392145\n-0.313054905502\n0.0358332274097\n-0.159879874326\n-0.0991146087838\n0.0585863193167\n-0.102540732176\n0.0909002479571\n0.0520159178363\n0.347610214277\n0.316582926565\n0.00351788527298\n0.220496803091\n0.188545832179\n0.0705463107879\n0.00449985171937\n0.0583121382393\n-0.412126383395\n-0.103858376072\n-0.259256779688\n-0.596613548885\n-0.284259255986\n-0.232055033725\n-0.400830571689\n-0.298684198783\n-0.0751744815118\n-0.40826922651\n-0.301545515458\n-0.278089236815\n-0.459213679696\n-0.342014506705\n-0.230178631988\n-0.123045464836\n-0.49330442527\n-0.482211532764\n-0.363339038292\n-0.386846511467\n-0.331805741646\n-0.358612583898\n-0.357712513158\n-0.659739636818\n-0.622790403534\n-0.329147735549\n-0.565029431147\n-0.312233749012\n-0.292499549955\n-0.414570902909\n-0.483135884757\n-0.351128115532\n-0.20190423115\n-0.618790902692\n-0.399413681068\n-0.479030005434\n-0.606116894872\n-0.508320655921\n-0.498899574582\n-0.585438092113\n-0.630919639982\n-0.410148960854\n-0.602743269794\n-0.660956141818\n-0.618119086322\n-0.262450262987\n-0.78517515009\n-0.368698683115\n[-0.7234366 -0.08318983 -0.2387682 -0.08688173 -0.36726508 -0.07352376\n -0.06645539 -0.27881992 -0.1424445 -0.0153256 0.27373668 0.12808279\n -0.04064547 -0.30188461 -0.28355552 -0.25775389 -0.35207978 -0.11069491\n -0.22136666 -0.13519618 -0.2353474 -0.10901221 -0.34904422 0.15802369\n -0.61130454 -0.08920172 0.09755353 0.07551488 -0.14485723 -0.40465859\n 0.03744367 -0.23555641 0.01742035 -0.33177329 -0.03982466 0.00745152\n -0.57334576 -0.20102652 -0.40521578 -0.33749456 -0.18868478 -0.40888084\n -0.12949745 -0.25078411 0.15211863 0.08959818 -0.2967948 -0.07566377\n -0.06981861 -0.2239896 -0.4131857 -0.27768679 -0.8030374 -0.48115085\n -0.58804743 -0.99207016 -0.64366299 -0.57194044 -0.72178858 -0.62448313\n -0.34423248 -0.72184691 -0.58158735 -0.56053344 -0.73019453 -0.61578839\n -0.47222159 -0.34370546 -0.71209055 -0.73183716 -0.61384066 -0.59795751\n -0.55711052 -0.55575772 -0.53939066 -0.85969632 -0.80739266 -0.5034877\n -0.72594461 -0.46434024 -0.43703567 -0.55415656 -0.61843651 -0.46860583\n -0.29984182 -0.73471966 -0.49339905 -0.58008886 -0.69408513 -0.59260066\n -0.57842738 -0.64956438 -0.68778667 -0.46241452 -0.64676647 -0.6974942\n -0.64736501 -0.2815197 -0.80099133 
-0.37547894]\n-0.723436601081\n-0.0750688554385\n-0.223905865928\n-0.0658935200631\n-0.337764508497\n-0.0406131790601\n-0.0264896079664\n-0.230328173452\n-0.087507349641\n0.0420535611129\n0.330132507003\n0.194105557648\n0.0348461619554\n-0.215893732187\n-0.189363528265\n-0.162214478046\n-0.247378435896\n-0.00430735952071\n-0.100820582061\n-0.0115350934413\n-0.101989650539\n0.0307555675352\n-0.196579562788\n0.295555104073\n-0.439653913089\n0.0856353992934\n0.26691039315\n0.237514393014\n0.0201925904484\n-0.211780652862\n0.222395805398\n-0.0505550096147\n0.223765584341\n-0.101819032652\n0.170828579585\n0.217577597312\n-0.317519904784\n0.0440575862221\n-0.151009950964\n-0.0842495723565\n0.0712684758234\n-0.111354771265\n0.120567721776\n0.0497164291911\n0.390005334707\n0.348014615849\n0.00950397497976\n0.229206105906\n0.214725640202\n0.0855041502859\n-0.025134166719\n0.0562450808802\n-0.435719076192\n-0.127980086299\n-0.265428284494\n-0.632248467721\n-0.310393408918\n-0.253924189012\n-0.416563170203\n-0.32140069525\n-0.0759253679516\n-0.432752464836\n-0.313822810756\n-0.296058730189\n-0.474003790253\n-0.363613550119\n-0.239034198366\n-0.124466677977\n-0.495197175466\n-0.508000700469\n-0.396078434677\n-0.398991278281\n-0.359817190443\n-0.372376527866\n-0.36544527667\n-0.685769264677\n-0.642716669908\n-0.348472482864\n-0.578185724248\n-0.325239927624\n-0.305912754876\n-0.430395537838\n-0.502374488946\n-0.359232662003\n-0.195341879361\n-0.642193293655\n-0.403729449493\n-0.503889446347\n-0.622368849916\n-0.531227610714\n-0.528620787466\n-0.599629751403\n-0.644049100867\n-0.430533013966\n-0.622242920121\n-0.679626611376\n-0.635964539694\n-0.260759568363\n-0.815708222231\n-0.372332535619\n[-0.70089616 -0.05917794 -0.21367771 -0.07009727 -0.35583472 -0.03919604\n -0.04412634 -0.26232346 -0.12791229 0.01316003 0.3178083 0.15654019\n -0.02260387 -0.29114622 -0.27477915 -0.23450302 -0.33262251 -0.08494219\n -0.21019601 -0.11837351 -0.22235902 -0.09760146 -0.3425448 0.18829654\n -0.60893509 -0.08747963 0.11206598 0.10544449 -0.11001185 -0.39083511\n 0.06116008 -0.2025275 0.03050949 -0.3310424 -0.01737776 0.03468495\n -0.57477564 -0.19016583 -0.39366681 -0.32008569 -0.17338037 -0.41443368\n -0.09764862 -0.24996723 0.19638671 0.12330171 -0.28771408 -0.06395308\n -0.04107642 -0.20606702 -0.43856207 -0.2764032 -0.82262289 -0.50143333\n -0.59091735 -1.02364092 -0.66621557 -0.59047075 -0.73439761 -0.64401713\n -0.34239901 -0.74321499 -0.59113014 -0.57571674 -0.74232009 -0.63468054\n -0.47871557 -0.34294457 -0.71175654 -0.75499645 -0.64398238 -0.60800252\n -0.58284852 -0.56759328 -0.54533705 -0.88370558 -0.82550929 -0.52112593\n -0.73756043 -0.47588187 -0.44904021 -0.56860272 -0.63633438 -0.47556573\n -0.2923007 -0.75684157 -0.49670982 -0.60379854 -0.70936992 -0.61457569\n -0.60729495 -0.66313244 -0.70035866 -0.48227812 -0.66584542 -0.71583734\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7fec720ea0a9a3ecd7f5c230ef857fe84fa9b3a
1,497
ipynb
Jupyter Notebook
combine-monitoring-and-reporting-mprns-and-gprns/README.ipynb
Rebeccacachia/projects
4f8bebce1c0b4ba2d032aba658b38c3de9effb32
[ "MIT" ]
null
null
null
combine-monitoring-and-reporting-mprns-and-gprns/README.ipynb
Rebeccacachia/projects
4f8bebce1c0b4ba2d032aba658b38c3de9effb32
[ "MIT" ]
null
null
null
combine-monitoring-and-reporting-mprns-and-gprns/README.ipynb
Rebeccacachia/projects
4f8bebce1c0b4ba2d032aba658b38c3de9effb32
[ "MIT" ]
null
null
null
21.084507
251
0.515698
[ [ [ "# Monitoring & Reporting\n\n## What `pipeline.py` is doing:\n\n- Load:\n - Monitoring & Reporting Data\n- Link MPRNs to GPRNs\n\n## Caveat\n\nThe M&R data is publicly available; however, the user still needs to [create their own s3 credentials](https://aws.amazon.com/s3/) to fully reproduce this pipeline (*i.e. they need an AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY*)\n\n## Setup\n\n| ❗ Skip if running on Binder |\n|-------------------------------|\n\nVia [conda](https://github.com/conda-forge/miniforge):", "_____no_output_____" ] ], [ [ "conda env create --file environment.yml\nconda activate hdd", "_____no_output_____" ] ], [ [ "## Run", "_____no_output_____" ] ], [ [ "python pipeline.py", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7fedfcf1bd831ba7d8c4becd7c6afe697acfb22
9,005
ipynb
Jupyter Notebook
day4/06. Astropy - Time.ipynb
ubutnux/bosscha-python-workshop-2022
6a146bae4a4d3d5f1ba797140ec3271a884167f3
[ "Unlicense" ]
null
null
null
day4/06. Astropy - Time.ipynb
ubutnux/bosscha-python-workshop-2022
6a146bae4a4d3d5f1ba797140ec3271a884167f3
[ "Unlicense" ]
null
null
null
day4/06. Astropy - Time.ipynb
ubutnux/bosscha-python-workshop-2022
6a146bae4a4d3d5f1ba797140ec3271a884167f3
[ "Unlicense" ]
null
null
null
26.961078
453
0.594336
[ [ [ "# Time and Dates", "_____no_output_____" ], [ "The `astropy.time` package provides functionality for manipulating times and dates. Specific emphasis is placed on supporting time scales (e.g. UTC, TAI, UT1, TDB) and time representations (e.g. JD, MJD, ISO 8601) that are used in astronomy and required to calculate, e.g., sidereal times and barycentric corrections. It uses Cython to wrap the C language ERFA time and calendar routines, using a fast and memory efficient vectorization scheme.\n\nAll time manipulations and arithmetic operations are done internally using two 64-bit floats to represent time. Floating point algorithms are used so that the Time object maintains sub-nanosecond precision over times spanning the age of the universe.", "_____no_output_____" ], [ "The basic way to use `astropy.time` is to create a `Time` object by supplying one or more input time values as well as the time format and time scale of those values. The input time(s) can either be a single scalar like `\"2010-01-01 00:00:00\"` or a list or a numpy array of values as shown below. In general any output values have the same shape (scalar or array) as the input.", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom astropy.time import Time", "_____no_output_____" ], [ "times = ['1999-01-01T00:00:00.123456789', '2010-01-01T00:00:00']\nt = Time(times, format='isot', scale='utc')\nt", "_____no_output_____" ], [ "t[1]", "_____no_output_____" ] ], [ [ "The `format` argument specifies how to interpret the input values, e.g. ISO or JD or Unix time. The `scale` argument specifies the time scale for the values, e.g. UTC or TT or UT1. The `scale` argument is optional and defaults to UTC except for Time from epoch formats. We could have written the above as:", "_____no_output_____" ] ], [ [ "t = Time(times, format='isot')", "_____no_output_____" ] ], [ [ "When the format of the input can be unambiguously determined then the format argument is not required, so we can simplify even further:", "_____no_output_____" ] ], [ [ "t = Time(times)\nt", "_____no_output_____" ] ], [ [ "Now let’s get the representation of these times in the JD and MJD formats by requesting the corresponding Time attributes:", "_____no_output_____" ] ], [ [ "t.jd", "_____no_output_____" ], [ "t.mjd", "_____no_output_____" ] ], [ [ "The default representation can be changed by setting the `format` attribute:\n\n", "_____no_output_____" ] ], [ [ "t.format = 'fits'\nt", "_____no_output_____" ], [ "t.format = 'isot'\nt", "_____no_output_____" ] ], [ [ "We can also convert to a different time scale, for instance from UTC to TT. This uses the same attribute mechanism as above but now returns a new `Time` object:", "_____no_output_____" ] ], [ [ "t2 = t.tt\nt2", "_____no_output_____" ], [ "t2.jd", "_____no_output_____" ] ], [ [ "Note that both the ISO (ISOT) and JD representations of t2 are different than for t because they are expressed relative to the TT time scale. Of course, from the numbers or strings one could not tell; one format in which this information is kept is the `fits` format:", "_____no_output_____" ] ], [ [ "print(t2.fits)", "_____no_output_____" ] ], [ [ "## Sidereal Time\nApparent or mean sidereal time can be calculated using `sidereal_time()`. The method returns a `Longitude` with units of hourangle, which by default is for the longitude corresponding to the location with which the `Time` object is initialized. 
Like the scale transformations, ERFA C-library routines are used under the hood, which support calculations following different IAU resolutions. Sample usage:", "_____no_output_____" ] ], [ [ "t = Time('2006-01-15 21:24:37.5', scale='utc', location=('120d', '45d'))\nt.sidereal_time('mean') ", "_____no_output_____" ], [ "t.sidereal_time('apparent') ", "_____no_output_____" ] ], [ [ "## Time Deltas\n\nSimple time arithmetic is supported using the TimeDelta class. The following operations are available:\n\n* Create a TimeDelta explicitly by instantiating a class object\n* Create a TimeDelta by subtracting two Times\n* Add a TimeDelta to a Time object to get a new Time\n* Subtract a TimeDelta from a Time object to get a new Time\n* Add two TimeDelta objects to get a new TimeDelta\n* Negate a TimeDelta or take its absolute value\n* Multiply or divide a TimeDelta by a constant or array\n* Convert TimeDelta objects to and from time-like Quantities\n\nThe `TimeDelta` class is derived from the `Time` class and shares many of its properties. One difference is that the time scale has to be one for which one day is exactly 86400 seconds. Hence, the scale cannot be UTC.", "_____no_output_____" ] ], [ [ "t1 = Time('2010-01-01 00:00:00')\nt2 = Time('2010-02-01 00:00:00')\ndt = t2 - t1 # Difference between two Times\ndt", "_____no_output_____" ], [ "dt.sec", "_____no_output_____" ], [ "from astropy.time import TimeDelta\ndt2 = TimeDelta(50.0, format='sec')\nt3 = t2 + dt2 # Add a TimeDelta to a Time\nt3", "_____no_output_____" ] ], [ [ "## Timezones\nWhen a Time object is constructed from a timezone-aware `datetime`, no timezone information is saved in the `Time` object. However, `Time` objects can be converted to timezone-aware datetime objects:", "_____no_output_____" ] ], [ [ "from datetime import datetime\nfrom astropy.time import Time, TimezoneInfo\nimport astropy.units as u", "_____no_output_____" ], [ "utc_plus_one_hour = TimezoneInfo(utc_offset=1*u.hour)\ndt_aware = datetime(2000, 1, 1, 0, 0, 0, tzinfo=utc_plus_one_hour)\nt = Time(dt_aware) # Loses timezone info, converts to UTC\nprint(t) # will return UTC", "_____no_output_____" ], [ "print(t.to_datetime(timezone=utc_plus_one_hour)) # to timezone-aware datetime", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e7fee0d42414995e634e4114ff1fdcbc5d927d55
369,661
ipynb
Jupyter Notebook
Model backlog/Deep Learning/NasNetLarge/[43th] - Fine-tune - NasNetLarge - head.ipynb
dimitreOliveira/iMet-Collection-2019-FGVC6
4f22485b9ec5ef3696d6532185a5ea90f9cf7489
[ "MIT" ]
1
2021-01-21T01:20:27.000Z
2021-01-21T01:20:27.000Z
Model backlog/Deep Learning/NasNetLarge/[43th] - Fine-tune - NasNetLarge - head.ipynb
dimitreOliveira/iMet-Collection-2019-FGVC6
4f22485b9ec5ef3696d6532185a5ea90f9cf7489
[ "MIT" ]
null
null
null
Model backlog/Deep Learning/NasNetLarge/[43th] - Fine-tune - NasNetLarge - head.ipynb
dimitreOliveira/iMet-Collection-2019-FGVC6
4f22485b9ec5ef3696d6532185a5ea90f9cf7489
[ "MIT" ]
1
2020-11-18T23:44:21.000Z
2020-11-18T23:44:21.000Z
118.48109
72,140
0.739626
[ [ [ "import os\nimport cv2\nimport math\nimport warnings\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import confusion_matrix, fbeta_score\nfrom keras import optimizers\nfrom keras import backend as K\nfrom keras.models import Sequential, Model\nfrom keras import applications\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras.callbacks import LearningRateScheduler, EarlyStopping\nfrom keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, Activation, BatchNormalization, GlobalAveragePooling2D, Input\n\n# Set seeds to make the experiment more reproducible.\nfrom tensorflow import set_random_seed\nfrom numpy.random import seed\nset_random_seed(0)\nseed(0)\n\n%matplotlib inline\nsns.set(style=\"whitegrid\")\nwarnings.filterwarnings(\"ignore\")", "Using TensorFlow backend.\n" ], [ "train = pd.read_csv('../input/imet-2019-fgvc6/train.csv')\nlabels = pd.read_csv('../input/imet-2019-fgvc6/labels.csv')\ntest = pd.read_csv('../input/imet-2019-fgvc6/sample_submission.csv')\n\ntrain[\"attribute_ids\"] = train[\"attribute_ids\"].apply(lambda x:x.split(\" \"))\ntrain[\"id\"] = train[\"id\"].apply(lambda x: x + \".png\")\ntest[\"id\"] = test[\"id\"].apply(lambda x: x + \".png\")\n\nprint('Number of train samples: ', train.shape[0])\nprint('Number of test samples: ', test.shape[0])\nprint('Number of labels: ', labels.shape[0])\ndisplay(train.head())\ndisplay(labels.head())", "Number of train samples: 109237\nNumber of test samples: 7443\nNumber of labels: 1103\n" ] ], [ [ "### Model parameters", "_____no_output_____" ] ], [ [ "# Model parameters\nBATCH_SIZE = 64\nEPOCHS = 30\nLEARNING_RATE = 0.0001\nHEIGHT = 64\nWIDTH = 64\nCANAL = 3\nN_CLASSES = labels.shape[0]\nES_PATIENCE = 5\nDECAY_DROP = 0.5\nDECAY_EPOCHS = 10\nclasses = list(map(str, range(N_CLASSES)))", "_____no_output_____" ], [ "def f2_score_thr(threshold=0.5):\n def f2_score(y_true, y_pred):\n beta = 2\n y_pred = K.cast(K.greater(K.clip(y_pred, 0, 1), threshold), K.floatx())\n\n true_positives = K.sum(K.clip(y_true * y_pred, 0, 1), axis=1)\n predicted_positives = K.sum(K.clip(y_pred, 0, 1), axis=1)\n possible_positives = K.sum(K.clip(y_true, 0, 1), axis=1)\n\n precision = true_positives / (predicted_positives + K.epsilon())\n recall = true_positives / (possible_positives + K.epsilon())\n\n return K.mean(((1+beta**2)*precision*recall) / ((beta**2)*precision+recall+K.epsilon()))\n return f2_score\n\n\ndef custom_f2(y_true, y_pred):\n beta = 2\n\n tp = np.sum((y_true == 1) & (y_pred == 1))\n tn = np.sum((y_true == 0) & (y_pred == 0))\n fp = np.sum((y_true == 0) & (y_pred == 1))\n fn = np.sum((y_true == 1) & (y_pred == 0))\n \n p = tp / (tp + fp + K.epsilon())\n r = tp / (tp + fn + K.epsilon())\n\n f2 = (1+beta**2)*p*r / (p*beta**2 + r + 1e-15)\n\n return f2\n\ndef step_decay(epoch):\n initial_lrate = LEARNING_RATE\n drop = DECAY_DROP\n epochs_drop = DECAY_EPOCHS\n lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))\n \n return lrate", "_____no_output_____" ], [ "train_datagen=ImageDataGenerator(rescale=1./255, validation_split=0.25)\n\ntrain_generator=train_datagen.flow_from_dataframe(\n dataframe=train,\n directory=\"../input/imet-2019-fgvc6/train\",\n x_col=\"id\",\n y_col=\"attribute_ids\",\n batch_size=BATCH_SIZE,\n shuffle=True,\n class_mode=\"categorical\",\n classes=classes,\n target_size=(HEIGHT, WIDTH),\n 
subset='training')\n\nvalid_generator=train_datagen.flow_from_dataframe(\n dataframe=train,\n directory=\"../input/imet-2019-fgvc6/train\",\n x_col=\"id\",\n y_col=\"attribute_ids\",\n batch_size=BATCH_SIZE,\n shuffle=True,\n class_mode=\"categorical\", \n classes=classes,\n target_size=(HEIGHT, WIDTH),\n subset='validation')\n\ntest_datagen = ImageDataGenerator(rescale=1./255)\n\ntest_generator = test_datagen.flow_from_dataframe( \n dataframe=test,\n directory = \"../input/imet-2019-fgvc6/test\", \n x_col=\"id\",\n target_size=(HEIGHT, WIDTH),\n batch_size=1,\n shuffle=False,\n class_mode=None)", "Found 81928 images belonging to 1103 classes.\nFound 27309 images belonging to 1103 classes.\nFound 7443 images.\n" ] ], [ [ "### Model", "_____no_output_____" ] ], [ [ "print(os.listdir(\"../input/nasnetlarge\"))", "['NASNet-large-no-top.h5']\n" ], [ "def create_model(input_shape, n_out):\n input_tensor = Input(shape=input_shape)\n base_model = applications.NASNetLarge(weights=None, include_top=False,\n input_tensor=input_tensor)\n base_model.load_weights('../input/nasnetlarge/NASNet-large-no-top.h5')\n\n x = GlobalAveragePooling2D()(base_model.output)\n x = Dropout(0.5)(x)\n x = Dense(1024, activation='relu')(x)\n x = Dropout(0.5)(x)\n final_output = Dense(n_out, activation='sigmoid', name='final_output')(x)\n model = Model(input_tensor, final_output)\n \n return model", "_____no_output_____" ], [ "# warm up model\n# first: train only the top layers (which were randomly initialized)\nmodel = create_model(input_shape=(HEIGHT, WIDTH, CANAL), n_out=N_CLASSES)\n\nfor layer in model.layers:\n layer.trainable = False\n\nfor i in range(-5,0):\n model.layers[i].trainable = True\n \noptimizer = optimizers.Adam(lr=LEARNING_RATE)\nmetrics = [\"accuracy\", \"categorical_accuracy\"]\nes = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=ES_PATIENCE)\ncallbacks = [es]\nmodel.compile(optimizer=optimizer, loss=\"binary_crossentropy\", metrics=metrics)\nmodel.summary()", "WARNING:tensorflow:From /opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.\nWARNING:tensorflow:From /opt/conda/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `rate` instead of `keep_prob`. 
Rate should be set to `rate = 1 - keep_prob`.\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 64, 64, 3) 0 \n__________________________________________________________________________________________________\nstem_conv1 (Conv2D) (None, 31, 31, 96) 2592 input_1[0][0] \n__________________________________________________________________________________________________\nstem_bn1 (BatchNormalization) (None, 31, 31, 96) 384 stem_conv1[0][0] \n__________________________________________________________________________________________________\nactivation_1 (Activation) (None, 31, 31, 96) 0 stem_bn1[0][0] \n__________________________________________________________________________________________________\nreduction_conv_1_stem_1 (Conv2D (None, 31, 31, 42) 4032 activation_1[0][0] \n__________________________________________________________________________________________________\nreduction_bn_1_stem_1 (BatchNor (None, 31, 31, 42) 168 reduction_conv_1_stem_1[0][0] \n__________________________________________________________________________________________________\nactivation_2 (Activation) (None, 31, 31, 42) 0 reduction_bn_1_stem_1[0][0] \n__________________________________________________________________________________________________\nactivation_4 (Activation) (None, 31, 31, 96) 0 stem_bn1[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_pad_reduction_ (None, 35, 35, 42) 0 activation_2[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_pad_reduction_ (None, 37, 37, 96) 0 activation_4[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_reduction_left (None, 16, 16, 42) 2814 separable_conv_1_pad_reduction_le\n__________________________________________________________________________________________________\nseparable_conv_1_reduction_righ (None, 16, 16, 42) 8736 separable_conv_1_pad_reduction_ri\n__________________________________________________________________________________________________\nseparable_conv_1_bn_reduction_l (None, 16, 16, 42) 168 separable_conv_1_reduction_left1_\n__________________________________________________________________________________________________\nseparable_conv_1_bn_reduction_r (None, 16, 16, 42) 168 separable_conv_1_reduction_right1\n__________________________________________________________________________________________________\nactivation_3 (Activation) (None, 16, 16, 42) 0 separable_conv_1_bn_reduction_lef\n__________________________________________________________________________________________________\nactivation_5 (Activation) (None, 16, 16, 42) 0 separable_conv_1_bn_reduction_rig\n__________________________________________________________________________________________________\nseparable_conv_2_reduction_left (None, 16, 16, 42) 2814 activation_3[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_reduction_righ (None, 16, 16, 42) 3822 activation_5[0][0] \n__________________________________________________________________________________________________\nactivation_6 (Activation) (None, 31, 31, 96) 0 stem_bn1[0][0] 
\n__________________________________________________________________________________________________\nseparable_conv_2_bn_reduction_l (None, 16, 16, 42) 168 separable_conv_2_reduction_left1_\n__________________________________________________________________________________________________\nseparable_conv_2_bn_reduction_r (None, 16, 16, 42) 168 separable_conv_2_reduction_right1\n__________________________________________________________________________________________________\nseparable_conv_1_pad_reduction_ (None, 37, 37, 96) 0 activation_6[0][0] \n__________________________________________________________________________________________________\nactivation_8 (Activation) (None, 31, 31, 96) 0 stem_bn1[0][0] \n__________________________________________________________________________________________________\nreduction_add_1_stem_1 (Add) (None, 16, 16, 42) 0 separable_conv_2_bn_reduction_lef\n separable_conv_2_bn_reduction_rig\n__________________________________________________________________________________________________\nseparable_conv_1_reduction_righ (None, 16, 16, 42) 8736 separable_conv_1_pad_reduction_ri\n__________________________________________________________________________________________________\nseparable_conv_1_pad_reduction_ (None, 35, 35, 96) 0 activation_8[0][0] \n__________________________________________________________________________________________________\nactivation_10 (Activation) (None, 16, 16, 42) 0 reduction_add_1_stem_1[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_bn_reduction_r (None, 16, 16, 42) 168 separable_conv_1_reduction_right2\n__________________________________________________________________________________________________\nseparable_conv_1_reduction_righ (None, 16, 16, 42) 6432 separable_conv_1_pad_reduction_ri\n__________________________________________________________________________________________________\nseparable_conv_1_reduction_left (None, 16, 16, 42) 2142 activation_10[0][0] \n__________________________________________________________________________________________________\nactivation_7 (Activation) (None, 16, 16, 42) 0 separable_conv_1_bn_reduction_rig\n__________________________________________________________________________________________________\nseparable_conv_1_bn_reduction_r (None, 16, 16, 42) 168 separable_conv_1_reduction_right3\n__________________________________________________________________________________________________\nseparable_conv_1_bn_reduction_l (None, 16, 16, 42) 168 separable_conv_1_reduction_left4_\n__________________________________________________________________________________________________\nreduction_pad_1_stem_1 (ZeroPad (None, 33, 33, 42) 0 reduction_bn_1_stem_1[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_reduction_righ (None, 16, 16, 42) 3822 activation_7[0][0] \n__________________________________________________________________________________________________\nactivation_9 (Activation) (None, 16, 16, 42) 0 separable_conv_1_bn_reduction_rig\n__________________________________________________________________________________________________\nactivation_11 (Activation) (None, 16, 16, 42) 0 separable_conv_1_bn_reduction_lef\n__________________________________________________________________________________________________\nreduction_left2_stem_1 (MaxPool (None, 16, 16, 42) 0 reduction_pad_1_stem_1[0][0] 
\n__________________________________________________________________________________________________\nseparable_conv_2_bn_reduction_r (None, 16, 16, 42) 168 separable_conv_2_reduction_right2\n__________________________________________________________________________________________________\nseparable_conv_2_reduction_righ (None, 16, 16, 42) 2814 activation_9[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_reduction_left (None, 16, 16, 42) 2142 activation_11[0][0] \n__________________________________________________________________________________________________\nadjust_relu_1_stem_2 (Activatio (None, 31, 31, 96) 0 stem_bn1[0][0] \n__________________________________________________________________________________________________\nreduction_add_2_stem_1 (Add) (None, 16, 16, 42) 0 reduction_left2_stem_1[0][0] \n separable_conv_2_bn_reduction_rig\n__________________________________________________________________________________________________\nreduction_left3_stem_1 (Average (None, 16, 16, 42) 0 reduction_pad_1_stem_1[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_reduction_r (None, 16, 16, 42) 168 separable_conv_2_reduction_right3\n__________________________________________________________________________________________________\nreduction_left4_stem_1 (Average (None, 16, 16, 42) 0 reduction_add_1_stem_1[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_reduction_l (None, 16, 16, 42) 168 separable_conv_2_reduction_left4_\n__________________________________________________________________________________________________\nreduction_right5_stem_1 (MaxPoo (None, 16, 16, 42) 0 reduction_pad_1_stem_1[0][0] \n__________________________________________________________________________________________________\nzero_padding2d_1 (ZeroPadding2D (None, 32, 32, 96) 0 adjust_relu_1_stem_2[0][0] \n__________________________________________________________________________________________________\nreduction_add3_stem_1 (Add) (None, 16, 16, 42) 0 reduction_left3_stem_1[0][0] \n separable_conv_2_bn_reduction_rig\n__________________________________________________________________________________________________\nadd_1 (Add) (None, 16, 16, 42) 0 reduction_add_2_stem_1[0][0] \n reduction_left4_stem_1[0][0] \n__________________________________________________________________________________________________\nreduction_add4_stem_1 (Add) (None, 16, 16, 42) 0 separable_conv_2_bn_reduction_lef\n reduction_right5_stem_1[0][0] \n__________________________________________________________________________________________________\ncropping2d_1 (Cropping2D) (None, 31, 31, 96) 0 zero_padding2d_1[0][0] \n__________________________________________________________________________________________________\nreduction_concat_stem_1 (Concat (None, 16, 16, 168) 0 reduction_add_2_stem_1[0][0] \n reduction_add3_stem_1[0][0] \n add_1[0][0] \n reduction_add4_stem_1[0][0] \n__________________________________________________________________________________________________\nadjust_avg_pool_1_stem_2 (Avera (None, 16, 16, 96) 0 adjust_relu_1_stem_2[0][0] \n__________________________________________________________________________________________________\nadjust_avg_pool_2_stem_2 (Avera (None, 16, 16, 96) 0 cropping2d_1[0][0] 
\n__________________________________________________________________________________________________\nactivation_12 (Activation) (None, 16, 16, 168) 0 reduction_concat_stem_1[0][0] \n__________________________________________________________________________________________________\nadjust_conv_1_stem_2 (Conv2D) (None, 16, 16, 42) 4032 adjust_avg_pool_1_stem_2[0][0] \n__________________________________________________________________________________________________\nadjust_conv_2_stem_2 (Conv2D) (None, 16, 16, 42) 4032 adjust_avg_pool_2_stem_2[0][0] \n__________________________________________________________________________________________________\nreduction_conv_1_stem_2 (Conv2D (None, 16, 16, 84) 14112 activation_12[0][0] \n__________________________________________________________________________________________________\nconcatenate_1 (Concatenate) (None, 16, 16, 84) 0 adjust_conv_1_stem_2[0][0] \n adjust_conv_2_stem_2[0][0] \n__________________________________________________________________________________________________\nreduction_bn_1_stem_2 (BatchNor (None, 16, 16, 84) 336 reduction_conv_1_stem_2[0][0] \n__________________________________________________________________________________________________\nadjust_bn_stem_2 (BatchNormaliz (None, 16, 16, 84) 336 concatenate_1[0][0] \n__________________________________________________________________________________________________\nactivation_13 (Activation) (None, 16, 16, 84) 0 reduction_bn_1_stem_2[0][0] \n__________________________________________________________________________________________________\nactivation_15 (Activation) (None, 16, 16, 84) 0 adjust_bn_stem_2[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_pad_reduction_ (None, 19, 19, 84) 0 activation_13[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_pad_reduction_ (None, 21, 21, 84) 0 activation_15[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_reduction_left (None, 8, 8, 84) 9156 separable_conv_1_pad_reduction_le\n__________________________________________________________________________________________________\nseparable_conv_1_reduction_righ (None, 8, 8, 84) 11172 separable_conv_1_pad_reduction_ri\n__________________________________________________________________________________________________\nseparable_conv_1_bn_reduction_l (None, 8, 8, 84) 336 separable_conv_1_reduction_left1_\n__________________________________________________________________________________________________\nseparable_conv_1_bn_reduction_r (None, 8, 8, 84) 336 separable_conv_1_reduction_right1\n__________________________________________________________________________________________________\nactivation_14 (Activation) (None, 8, 8, 84) 0 separable_conv_1_bn_reduction_lef\n__________________________________________________________________________________________________\nactivation_16 (Activation) (None, 8, 8, 84) 0 separable_conv_1_bn_reduction_rig\n__________________________________________________________________________________________________\nseparable_conv_2_reduction_left (None, 8, 8, 84) 9156 activation_14[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_reduction_righ (None, 8, 8, 84) 11172 activation_16[0][0] 
\n__________________________________________________________________________________________________\nactivation_17 (Activation) (None, 16, 16, 84) 0 adjust_bn_stem_2[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_reduction_l (None, 8, 8, 84) 336 separable_conv_2_reduction_left1_\n__________________________________________________________________________________________________\nseparable_conv_2_bn_reduction_r (None, 8, 8, 84) 336 separable_conv_2_reduction_right1\n__________________________________________________________________________________________________\nseparable_conv_1_pad_reduction_ (None, 21, 21, 84) 0 activation_17[0][0] \n__________________________________________________________________________________________________\nactivation_19 (Activation) (None, 16, 16, 84) 0 adjust_bn_stem_2[0][0] \n__________________________________________________________________________________________________\nreduction_add_1_stem_2 (Add) (None, 8, 8, 84) 0 separable_conv_2_bn_reduction_lef\n separable_conv_2_bn_reduction_rig\n__________________________________________________________________________________________________\nseparable_conv_1_reduction_righ (None, 8, 8, 84) 11172 separable_conv_1_pad_reduction_ri\n__________________________________________________________________________________________________\nseparable_conv_1_pad_reduction_ (None, 19, 19, 84) 0 activation_19[0][0] \n__________________________________________________________________________________________________\nactivation_21 (Activation) (None, 8, 8, 84) 0 reduction_add_1_stem_2[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_bn_reduction_r (None, 8, 8, 84) 336 separable_conv_1_reduction_right2\n__________________________________________________________________________________________________\nseparable_conv_1_reduction_righ (None, 8, 8, 84) 9156 separable_conv_1_pad_reduction_ri\n__________________________________________________________________________________________________\nseparable_conv_1_reduction_left (None, 8, 8, 84) 7812 activation_21[0][0] \n__________________________________________________________________________________________________\nactivation_18 (Activation) (None, 8, 8, 84) 0 separable_conv_1_bn_reduction_rig\n__________________________________________________________________________________________________\nseparable_conv_1_bn_reduction_r (None, 8, 8, 84) 336 separable_conv_1_reduction_right3\n__________________________________________________________________________________________________\nseparable_conv_1_bn_reduction_l (None, 8, 8, 84) 336 separable_conv_1_reduction_left4_\n__________________________________________________________________________________________________\nreduction_pad_1_stem_2 (ZeroPad (None, 17, 17, 84) 0 reduction_bn_1_stem_2[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_reduction_righ (None, 8, 8, 84) 11172 activation_18[0][0] \n__________________________________________________________________________________________________\nactivation_20 (Activation) (None, 8, 8, 84) 0 separable_conv_1_bn_reduction_rig\n__________________________________________________________________________________________________\nactivation_22 (Activation) (None, 8, 8, 84) 0 
separable_conv_1_bn_reduction_lef\n__________________________________________________________________________________________________\nreduction_left2_stem_2 (MaxPool (None, 8, 8, 84) 0 reduction_pad_1_stem_2[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_reduction_r (None, 8, 8, 84) 336 separable_conv_2_reduction_right2\n__________________________________________________________________________________________________\nseparable_conv_2_reduction_righ (None, 8, 8, 84) 9156 activation_20[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_reduction_left (None, 8, 8, 84) 7812 activation_22[0][0] \n__________________________________________________________________________________________________\nadjust_relu_1_0 (Activation) (None, 16, 16, 168) 0 reduction_concat_stem_1[0][0] \n__________________________________________________________________________________________________\nreduction_add_2_stem_2 (Add) (None, 8, 8, 84) 0 reduction_left2_stem_2[0][0] \n separable_conv_2_bn_reduction_rig\n__________________________________________________________________________________________________\nreduction_left3_stem_2 (Average (None, 8, 8, 84) 0 reduction_pad_1_stem_2[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_reduction_r (None, 8, 8, 84) 336 separable_conv_2_reduction_right3\n__________________________________________________________________________________________________\nreduction_left4_stem_2 (Average (None, 8, 8, 84) 0 reduction_add_1_stem_2[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_reduction_l (None, 8, 8, 84) 336 separable_conv_2_reduction_left4_\n__________________________________________________________________________________________________\nreduction_right5_stem_2 (MaxPoo (None, 8, 8, 84) 0 reduction_pad_1_stem_2[0][0] \n__________________________________________________________________________________________________\nzero_padding2d_2 (ZeroPadding2D (None, 17, 17, 168) 0 adjust_relu_1_0[0][0] \n__________________________________________________________________________________________________\nreduction_add3_stem_2 (Add) (None, 8, 8, 84) 0 reduction_left3_stem_2[0][0] \n separable_conv_2_bn_reduction_rig\n__________________________________________________________________________________________________\nadd_2 (Add) (None, 8, 8, 84) 0 reduction_add_2_stem_2[0][0] \n reduction_left4_stem_2[0][0] \n__________________________________________________________________________________________________\nreduction_add4_stem_2 (Add) (None, 8, 8, 84) 0 separable_conv_2_bn_reduction_lef\n reduction_right5_stem_2[0][0] \n__________________________________________________________________________________________________\ncropping2d_2 (Cropping2D) (None, 16, 16, 168) 0 zero_padding2d_2[0][0] \n__________________________________________________________________________________________________\nreduction_concat_stem_2 (Concat (None, 8, 8, 336) 0 reduction_add_2_stem_2[0][0] \n reduction_add3_stem_2[0][0] \n add_2[0][0] \n reduction_add4_stem_2[0][0] \n__________________________________________________________________________________________________\nadjust_avg_pool_1_0 (AveragePoo (None, 8, 8, 168) 0 adjust_relu_1_0[0][0] 
\n__________________________________________________________________________________________________\nadjust_avg_pool_2_0 (AveragePoo (None, 8, 8, 168) 0 cropping2d_2[0][0] \n__________________________________________________________________________________________________\nadjust_conv_1_0 (Conv2D) (None, 8, 8, 84) 14112 adjust_avg_pool_1_0[0][0] \n__________________________________________________________________________________________________\nadjust_conv_2_0 (Conv2D) (None, 8, 8, 84) 14112 adjust_avg_pool_2_0[0][0] \n__________________________________________________________________________________________________\nactivation_23 (Activation) (None, 8, 8, 336) 0 reduction_concat_stem_2[0][0] \n__________________________________________________________________________________________________\nconcatenate_2 (Concatenate) (None, 8, 8, 168) 0 adjust_conv_1_0[0][0] \n adjust_conv_2_0[0][0] \n__________________________________________________________________________________________________\nnormal_conv_1_0 (Conv2D) (None, 8, 8, 168) 56448 activation_23[0][0] \n__________________________________________________________________________________________________\nadjust_bn_0 (BatchNormalization (None, 8, 8, 168) 672 concatenate_2[0][0] \n__________________________________________________________________________________________________\nnormal_bn_1_0 (BatchNormalizati (None, 8, 8, 168) 672 normal_conv_1_0[0][0] \n__________________________________________________________________________________________________\nactivation_24 (Activation) (None, 8, 8, 168) 0 normal_bn_1_0[0][0] \n__________________________________________________________________________________________________\nactivation_26 (Activation) (None, 8, 8, 168) 0 adjust_bn_0[0][0] \n__________________________________________________________________________________________________\nactivation_28 (Activation) (None, 8, 8, 168) 0 adjust_bn_0[0][0] \n__________________________________________________________________________________________________\nactivation_30 (Activation) (None, 8, 8, 168) 0 adjust_bn_0[0][0] \n__________________________________________________________________________________________________\nactivation_32 (Activation) (None, 8, 8, 168) 0 normal_bn_1_0[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left1_0 (None, 8, 8, 168) 32424 activation_24[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right1_ (None, 8, 8, 168) 29736 activation_26[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left2_0 (None, 8, 8, 168) 32424 activation_28[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right2_ (None, 8, 8, 168) 29736 activation_30[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left5_0 (None, 8, 8, 168) 29736 activation_32[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 8, 8, 168) 672 separable_conv_1_normal_left1_0[0\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 8, 8, 168) 672 
separable_conv_1_normal_right1_0[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 8, 8, 168) 672 separable_conv_1_normal_left2_0[0\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 8, 8, 168) 672 separable_conv_1_normal_right2_0[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 8, 8, 168) 672 separable_conv_1_normal_left5_0[0\n__________________________________________________________________________________________________\nactivation_25 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_left1_\n__________________________________________________________________________________________________\nactivation_27 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_right1\n__________________________________________________________________________________________________\nactivation_29 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_left2_\n__________________________________________________________________________________________________\nactivation_31 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_right2\n__________________________________________________________________________________________________\nactivation_33 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_left5_\n__________________________________________________________________________________________________\nseparable_conv_2_normal_left1_0 (None, 8, 8, 168) 32424 activation_25[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right1_ (None, 8, 8, 168) 29736 activation_27[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left2_0 (None, 8, 8, 168) 32424 activation_29[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right2_ (None, 8, 8, 168) 29736 activation_31[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left5_0 (None, 8, 8, 168) 29736 activation_33[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 8, 8, 168) 672 separable_conv_2_normal_left1_0[0\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 8, 8, 168) 672 separable_conv_2_normal_right1_0[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 8, 8, 168) 672 separable_conv_2_normal_left2_0[0\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 8, 8, 168) 672 separable_conv_2_normal_right2_0[\n__________________________________________________________________________________________________\nnormal_left3_0 (AveragePooling2 (None, 8, 8, 168) 0 normal_bn_1_0[0][0] \n__________________________________________________________________________________________________\nnormal_left4_0 (AveragePooling2 (None, 8, 8, 168) 0 adjust_bn_0[0][0] 
\n__________________________________________________________________________________________________\nnormal_right4_0 (AveragePooling (None, 8, 8, 168) 0 adjust_bn_0[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 8, 8, 168) 672 separable_conv_2_normal_left5_0[0\n__________________________________________________________________________________________________\nnormal_add_1_0 (Add) (None, 8, 8, 168) 0 separable_conv_2_bn_normal_left1_\n separable_conv_2_bn_normal_right1\n__________________________________________________________________________________________________\nnormal_add_2_0 (Add) (None, 8, 8, 168) 0 separable_conv_2_bn_normal_left2_\n separable_conv_2_bn_normal_right2\n__________________________________________________________________________________________________\nnormal_add_3_0 (Add) (None, 8, 8, 168) 0 normal_left3_0[0][0] \n adjust_bn_0[0][0] \n__________________________________________________________________________________________________\nnormal_add_4_0 (Add) (None, 8, 8, 168) 0 normal_left4_0[0][0] \n normal_right4_0[0][0] \n__________________________________________________________________________________________________\nnormal_add_5_0 (Add) (None, 8, 8, 168) 0 separable_conv_2_bn_normal_left5_\n normal_bn_1_0[0][0] \n__________________________________________________________________________________________________\nnormal_concat_0 (Concatenate) (None, 8, 8, 1008) 0 adjust_bn_0[0][0] \n normal_add_1_0[0][0] \n normal_add_2_0[0][0] \n normal_add_3_0[0][0] \n normal_add_4_0[0][0] \n normal_add_5_0[0][0] \n__________________________________________________________________________________________________\nactivation_34 (Activation) (None, 8, 8, 336) 0 reduction_concat_stem_2[0][0] \n__________________________________________________________________________________________________\nactivation_35 (Activation) (None, 8, 8, 1008) 0 normal_concat_0[0][0] \n__________________________________________________________________________________________________\nadjust_conv_projection_1 (Conv2 (None, 8, 8, 168) 56448 activation_34[0][0] \n__________________________________________________________________________________________________\nnormal_conv_1_1 (Conv2D) (None, 8, 8, 168) 169344 activation_35[0][0] \n__________________________________________________________________________________________________\nadjust_bn_1 (BatchNormalization (None, 8, 8, 168) 672 adjust_conv_projection_1[0][0] \n__________________________________________________________________________________________________\nnormal_bn_1_1 (BatchNormalizati (None, 8, 8, 168) 672 normal_conv_1_1[0][0] \n__________________________________________________________________________________________________\nactivation_36 (Activation) (None, 8, 8, 168) 0 normal_bn_1_1[0][0] \n__________________________________________________________________________________________________\nactivation_38 (Activation) (None, 8, 8, 168) 0 adjust_bn_1[0][0] \n__________________________________________________________________________________________________\nactivation_40 (Activation) (None, 8, 8, 168) 0 adjust_bn_1[0][0] \n__________________________________________________________________________________________________\nactivation_42 (Activation) (None, 8, 8, 168) 0 adjust_bn_1[0][0] \n__________________________________________________________________________________________________\nactivation_44 (Activation) (None, 8, 8, 168) 0 normal_bn_1_1[0][0] 
\n__________________________________________________________________________________________________\nseparable_conv_1_normal_left1_1 (None, 8, 8, 168) 32424 activation_36[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right1_ (None, 8, 8, 168) 29736 activation_38[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left2_1 (None, 8, 8, 168) 32424 activation_40[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right2_ (None, 8, 8, 168) 29736 activation_42[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left5_1 (None, 8, 8, 168) 29736 activation_44[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 8, 8, 168) 672 separable_conv_1_normal_left1_1[0\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 8, 8, 168) 672 separable_conv_1_normal_right1_1[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 8, 8, 168) 672 separable_conv_1_normal_left2_1[0\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 8, 8, 168) 672 separable_conv_1_normal_right2_1[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 8, 8, 168) 672 separable_conv_1_normal_left5_1[0\n__________________________________________________________________________________________________\nactivation_37 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_left1_\n__________________________________________________________________________________________________\nactivation_39 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_right1\n__________________________________________________________________________________________________\nactivation_41 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_left2_\n__________________________________________________________________________________________________\nactivation_43 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_right2\n__________________________________________________________________________________________________\nactivation_45 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_left5_\n__________________________________________________________________________________________________\nseparable_conv_2_normal_left1_1 (None, 8, 8, 168) 32424 activation_37[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right1_ (None, 8, 8, 168) 29736 activation_39[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left2_1 (None, 8, 8, 168) 32424 activation_41[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right2_ (None, 8, 8, 168) 29736 activation_43[0][0] 
\n__________________________________________________________________________________________________\nseparable_conv_2_normal_left5_1 (None, 8, 8, 168) 29736 activation_45[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 8, 8, 168) 672 separable_conv_2_normal_left1_1[0\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 8, 8, 168) 672 separable_conv_2_normal_right1_1[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 8, 8, 168) 672 separable_conv_2_normal_left2_1[0\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 8, 8, 168) 672 separable_conv_2_normal_right2_1[\n__________________________________________________________________________________________________\nnormal_left3_1 (AveragePooling2 (None, 8, 8, 168) 0 normal_bn_1_1[0][0] \n__________________________________________________________________________________________________\nnormal_left4_1 (AveragePooling2 (None, 8, 8, 168) 0 adjust_bn_1[0][0] \n__________________________________________________________________________________________________\nnormal_right4_1 (AveragePooling (None, 8, 8, 168) 0 adjust_bn_1[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 8, 8, 168) 672 separable_conv_2_normal_left5_1[0\n__________________________________________________________________________________________________\nnormal_add_1_1 (Add) (None, 8, 8, 168) 0 separable_conv_2_bn_normal_left1_\n separable_conv_2_bn_normal_right1\n__________________________________________________________________________________________________\nnormal_add_2_1 (Add) (None, 8, 8, 168) 0 separable_conv_2_bn_normal_left2_\n separable_conv_2_bn_normal_right2\n__________________________________________________________________________________________________\nnormal_add_3_1 (Add) (None, 8, 8, 168) 0 normal_left3_1[0][0] \n adjust_bn_1[0][0] \n__________________________________________________________________________________________________\nnormal_add_4_1 (Add) (None, 8, 8, 168) 0 normal_left4_1[0][0] \n normal_right4_1[0][0] \n__________________________________________________________________________________________________\nnormal_add_5_1 (Add) (None, 8, 8, 168) 0 separable_conv_2_bn_normal_left5_\n normal_bn_1_1[0][0] \n__________________________________________________________________________________________________\nnormal_concat_1 (Concatenate) (None, 8, 8, 1008) 0 adjust_bn_1[0][0] \n normal_add_1_1[0][0] \n normal_add_2_1[0][0] \n normal_add_3_1[0][0] \n normal_add_4_1[0][0] \n normal_add_5_1[0][0] \n__________________________________________________________________________________________________\nactivation_46 (Activation) (None, 8, 8, 1008) 0 normal_concat_0[0][0] \n__________________________________________________________________________________________________\nactivation_47 (Activation) (None, 8, 8, 1008) 0 normal_concat_1[0][0] \n__________________________________________________________________________________________________\nadjust_conv_projection_2 (Conv2 (None, 8, 8, 168) 169344 activation_46[0][0] 
\n__________________________________________________________________________________________________\nnormal_conv_1_2 (Conv2D) (None, 8, 8, 168) 169344 activation_47[0][0] \n__________________________________________________________________________________________________\nadjust_bn_2 (BatchNormalization (None, 8, 8, 168) 672 adjust_conv_projection_2[0][0] \n__________________________________________________________________________________________________\nnormal_bn_1_2 (BatchNormalizati (None, 8, 8, 168) 672 normal_conv_1_2[0][0] \n__________________________________________________________________________________________________\nactivation_48 (Activation) (None, 8, 8, 168) 0 normal_bn_1_2[0][0] \n__________________________________________________________________________________________________\nactivation_50 (Activation) (None, 8, 8, 168) 0 adjust_bn_2[0][0] \n__________________________________________________________________________________________________\nactivation_52 (Activation) (None, 8, 8, 168) 0 adjust_bn_2[0][0] \n__________________________________________________________________________________________________\nactivation_54 (Activation) (None, 8, 8, 168) 0 adjust_bn_2[0][0] \n__________________________________________________________________________________________________\nactivation_56 (Activation) (None, 8, 8, 168) 0 normal_bn_1_2[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left1_2 (None, 8, 8, 168) 32424 activation_48[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right1_ (None, 8, 8, 168) 29736 activation_50[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left2_2 (None, 8, 8, 168) 32424 activation_52[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right2_ (None, 8, 8, 168) 29736 activation_54[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left5_2 (None, 8, 8, 168) 29736 activation_56[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 8, 8, 168) 672 separable_conv_1_normal_left1_2[0\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 8, 8, 168) 672 separable_conv_1_normal_right1_2[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 8, 8, 168) 672 separable_conv_1_normal_left2_2[0\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 8, 8, 168) 672 separable_conv_1_normal_right2_2[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 8, 8, 168) 672 separable_conv_1_normal_left5_2[0\n__________________________________________________________________________________________________\nactivation_49 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_left1_\n__________________________________________________________________________________________________\nactivation_51 (Activation) (None, 8, 8, 168) 0 
separable_conv_1_bn_normal_right1\n__________________________________________________________________________________________________\nactivation_53 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_left2_\n__________________________________________________________________________________________________\nactivation_55 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_right2\n__________________________________________________________________________________________________\nactivation_57 (Activation) (None, 8, 8, 168) 0 separable_conv_1_bn_normal_left5_\n__________________________________________________________________________________________________\nseparable_conv_2_normal_left1_2 (None, 8, 8, 168) 32424 activation_49[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right1_ (None, 8, 8, 168) 29736 activation_51[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left2_2 (None, 8, 8, 168) 32424 activation_53[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right2_ (None, 8, 8, 168) 29736 activation_55[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left5_2 (None, 8, 8, 168) 29736 activation_57[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 8, 8, 168) 672 separable_conv_2_normal_left1_2[0\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 8, 8, 168) 672 separable_conv_2_normal_right1_2[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 8, 8, 168) 672 separable_conv_2_normal_left2_2[0\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 8, 8, 168) 672 separable_conv_2_normal_right2_2[\n__________________________________________________________________________________________________\nnormal_left3_2 (AveragePooling2 (None, 8, 8, 168) 0 normal_bn_1_2[0][0] \n__________________________________________________________________________________________________\nnormal_left4_2 (AveragePooling2 (None, 8, 8, 168) 0 adjust_bn_2[0][0] \n__________________________________________________________________________________________________\nnormal_right4_2 (AveragePooling (None, 8, 8, 168) 0 adjust_bn_2[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 8, 8, 168) 672 separable_conv_2_normal_left5_2[0\n__________________________________________________________________________________________________\nnormal_add_1_2 (Add) (None, 8, 8, 168) 0 separable_conv_2_bn_normal_left1_\n separable_conv_2_bn_normal_right1\n__________________________________________________________________________________________________\nnormal_add_2_2 (Add) (None, 8, 8, 168) 0 separable_conv_2_bn_normal_left2_\n separable_conv_2_bn_normal_right2\n__________________________________________________________________________________________________\nnormal_add_3_2 (Add) (None, 8, 8, 168) 0 normal_left3_2[0][0] \n adjust_bn_2[0][0] 
[Model summary continued. NASNet-style normal cells 2-5 operate on 8x8x168 feature maps, each ending in an 8x8x1008 concatenation; the reduction cell "reduce_6" halves the spatial resolution and doubles the cell width, producing 4x4x336 branches and a 4x4x1344 concatenation; normal cells 7 onward then operate on 4x4x336 feature maps, each ending in a 4x4x2016 concatenation. Every cell is built from ReLU activations, separable convolutions with batch normalization, 1x1 projection convolutions, average/max pooling, and Add/Concatenate merges; the original output lists each layer with its output shape, parameter count, and input connections. The listing continues in the same pattern.]
(None, 4, 4, 336) 0 separable_conv_1_bn_normal_right1\n__________________________________________________________________________________________________\nactivation_160 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_left2_\n__________________________________________________________________________________________________\nactivation_162 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_right2\n__________________________________________________________________________________________________\nactivation_164 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_left5_\n__________________________________________________________________________________________________\nseparable_conv_2_normal_left1_1 (None, 4, 4, 336) 121296 activation_156[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right1_ (None, 4, 4, 336) 115920 activation_158[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left2_1 (None, 4, 4, 336) 121296 activation_160[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right2_ (None, 4, 4, 336) 115920 activation_162[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left5_1 (None, 4, 4, 336) 115920 activation_164[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_2_normal_left1_11[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_2_normal_right1_11\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_2_normal_left2_11[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_2_normal_right2_11\n__________________________________________________________________________________________________\nnormal_left3_11 (AveragePooling (None, 4, 4, 336) 0 normal_bn_1_11[0][0] \n__________________________________________________________________________________________________\nnormal_left4_11 (AveragePooling (None, 4, 4, 336) 0 adjust_bn_11[0][0] \n__________________________________________________________________________________________________\nnormal_right4_11 (AveragePoolin (None, 4, 4, 336) 0 adjust_bn_11[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_2_normal_left5_11[\n__________________________________________________________________________________________________\nnormal_add_1_11 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_normal_left1_\n separable_conv_2_bn_normal_right1\n__________________________________________________________________________________________________\nnormal_add_2_11 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_normal_left2_\n separable_conv_2_bn_normal_right2\n__________________________________________________________________________________________________\nnormal_add_3_11 (Add) (None, 4, 4, 336) 0 
normal_left3_11[0][0] \n adjust_bn_11[0][0] \n__________________________________________________________________________________________________\nnormal_add_4_11 (Add) (None, 4, 4, 336) 0 normal_left4_11[0][0] \n normal_right4_11[0][0] \n__________________________________________________________________________________________________\nnormal_add_5_11 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_normal_left5_\n normal_bn_1_11[0][0] \n__________________________________________________________________________________________________\nnormal_concat_11 (Concatenate) (None, 4, 4, 2016) 0 adjust_bn_11[0][0] \n normal_add_1_11[0][0] \n normal_add_2_11[0][0] \n normal_add_3_11[0][0] \n normal_add_4_11[0][0] \n normal_add_5_11[0][0] \n__________________________________________________________________________________________________\nactivation_165 (Activation) (None, 4, 4, 2016) 0 normal_concat_10[0][0] \n__________________________________________________________________________________________________\nactivation_166 (Activation) (None, 4, 4, 2016) 0 normal_concat_11[0][0] \n__________________________________________________________________________________________________\nadjust_conv_projection_12 (Conv (None, 4, 4, 336) 677376 activation_165[0][0] \n__________________________________________________________________________________________________\nnormal_conv_1_12 (Conv2D) (None, 4, 4, 336) 677376 activation_166[0][0] \n__________________________________________________________________________________________________\nadjust_bn_12 (BatchNormalizatio (None, 4, 4, 336) 1344 adjust_conv_projection_12[0][0] \n__________________________________________________________________________________________________\nnormal_bn_1_12 (BatchNormalizat (None, 4, 4, 336) 1344 normal_conv_1_12[0][0] \n__________________________________________________________________________________________________\nactivation_167 (Activation) (None, 4, 4, 336) 0 normal_bn_1_12[0][0] \n__________________________________________________________________________________________________\nactivation_169 (Activation) (None, 4, 4, 336) 0 adjust_bn_12[0][0] \n__________________________________________________________________________________________________\nactivation_171 (Activation) (None, 4, 4, 336) 0 adjust_bn_12[0][0] \n__________________________________________________________________________________________________\nactivation_173 (Activation) (None, 4, 4, 336) 0 adjust_bn_12[0][0] \n__________________________________________________________________________________________________\nactivation_175 (Activation) (None, 4, 4, 336) 0 normal_bn_1_12[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left1_1 (None, 4, 4, 336) 121296 activation_167[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right1_ (None, 4, 4, 336) 115920 activation_169[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left2_1 (None, 4, 4, 336) 121296 activation_171[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right2_ (None, 4, 4, 336) 115920 activation_173[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left5_1 (None, 4, 4, 336) 115920 activation_175[0][0] 
\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_1_normal_left1_12[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_1_normal_right1_12\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_1_normal_left2_12[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_1_normal_right2_12\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_1_normal_left5_12[\n__________________________________________________________________________________________________\nactivation_168 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_left1_\n__________________________________________________________________________________________________\nactivation_170 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_right1\n__________________________________________________________________________________________________\nactivation_172 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_left2_\n__________________________________________________________________________________________________\nactivation_174 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_right2\n__________________________________________________________________________________________________\nactivation_176 (Activation) (None, 4, 4, 336) 0 separable_conv_1_bn_normal_left5_\n__________________________________________________________________________________________________\nseparable_conv_2_normal_left1_1 (None, 4, 4, 336) 121296 activation_168[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right1_ (None, 4, 4, 336) 115920 activation_170[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left2_1 (None, 4, 4, 336) 121296 activation_172[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right2_ (None, 4, 4, 336) 115920 activation_174[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left5_1 (None, 4, 4, 336) 115920 activation_176[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_2_normal_left1_12[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 4, 4, 336) 1344 separable_conv_2_normal_right1_12\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_2_normal_left2_12[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 4, 4, 336) 1344 
separable_conv_2_normal_right2_12\n__________________________________________________________________________________________________\nnormal_left3_12 (AveragePooling (None, 4, 4, 336) 0 normal_bn_1_12[0][0] \n__________________________________________________________________________________________________\nnormal_left4_12 (AveragePooling (None, 4, 4, 336) 0 adjust_bn_12[0][0] \n__________________________________________________________________________________________________\nnormal_right4_12 (AveragePoolin (None, 4, 4, 336) 0 adjust_bn_12[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 4, 4, 336) 1344 separable_conv_2_normal_left5_12[\n__________________________________________________________________________________________________\nnormal_add_1_12 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_normal_left1_\n separable_conv_2_bn_normal_right1\n__________________________________________________________________________________________________\nnormal_add_2_12 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_normal_left2_\n separable_conv_2_bn_normal_right2\n__________________________________________________________________________________________________\nnormal_add_3_12 (Add) (None, 4, 4, 336) 0 normal_left3_12[0][0] \n adjust_bn_12[0][0] \n__________________________________________________________________________________________________\nnormal_add_4_12 (Add) (None, 4, 4, 336) 0 normal_left4_12[0][0] \n normal_right4_12[0][0] \n__________________________________________________________________________________________________\nnormal_add_5_12 (Add) (None, 4, 4, 336) 0 separable_conv_2_bn_normal_left5_\n normal_bn_1_12[0][0] \n__________________________________________________________________________________________________\nnormal_concat_12 (Concatenate) (None, 4, 4, 2016) 0 adjust_bn_12[0][0] \n normal_add_1_12[0][0] \n normal_add_2_12[0][0] \n normal_add_3_12[0][0] \n normal_add_4_12[0][0] \n normal_add_5_12[0][0] \n__________________________________________________________________________________________________\nactivation_178 (Activation) (None, 4, 4, 2016) 0 normal_concat_12[0][0] \n__________________________________________________________________________________________________\nactivation_177 (Activation) (None, 4, 4, 2016) 0 normal_concat_11[0][0] \n__________________________________________________________________________________________________\nreduction_conv_1_reduce_12 (Con (None, 4, 4, 672) 1354752 activation_178[0][0] \n__________________________________________________________________________________________________\nadjust_conv_projection_reduce_1 (None, 4, 4, 672) 1354752 activation_177[0][0] \n__________________________________________________________________________________________________\nreduction_bn_1_reduce_12 (Batch (None, 4, 4, 672) 2688 reduction_conv_1_reduce_12[0][0] \n__________________________________________________________________________________________________\nadjust_bn_reduce_12 (BatchNorma (None, 4, 4, 672) 2688 adjust_conv_projection_reduce_12[\n__________________________________________________________________________________________________\nactivation_179 (Activation) (None, 4, 4, 672) 0 reduction_bn_1_reduce_12[0][0] \n__________________________________________________________________________________________________\nactivation_181 (Activation) (None, 4, 4, 672) 0 adjust_bn_reduce_12[0][0] 
\n__________________________________________________________________________________________________\nseparable_conv_1_pad_reduction_ (None, 7, 7, 672) 0 activation_179[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_pad_reduction_ (None, 9, 9, 672) 0 activation_181[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_reduction_left (None, 2, 2, 672) 468384 separable_conv_1_pad_reduction_le\n__________________________________________________________________________________________________\nseparable_conv_1_reduction_righ (None, 2, 2, 672) 484512 separable_conv_1_pad_reduction_ri\n__________________________________________________________________________________________________\nseparable_conv_1_bn_reduction_l (None, 2, 2, 672) 2688 separable_conv_1_reduction_left1_\n__________________________________________________________________________________________________\nseparable_conv_1_bn_reduction_r (None, 2, 2, 672) 2688 separable_conv_1_reduction_right1\n__________________________________________________________________________________________________\nactivation_180 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_reduction_lef\n__________________________________________________________________________________________________\nactivation_182 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_reduction_rig\n__________________________________________________________________________________________________\nseparable_conv_2_reduction_left (None, 2, 2, 672) 468384 activation_180[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_reduction_righ (None, 2, 2, 672) 484512 activation_182[0][0] \n__________________________________________________________________________________________________\nactivation_183 (Activation) (None, 4, 4, 672) 0 adjust_bn_reduce_12[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_reduction_l (None, 2, 2, 672) 2688 separable_conv_2_reduction_left1_\n__________________________________________________________________________________________________\nseparable_conv_2_bn_reduction_r (None, 2, 2, 672) 2688 separable_conv_2_reduction_right1\n__________________________________________________________________________________________________\nseparable_conv_1_pad_reduction_ (None, 9, 9, 672) 0 activation_183[0][0] \n__________________________________________________________________________________________________\nactivation_185 (Activation) (None, 4, 4, 672) 0 adjust_bn_reduce_12[0][0] \n__________________________________________________________________________________________________\nreduction_add_1_reduce_12 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_reduction_lef\n separable_conv_2_bn_reduction_rig\n__________________________________________________________________________________________________\nseparable_conv_1_reduction_righ (None, 2, 2, 672) 484512 separable_conv_1_pad_reduction_ri\n__________________________________________________________________________________________________\nseparable_conv_1_pad_reduction_ (None, 7, 7, 672) 0 activation_185[0][0] \n__________________________________________________________________________________________________\nactivation_187 (Activation) (None, 2, 2, 672) 0 reduction_add_1_reduce_12[0][0] 
\n__________________________________________________________________________________________________\nseparable_conv_1_bn_reduction_r (None, 2, 2, 672) 2688 separable_conv_1_reduction_right2\n__________________________________________________________________________________________________\nseparable_conv_1_reduction_righ (None, 2, 2, 672) 468384 separable_conv_1_pad_reduction_ri\n__________________________________________________________________________________________________\nseparable_conv_1_reduction_left (None, 2, 2, 672) 457632 activation_187[0][0] \n__________________________________________________________________________________________________\nactivation_184 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_reduction_rig\n__________________________________________________________________________________________________\nseparable_conv_1_bn_reduction_r (None, 2, 2, 672) 2688 separable_conv_1_reduction_right3\n__________________________________________________________________________________________________\nseparable_conv_1_bn_reduction_l (None, 2, 2, 672) 2688 separable_conv_1_reduction_left4_\n__________________________________________________________________________________________________\nreduction_pad_1_reduce_12 (Zero (None, 5, 5, 672) 0 reduction_bn_1_reduce_12[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_reduction_righ (None, 2, 2, 672) 484512 activation_184[0][0] \n__________________________________________________________________________________________________\nactivation_186 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_reduction_rig\n__________________________________________________________________________________________________\nactivation_188 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_reduction_lef\n__________________________________________________________________________________________________\nreduction_left2_reduce_12 (MaxP (None, 2, 2, 672) 0 reduction_pad_1_reduce_12[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_reduction_r (None, 2, 2, 672) 2688 separable_conv_2_reduction_right2\n__________________________________________________________________________________________________\nseparable_conv_2_reduction_righ (None, 2, 2, 672) 468384 activation_186[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_reduction_left (None, 2, 2, 672) 457632 activation_188[0][0] \n__________________________________________________________________________________________________\nadjust_relu_1_13 (Activation) (None, 4, 4, 2016) 0 normal_concat_11[0][0] \n__________________________________________________________________________________________________\nreduction_add_2_reduce_12 (Add) (None, 2, 2, 672) 0 reduction_left2_reduce_12[0][0] \n separable_conv_2_bn_reduction_rig\n__________________________________________________________________________________________________\nreduction_left3_reduce_12 (Aver (None, 2, 2, 672) 0 reduction_pad_1_reduce_12[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_reduction_r (None, 2, 2, 672) 2688 separable_conv_2_reduction_right3\n__________________________________________________________________________________________________\nreduction_left4_reduce_12 (Aver (None, 2, 2, 672) 0 reduction_add_1_reduce_12[0][0] 
\n__________________________________________________________________________________________________\nseparable_conv_2_bn_reduction_l (None, 2, 2, 672) 2688 separable_conv_2_reduction_left4_\n__________________________________________________________________________________________________\nreduction_right5_reduce_12 (Max (None, 2, 2, 672) 0 reduction_pad_1_reduce_12[0][0] \n__________________________________________________________________________________________________\nzero_padding2d_4 (ZeroPadding2D (None, 5, 5, 2016) 0 adjust_relu_1_13[0][0] \n__________________________________________________________________________________________________\nreduction_add3_reduce_12 (Add) (None, 2, 2, 672) 0 reduction_left3_reduce_12[0][0] \n separable_conv_2_bn_reduction_rig\n__________________________________________________________________________________________________\nadd_4 (Add) (None, 2, 2, 672) 0 reduction_add_2_reduce_12[0][0] \n reduction_left4_reduce_12[0][0] \n__________________________________________________________________________________________________\nreduction_add4_reduce_12 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_reduction_lef\n reduction_right5_reduce_12[0][0] \n__________________________________________________________________________________________________\ncropping2d_4 (Cropping2D) (None, 4, 4, 2016) 0 zero_padding2d_4[0][0] \n__________________________________________________________________________________________________\nreduction_concat_reduce_12 (Con (None, 2, 2, 2688) 0 reduction_add_2_reduce_12[0][0] \n reduction_add3_reduce_12[0][0] \n add_4[0][0] \n reduction_add4_reduce_12[0][0] \n__________________________________________________________________________________________________\nadjust_avg_pool_1_13 (AveragePo (None, 2, 2, 2016) 0 adjust_relu_1_13[0][0] \n__________________________________________________________________________________________________\nadjust_avg_pool_2_13 (AveragePo (None, 2, 2, 2016) 0 cropping2d_4[0][0] \n__________________________________________________________________________________________________\nadjust_conv_1_13 (Conv2D) (None, 2, 2, 336) 677376 adjust_avg_pool_1_13[0][0] \n__________________________________________________________________________________________________\nadjust_conv_2_13 (Conv2D) (None, 2, 2, 336) 677376 adjust_avg_pool_2_13[0][0] \n__________________________________________________________________________________________________\nactivation_189 (Activation) (None, 2, 2, 2688) 0 reduction_concat_reduce_12[0][0] \n__________________________________________________________________________________________________\nconcatenate_4 (Concatenate) (None, 2, 2, 672) 0 adjust_conv_1_13[0][0] \n adjust_conv_2_13[0][0] \n__________________________________________________________________________________________________\nnormal_conv_1_13 (Conv2D) (None, 2, 2, 672) 1806336 activation_189[0][0] \n__________________________________________________________________________________________________\nadjust_bn_13 (BatchNormalizatio (None, 2, 2, 672) 2688 concatenate_4[0][0] \n__________________________________________________________________________________________________\nnormal_bn_1_13 (BatchNormalizat (None, 2, 2, 672) 2688 normal_conv_1_13[0][0] \n__________________________________________________________________________________________________\nactivation_190 (Activation) (None, 2, 2, 672) 0 normal_bn_1_13[0][0] \n__________________________________________________________________________________________________\nactivation_192 
(Activation) (None, 2, 2, 672) 0 adjust_bn_13[0][0] \n__________________________________________________________________________________________________\nactivation_194 (Activation) (None, 2, 2, 672) 0 adjust_bn_13[0][0] \n__________________________________________________________________________________________________\nactivation_196 (Activation) (None, 2, 2, 672) 0 adjust_bn_13[0][0] \n__________________________________________________________________________________________________\nactivation_198 (Activation) (None, 2, 2, 672) 0 normal_bn_1_13[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left1_1 (None, 2, 2, 672) 468384 activation_190[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right1_ (None, 2, 2, 672) 457632 activation_192[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left2_1 (None, 2, 2, 672) 468384 activation_194[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right2_ (None, 2, 2, 672) 457632 activation_196[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left5_1 (None, 2, 2, 672) 457632 activation_198[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left1_13[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_1_normal_right1_13\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left2_13[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_1_normal_right2_13\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left5_13[\n__________________________________________________________________________________________________\nactivation_191 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left1_\n__________________________________________________________________________________________________\nactivation_193 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_right1\n__________________________________________________________________________________________________\nactivation_195 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left2_\n__________________________________________________________________________________________________\nactivation_197 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_right2\n__________________________________________________________________________________________________\nactivation_199 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left5_\n__________________________________________________________________________________________________\nseparable_conv_2_normal_left1_1 (None, 2, 2, 672) 468384 activation_191[0][0] 
\n__________________________________________________________________________________________________\nseparable_conv_2_normal_right1_ (None, 2, 2, 672) 457632 activation_193[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left2_1 (None, 2, 2, 672) 468384 activation_195[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right2_ (None, 2, 2, 672) 457632 activation_197[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left5_1 (None, 2, 2, 672) 457632 activation_199[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left1_13[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_2_normal_right1_13\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left2_13[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_2_normal_right2_13\n__________________________________________________________________________________________________\nnormal_left3_13 (AveragePooling (None, 2, 2, 672) 0 normal_bn_1_13[0][0] \n__________________________________________________________________________________________________\nnormal_left4_13 (AveragePooling (None, 2, 2, 672) 0 adjust_bn_13[0][0] \n__________________________________________________________________________________________________\nnormal_right4_13 (AveragePoolin (None, 2, 2, 672) 0 adjust_bn_13[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left5_13[\n__________________________________________________________________________________________________\nnormal_add_1_13 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left1_\n separable_conv_2_bn_normal_right1\n__________________________________________________________________________________________________\nnormal_add_2_13 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left2_\n separable_conv_2_bn_normal_right2\n__________________________________________________________________________________________________\nnormal_add_3_13 (Add) (None, 2, 2, 672) 0 normal_left3_13[0][0] \n adjust_bn_13[0][0] \n__________________________________________________________________________________________________\nnormal_add_4_13 (Add) (None, 2, 2, 672) 0 normal_left4_13[0][0] \n normal_right4_13[0][0] \n__________________________________________________________________________________________________\nnormal_add_5_13 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left5_\n normal_bn_1_13[0][0] \n__________________________________________________________________________________________________\nnormal_concat_13 (Concatenate) (None, 2, 2, 4032) 0 adjust_bn_13[0][0] \n normal_add_1_13[0][0] \n normal_add_2_13[0][0] \n normal_add_3_13[0][0] \n normal_add_4_13[0][0] \n normal_add_5_13[0][0] 
\n__________________________________________________________________________________________________\nactivation_200 (Activation) (None, 2, 2, 2688) 0 reduction_concat_reduce_12[0][0] \n__________________________________________________________________________________________________\nactivation_201 (Activation) (None, 2, 2, 4032) 0 normal_concat_13[0][0] \n__________________________________________________________________________________________________\nadjust_conv_projection_14 (Conv (None, 2, 2, 672) 1806336 activation_200[0][0] \n__________________________________________________________________________________________________\nnormal_conv_1_14 (Conv2D) (None, 2, 2, 672) 2709504 activation_201[0][0] \n__________________________________________________________________________________________________\nadjust_bn_14 (BatchNormalizatio (None, 2, 2, 672) 2688 adjust_conv_projection_14[0][0] \n__________________________________________________________________________________________________\nnormal_bn_1_14 (BatchNormalizat (None, 2, 2, 672) 2688 normal_conv_1_14[0][0] \n__________________________________________________________________________________________________\nactivation_202 (Activation) (None, 2, 2, 672) 0 normal_bn_1_14[0][0] \n__________________________________________________________________________________________________\nactivation_204 (Activation) (None, 2, 2, 672) 0 adjust_bn_14[0][0] \n__________________________________________________________________________________________________\nactivation_206 (Activation) (None, 2, 2, 672) 0 adjust_bn_14[0][0] \n__________________________________________________________________________________________________\nactivation_208 (Activation) (None, 2, 2, 672) 0 adjust_bn_14[0][0] \n__________________________________________________________________________________________________\nactivation_210 (Activation) (None, 2, 2, 672) 0 normal_bn_1_14[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left1_1 (None, 2, 2, 672) 468384 activation_202[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right1_ (None, 2, 2, 672) 457632 activation_204[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left2_1 (None, 2, 2, 672) 468384 activation_206[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right2_ (None, 2, 2, 672) 457632 activation_208[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left5_1 (None, 2, 2, 672) 457632 activation_210[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left1_14[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_1_normal_right1_14\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left2_14[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 2, 2, 672) 
2688 separable_conv_1_normal_right2_14\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left5_14[\n__________________________________________________________________________________________________\nactivation_203 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left1_\n__________________________________________________________________________________________________\nactivation_205 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_right1\n__________________________________________________________________________________________________\nactivation_207 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left2_\n__________________________________________________________________________________________________\nactivation_209 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_right2\n__________________________________________________________________________________________________\nactivation_211 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left5_\n__________________________________________________________________________________________________\nseparable_conv_2_normal_left1_1 (None, 2, 2, 672) 468384 activation_203[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right1_ (None, 2, 2, 672) 457632 activation_205[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left2_1 (None, 2, 2, 672) 468384 activation_207[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right2_ (None, 2, 2, 672) 457632 activation_209[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left5_1 (None, 2, 2, 672) 457632 activation_211[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left1_14[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_2_normal_right1_14\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left2_14[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_2_normal_right2_14\n__________________________________________________________________________________________________\nnormal_left3_14 (AveragePooling (None, 2, 2, 672) 0 normal_bn_1_14[0][0] \n__________________________________________________________________________________________________\nnormal_left4_14 (AveragePooling (None, 2, 2, 672) 0 adjust_bn_14[0][0] \n__________________________________________________________________________________________________\nnormal_right4_14 (AveragePoolin (None, 2, 2, 672) 0 adjust_bn_14[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 
separable_conv_2_normal_left5_14[\n__________________________________________________________________________________________________\nnormal_add_1_14 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left1_\n separable_conv_2_bn_normal_right1\n__________________________________________________________________________________________________\nnormal_add_2_14 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left2_\n separable_conv_2_bn_normal_right2\n__________________________________________________________________________________________________\nnormal_add_3_14 (Add) (None, 2, 2, 672) 0 normal_left3_14[0][0] \n adjust_bn_14[0][0] \n__________________________________________________________________________________________________\nnormal_add_4_14 (Add) (None, 2, 2, 672) 0 normal_left4_14[0][0] \n normal_right4_14[0][0] \n__________________________________________________________________________________________________\nnormal_add_5_14 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left5_\n normal_bn_1_14[0][0] \n__________________________________________________________________________________________________\nnormal_concat_14 (Concatenate) (None, 2, 2, 4032) 0 adjust_bn_14[0][0] \n normal_add_1_14[0][0] \n normal_add_2_14[0][0] \n normal_add_3_14[0][0] \n normal_add_4_14[0][0] \n normal_add_5_14[0][0] \n__________________________________________________________________________________________________\nactivation_212 (Activation) (None, 2, 2, 4032) 0 normal_concat_13[0][0] \n__________________________________________________________________________________________________\nactivation_213 (Activation) (None, 2, 2, 4032) 0 normal_concat_14[0][0] \n__________________________________________________________________________________________________\nadjust_conv_projection_15 (Conv (None, 2, 2, 672) 2709504 activation_212[0][0] \n__________________________________________________________________________________________________\nnormal_conv_1_15 (Conv2D) (None, 2, 2, 672) 2709504 activation_213[0][0] \n__________________________________________________________________________________________________\nadjust_bn_15 (BatchNormalizatio (None, 2, 2, 672) 2688 adjust_conv_projection_15[0][0] \n__________________________________________________________________________________________________\nnormal_bn_1_15 (BatchNormalizat (None, 2, 2, 672) 2688 normal_conv_1_15[0][0] \n__________________________________________________________________________________________________\nactivation_214 (Activation) (None, 2, 2, 672) 0 normal_bn_1_15[0][0] \n__________________________________________________________________________________________________\nactivation_216 (Activation) (None, 2, 2, 672) 0 adjust_bn_15[0][0] \n__________________________________________________________________________________________________\nactivation_218 (Activation) (None, 2, 2, 672) 0 adjust_bn_15[0][0] \n__________________________________________________________________________________________________\nactivation_220 (Activation) (None, 2, 2, 672) 0 adjust_bn_15[0][0] \n__________________________________________________________________________________________________\nactivation_222 (Activation) (None, 2, 2, 672) 0 normal_bn_1_15[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left1_1 (None, 2, 2, 672) 468384 activation_214[0][0] 
\n__________________________________________________________________________________________________\nseparable_conv_1_normal_right1_ (None, 2, 2, 672) 457632 activation_216[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left2_1 (None, 2, 2, 672) 468384 activation_218[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right2_ (None, 2, 2, 672) 457632 activation_220[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left5_1 (None, 2, 2, 672) 457632 activation_222[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left1_15[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_1_normal_right1_15\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left2_15[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_1_normal_right2_15\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left5_15[\n__________________________________________________________________________________________________\nactivation_215 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left1_\n__________________________________________________________________________________________________\nactivation_217 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_right1\n__________________________________________________________________________________________________\nactivation_219 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left2_\n__________________________________________________________________________________________________\nactivation_221 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_right2\n__________________________________________________________________________________________________\nactivation_223 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left5_\n__________________________________________________________________________________________________\nseparable_conv_2_normal_left1_1 (None, 2, 2, 672) 468384 activation_215[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right1_ (None, 2, 2, 672) 457632 activation_217[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left2_1 (None, 2, 2, 672) 468384 activation_219[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right2_ (None, 2, 2, 672) 457632 activation_221[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left5_1 (None, 2, 2, 672) 457632 activation_223[0][0] 
\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left1_15[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_2_normal_right1_15\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left2_15[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_2_normal_right2_15\n__________________________________________________________________________________________________\nnormal_left3_15 (AveragePooling (None, 2, 2, 672) 0 normal_bn_1_15[0][0] \n__________________________________________________________________________________________________\nnormal_left4_15 (AveragePooling (None, 2, 2, 672) 0 adjust_bn_15[0][0] \n__________________________________________________________________________________________________\nnormal_right4_15 (AveragePoolin (None, 2, 2, 672) 0 adjust_bn_15[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left5_15[\n__________________________________________________________________________________________________\nnormal_add_1_15 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left1_\n separable_conv_2_bn_normal_right1\n__________________________________________________________________________________________________\nnormal_add_2_15 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left2_\n separable_conv_2_bn_normal_right2\n__________________________________________________________________________________________________\nnormal_add_3_15 (Add) (None, 2, 2, 672) 0 normal_left3_15[0][0] \n adjust_bn_15[0][0] \n__________________________________________________________________________________________________\nnormal_add_4_15 (Add) (None, 2, 2, 672) 0 normal_left4_15[0][0] \n normal_right4_15[0][0] \n__________________________________________________________________________________________________\nnormal_add_5_15 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left5_\n normal_bn_1_15[0][0] \n__________________________________________________________________________________________________\nnormal_concat_15 (Concatenate) (None, 2, 2, 4032) 0 adjust_bn_15[0][0] \n normal_add_1_15[0][0] \n normal_add_2_15[0][0] \n normal_add_3_15[0][0] \n normal_add_4_15[0][0] \n normal_add_5_15[0][0] \n__________________________________________________________________________________________________\nactivation_224 (Activation) (None, 2, 2, 4032) 0 normal_concat_14[0][0] \n__________________________________________________________________________________________________\nactivation_225 (Activation) (None, 2, 2, 4032) 0 normal_concat_15[0][0] \n__________________________________________________________________________________________________\nadjust_conv_projection_16 (Conv (None, 2, 2, 672) 2709504 activation_224[0][0] \n__________________________________________________________________________________________________\nnormal_conv_1_16 (Conv2D) (None, 2, 2, 672) 2709504 activation_225[0][0] 
\n__________________________________________________________________________________________________\nadjust_bn_16 (BatchNormalizatio (None, 2, 2, 672) 2688 adjust_conv_projection_16[0][0] \n__________________________________________________________________________________________________\nnormal_bn_1_16 (BatchNormalizat (None, 2, 2, 672) 2688 normal_conv_1_16[0][0] \n__________________________________________________________________________________________________\nactivation_226 (Activation) (None, 2, 2, 672) 0 normal_bn_1_16[0][0] \n__________________________________________________________________________________________________\nactivation_228 (Activation) (None, 2, 2, 672) 0 adjust_bn_16[0][0] \n__________________________________________________________________________________________________\nactivation_230 (Activation) (None, 2, 2, 672) 0 adjust_bn_16[0][0] \n__________________________________________________________________________________________________\nactivation_232 (Activation) (None, 2, 2, 672) 0 adjust_bn_16[0][0] \n__________________________________________________________________________________________________\nactivation_234 (Activation) (None, 2, 2, 672) 0 normal_bn_1_16[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left1_1 (None, 2, 2, 672) 468384 activation_226[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right1_ (None, 2, 2, 672) 457632 activation_228[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left2_1 (None, 2, 2, 672) 468384 activation_230[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right2_ (None, 2, 2, 672) 457632 activation_232[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left5_1 (None, 2, 2, 672) 457632 activation_234[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left1_16[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_1_normal_right1_16\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left2_16[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_1_normal_right2_16\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left5_16[\n__________________________________________________________________________________________________\nactivation_227 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left1_\n__________________________________________________________________________________________________\nactivation_229 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_right1\n__________________________________________________________________________________________________\nactivation_231 
(Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left2_\n__________________________________________________________________________________________________\nactivation_233 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_right2\n__________________________________________________________________________________________________\nactivation_235 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left5_\n__________________________________________________________________________________________________\nseparable_conv_2_normal_left1_1 (None, 2, 2, 672) 468384 activation_227[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right1_ (None, 2, 2, 672) 457632 activation_229[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left2_1 (None, 2, 2, 672) 468384 activation_231[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right2_ (None, 2, 2, 672) 457632 activation_233[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left5_1 (None, 2, 2, 672) 457632 activation_235[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left1_16[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_2_normal_right1_16\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left2_16[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_2_normal_right2_16\n__________________________________________________________________________________________________\nnormal_left3_16 (AveragePooling (None, 2, 2, 672) 0 normal_bn_1_16[0][0] \n__________________________________________________________________________________________________\nnormal_left4_16 (AveragePooling (None, 2, 2, 672) 0 adjust_bn_16[0][0] \n__________________________________________________________________________________________________\nnormal_right4_16 (AveragePoolin (None, 2, 2, 672) 0 adjust_bn_16[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left5_16[\n__________________________________________________________________________________________________\nnormal_add_1_16 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left1_\n separable_conv_2_bn_normal_right1\n__________________________________________________________________________________________________\nnormal_add_2_16 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left2_\n separable_conv_2_bn_normal_right2\n__________________________________________________________________________________________________\nnormal_add_3_16 (Add) (None, 2, 2, 672) 0 normal_left3_16[0][0] \n adjust_bn_16[0][0] \n__________________________________________________________________________________________________\nnormal_add_4_16 (Add) (None, 2, 2, 
672) 0 normal_left4_16[0][0] \n normal_right4_16[0][0] \n__________________________________________________________________________________________________\nnormal_add_5_16 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left5_\n normal_bn_1_16[0][0] \n__________________________________________________________________________________________________\nnormal_concat_16 (Concatenate) (None, 2, 2, 4032) 0 adjust_bn_16[0][0] \n normal_add_1_16[0][0] \n normal_add_2_16[0][0] \n normal_add_3_16[0][0] \n normal_add_4_16[0][0] \n normal_add_5_16[0][0] \n__________________________________________________________________________________________________\nactivation_236 (Activation) (None, 2, 2, 4032) 0 normal_concat_15[0][0] \n__________________________________________________________________________________________________\nactivation_237 (Activation) (None, 2, 2, 4032) 0 normal_concat_16[0][0] \n__________________________________________________________________________________________________\nadjust_conv_projection_17 (Conv (None, 2, 2, 672) 2709504 activation_236[0][0] \n__________________________________________________________________________________________________\nnormal_conv_1_17 (Conv2D) (None, 2, 2, 672) 2709504 activation_237[0][0] \n__________________________________________________________________________________________________\nadjust_bn_17 (BatchNormalizatio (None, 2, 2, 672) 2688 adjust_conv_projection_17[0][0] \n__________________________________________________________________________________________________\nnormal_bn_1_17 (BatchNormalizat (None, 2, 2, 672) 2688 normal_conv_1_17[0][0] \n__________________________________________________________________________________________________\nactivation_238 (Activation) (None, 2, 2, 672) 0 normal_bn_1_17[0][0] \n__________________________________________________________________________________________________\nactivation_240 (Activation) (None, 2, 2, 672) 0 adjust_bn_17[0][0] \n__________________________________________________________________________________________________\nactivation_242 (Activation) (None, 2, 2, 672) 0 adjust_bn_17[0][0] \n__________________________________________________________________________________________________\nactivation_244 (Activation) (None, 2, 2, 672) 0 adjust_bn_17[0][0] \n__________________________________________________________________________________________________\nactivation_246 (Activation) (None, 2, 2, 672) 0 normal_bn_1_17[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left1_1 (None, 2, 2, 672) 468384 activation_238[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right1_ (None, 2, 2, 672) 457632 activation_240[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left2_1 (None, 2, 2, 672) 468384 activation_242[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right2_ (None, 2, 2, 672) 457632 activation_244[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left5_1 (None, 2, 2, 672) 457632 activation_246[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 
separable_conv_1_normal_left1_17[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_1_normal_right1_17\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left2_17[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_1_normal_right2_17\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left5_17[\n__________________________________________________________________________________________________\nactivation_239 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left1_\n__________________________________________________________________________________________________\nactivation_241 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_right1\n__________________________________________________________________________________________________\nactivation_243 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left2_\n__________________________________________________________________________________________________\nactivation_245 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_right2\n__________________________________________________________________________________________________\nactivation_247 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left5_\n__________________________________________________________________________________________________\nseparable_conv_2_normal_left1_1 (None, 2, 2, 672) 468384 activation_239[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right1_ (None, 2, 2, 672) 457632 activation_241[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left2_1 (None, 2, 2, 672) 468384 activation_243[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right2_ (None, 2, 2, 672) 457632 activation_245[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left5_1 (None, 2, 2, 672) 457632 activation_247[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left1_17[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_2_normal_right1_17\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left2_17[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_2_normal_right2_17\n__________________________________________________________________________________________________\nnormal_left3_17 (AveragePooling (None, 2, 2, 672) 0 normal_bn_1_17[0][0] 
\n__________________________________________________________________________________________________\nnormal_left4_17 (AveragePooling (None, 2, 2, 672) 0 adjust_bn_17[0][0] \n__________________________________________________________________________________________________\nnormal_right4_17 (AveragePoolin (None, 2, 2, 672) 0 adjust_bn_17[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left5_17[\n__________________________________________________________________________________________________\nnormal_add_1_17 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left1_\n separable_conv_2_bn_normal_right1\n__________________________________________________________________________________________________\nnormal_add_2_17 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left2_\n separable_conv_2_bn_normal_right2\n__________________________________________________________________________________________________\nnormal_add_3_17 (Add) (None, 2, 2, 672) 0 normal_left3_17[0][0] \n adjust_bn_17[0][0] \n__________________________________________________________________________________________________\nnormal_add_4_17 (Add) (None, 2, 2, 672) 0 normal_left4_17[0][0] \n normal_right4_17[0][0] \n__________________________________________________________________________________________________\nnormal_add_5_17 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left5_\n normal_bn_1_17[0][0] \n__________________________________________________________________________________________________\nnormal_concat_17 (Concatenate) (None, 2, 2, 4032) 0 adjust_bn_17[0][0] \n normal_add_1_17[0][0] \n normal_add_2_17[0][0] \n normal_add_3_17[0][0] \n normal_add_4_17[0][0] \n normal_add_5_17[0][0] \n__________________________________________________________________________________________________\nactivation_248 (Activation) (None, 2, 2, 4032) 0 normal_concat_16[0][0] \n__________________________________________________________________________________________________\nactivation_249 (Activation) (None, 2, 2, 4032) 0 normal_concat_17[0][0] \n__________________________________________________________________________________________________\nadjust_conv_projection_18 (Conv (None, 2, 2, 672) 2709504 activation_248[0][0] \n__________________________________________________________________________________________________\nnormal_conv_1_18 (Conv2D) (None, 2, 2, 672) 2709504 activation_249[0][0] \n__________________________________________________________________________________________________\nadjust_bn_18 (BatchNormalizatio (None, 2, 2, 672) 2688 adjust_conv_projection_18[0][0] \n__________________________________________________________________________________________________\nnormal_bn_1_18 (BatchNormalizat (None, 2, 2, 672) 2688 normal_conv_1_18[0][0] \n__________________________________________________________________________________________________\nactivation_250 (Activation) (None, 2, 2, 672) 0 normal_bn_1_18[0][0] \n__________________________________________________________________________________________________\nactivation_252 (Activation) (None, 2, 2, 672) 0 adjust_bn_18[0][0] \n__________________________________________________________________________________________________\nactivation_254 (Activation) (None, 2, 2, 672) 0 adjust_bn_18[0][0] \n__________________________________________________________________________________________________\nactivation_256 (Activation) 
(None, 2, 2, 672) 0 adjust_bn_18[0][0] \n__________________________________________________________________________________________________\nactivation_258 (Activation) (None, 2, 2, 672) 0 normal_bn_1_18[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left1_1 (None, 2, 2, 672) 468384 activation_250[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right1_ (None, 2, 2, 672) 457632 activation_252[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left2_1 (None, 2, 2, 672) 468384 activation_254[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_right2_ (None, 2, 2, 672) 457632 activation_256[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_normal_left5_1 (None, 2, 2, 672) 457632 activation_258[0][0] \n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left1_18[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_1_normal_right1_18\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left2_18[\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_1_normal_right2_18\n__________________________________________________________________________________________________\nseparable_conv_1_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_1_normal_left5_18[\n__________________________________________________________________________________________________\nactivation_251 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left1_\n__________________________________________________________________________________________________\nactivation_253 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_right1\n__________________________________________________________________________________________________\nactivation_255 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left2_\n__________________________________________________________________________________________________\nactivation_257 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_right2\n__________________________________________________________________________________________________\nactivation_259 (Activation) (None, 2, 2, 672) 0 separable_conv_1_bn_normal_left5_\n__________________________________________________________________________________________________\nseparable_conv_2_normal_left1_1 (None, 2, 2, 672) 468384 activation_251[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_right1_ (None, 2, 2, 672) 457632 activation_253[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left2_1 (None, 2, 2, 672) 468384 activation_255[0][0] 
\n__________________________________________________________________________________________________\nseparable_conv_2_normal_right2_ (None, 2, 2, 672) 457632 activation_257[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_normal_left5_1 (None, 2, 2, 672) 457632 activation_259[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left1_18[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_2_normal_right1_18\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left2_18[\n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_righ (None, 2, 2, 672) 2688 separable_conv_2_normal_right2_18\n__________________________________________________________________________________________________\nnormal_left3_18 (AveragePooling (None, 2, 2, 672) 0 normal_bn_1_18[0][0] \n__________________________________________________________________________________________________\nnormal_left4_18 (AveragePooling (None, 2, 2, 672) 0 adjust_bn_18[0][0] \n__________________________________________________________________________________________________\nnormal_right4_18 (AveragePoolin (None, 2, 2, 672) 0 adjust_bn_18[0][0] \n__________________________________________________________________________________________________\nseparable_conv_2_bn_normal_left (None, 2, 2, 672) 2688 separable_conv_2_normal_left5_18[\n__________________________________________________________________________________________________\nnormal_add_1_18 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left1_\n separable_conv_2_bn_normal_right1\n__________________________________________________________________________________________________\nnormal_add_2_18 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left2_\n separable_conv_2_bn_normal_right2\n__________________________________________________________________________________________________\nnormal_add_3_18 (Add) (None, 2, 2, 672) 0 normal_left3_18[0][0] \n adjust_bn_18[0][0] \n__________________________________________________________________________________________________\nnormal_add_4_18 (Add) (None, 2, 2, 672) 0 normal_left4_18[0][0] \n normal_right4_18[0][0] \n__________________________________________________________________________________________________\nnormal_add_5_18 (Add) (None, 2, 2, 672) 0 separable_conv_2_bn_normal_left5_\n normal_bn_1_18[0][0] \n__________________________________________________________________________________________________\nnormal_concat_18 (Concatenate) (None, 2, 2, 4032) 0 adjust_bn_18[0][0] \n normal_add_1_18[0][0] \n normal_add_2_18[0][0] \n normal_add_3_18[0][0] \n normal_add_4_18[0][0] \n normal_add_5_18[0][0] \n__________________________________________________________________________________________________\nactivation_260 (Activation) (None, 2, 2, 4032) 0 normal_concat_18[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling2d_1 (Glo (None, 4032) 0 activation_260[0][0] 
\n__________________________________________________________________________________________________\ndropout_1 (Dropout) (None, 4032) 0 global_average_pooling2d_1[0][0] \n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 1024) 4129792 dropout_1[0][0] \n__________________________________________________________________________________________________\ndropout_2 (Dropout) (None, 1024) 0 dense_1[0][0] \n__________________________________________________________________________________________________\nfinal_output (Dense) (None, 1103) 1130575 dropout_2[0][0] \n==================================================================================================\nTotal params: 90,177,185\nTrainable params: 5,260,367\nNon-trainable params: 84,916,818\n__________________________________________________________________________________________________\n" ] ], [ [ "#### Train top layers", "_____no_output_____" ] ], [ [ "STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size\nSTEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size\nhistory = model.fit_generator(generator=train_generator,\n steps_per_epoch=STEP_SIZE_TRAIN,\n validation_data=valid_generator,\n validation_steps=STEP_SIZE_VALID,\n epochs=EPOCHS,\n callbacks=callbacks,\n verbose=2,\n max_queue_size=16, workers=3, use_multiprocessing=True)", "WARNING:tensorflow:From /opt/conda/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.cast instead.\nEpoch 1/30\n - 579s - loss: 0.0435 - acc: 0.9903 - categorical_accuracy: 0.0462 - val_loss: 0.1462 - val_acc: 0.9971 - val_categorical_accuracy: 0.0245\nEpoch 2/30\n - 565s - loss: 0.0192 - acc: 0.9970 - categorical_accuracy: 0.0713 - val_loss: 0.0919 - val_acc: 0.9971 - val_categorical_accuracy: 0.0304\nEpoch 3/30\n - 559s - loss: 0.0166 - acc: 0.9971 - categorical_accuracy: 0.0783 - val_loss: 0.0503 - val_acc: 0.9971 - val_categorical_accuracy: 0.0248\nEpoch 4/30\n - 552s - loss: 0.0153 - acc: 0.9971 - categorical_accuracy: 0.0843 - val_loss: 0.0295 - val_acc: 0.9971 - val_categorical_accuracy: 0.0299\nEpoch 5/30\n - 569s - loss: 0.0144 - acc: 0.9971 - categorical_accuracy: 0.0856 - val_loss: 0.0200 - val_acc: 0.9971 - val_categorical_accuracy: 0.0274\nEpoch 6/30\n - 563s - loss: 0.0140 - acc: 0.9971 - categorical_accuracy: 0.0838 - val_loss: 0.0165 - val_acc: 0.9971 - val_categorical_accuracy: 0.0346\nEpoch 7/30\n - 550s - loss: 0.0136 - acc: 0.9971 - categorical_accuracy: 0.0900 - val_loss: 0.0152 - val_acc: 0.9971 - val_categorical_accuracy: 0.0262\nEpoch 8/30\n - 549s - loss: 0.0134 - acc: 0.9972 - categorical_accuracy: 0.0884 - val_loss: 0.0149 - val_acc: 0.9971 - val_categorical_accuracy: 0.0270\nEpoch 9/30\n - 549s - loss: 0.0134 - acc: 0.9971 - categorical_accuracy: 0.0905 - val_loss: 0.0150 - val_acc: 0.9971 - val_categorical_accuracy: 0.0299\nEpoch 10/30\n - 542s - loss: 0.0132 - acc: 0.9971 - categorical_accuracy: 0.0925 - val_loss: 0.0149 - val_acc: 0.9971 - val_categorical_accuracy: 0.0248\nEpoch 11/30\n - 548s - loss: 0.0131 - acc: 0.9972 - categorical_accuracy: 0.0915 - val_loss: 0.0149 - val_acc: 0.9971 - val_categorical_accuracy: 0.0242\nEpoch 12/30\n - 545s - loss: 0.0131 - acc: 0.9972 - categorical_accuracy: 0.0929 - val_loss: 0.0151 - val_acc: 0.9971 - val_categorical_accuracy: 0.0266\nEpoch 13/30\n - 540s - loss: 0.0130 - acc: 0.9972 - 
categorical_accuracy: 0.0938 - val_loss: 0.0150 - val_acc: 0.9971 - val_categorical_accuracy: 0.0251\nEpoch 14/30\n - 549s - loss: 0.0130 - acc: 0.9972 - categorical_accuracy: 0.0965 - val_loss: 0.0151 - val_acc: 0.9971 - val_categorical_accuracy: 0.0238\nEpoch 15/30\n - 548s - loss: 0.0130 - acc: 0.9972 - categorical_accuracy: 0.0986 - val_loss: 0.0151 - val_acc: 0.9971 - val_categorical_accuracy: 0.0303\nEpoch 00015: early stopping\n" ] ], [ [ "### Complete model graph loss", "_____no_output_____" ] ], [ [ "sns.set_style(\"whitegrid\")\nfig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharex='col', figsize=(20,7))\n\n\nax1.plot(history.history['loss'], label='Train loss')\nax1.plot(history.history['val_loss'], label='Validation loss')\nax1.legend(loc='best')\nax1.set_title('Loss')\n\nax2.plot(history.history['acc'], label='Train Accuracy')\nax2.plot(history.history['val_acc'], label='Validation accuracy')\nax2.legend(loc='best')\nax2.set_title('Accuracy')\n\nax3.plot(history.history['categorical_accuracy'], label='Train Cat Accuracy')\nax3.plot(history.history['val_categorical_accuracy'], label='Validation Cat Accuracy')\nax3.legend(loc='best')\nax3.set_title('Cat Accuracy')\n\nplt.xlabel('Epochs')\nsns.despine()\nplt.show()", "_____no_output_____" ] ], [ [ "### Find best threshold value", "_____no_output_____" ] ], [ [ "lastFullValPred = np.empty((0, N_CLASSES))\nlastFullValLabels = np.empty((0, N_CLASSES))\n\nfor i in range(STEP_SIZE_VALID+1):\n im, lbl = next(valid_generator)\n scores = model.predict(im, batch_size=valid_generator.batch_size)\n lastFullValPred = np.append(lastFullValPred, scores, axis=0)\n lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)\n \nprint(lastFullValPred.shape, lastFullValLabels.shape)", "(27309, 1103) (27309, 1103)\n" ], [ "def find_best_fixed_threshold(preds, targs, do_plot=True):\n score = []\n thrs = np.arange(0, 0.5, 0.01)\n for thr in thrs:\n score.append(custom_f2(targs, (preds > thr).astype(int)))\n score = np.array(score)\n pm = score.argmax()\n best_thr, best_score = thrs[pm], score[pm].item()\n print(f'thr={best_thr:.3f}', f'F2={best_score:.3f}')\n if do_plot:\n plt.plot(thrs, score)\n plt.vlines(x=best_thr, ymin=score.min(), ymax=score.max())\n plt.text(best_thr+0.03, best_score-0.01, f'$F_{2}=${best_score:.3f}', fontsize=14);\n plt.show()\n return best_thr, best_score\n\nthreshold, best_score = find_best_fixed_threshold(lastFullValPred, lastFullValLabels, do_plot=True)", "thr=0.050 F2=0.211\n" ] ], [ [ "### Apply model to test set and output predictions", "_____no_output_____" ] ], [ [ "test_generator.reset()\nSTEP_SIZE_TEST = test_generator.n//test_generator.batch_size\npreds = model.predict_generator(test_generator, steps=STEP_SIZE_TEST)", "_____no_output_____" ], [ "predictions = []\nfor pred_ar in preds:\n valid = []\n for idx, pred in enumerate(pred_ar):\n if pred > threshold:\n valid.append(idx)\n if len(valid) == 0:\n valid.append(np.argmax(pred_ar))\n predictions.append(valid)", "_____no_output_____" ], [ "filenames = test_generator.filenames\nlabel_map = {valid_generator.class_indices[k] : k for k in valid_generator.class_indices}\n\nresults = pd.DataFrame({'id':filenames, 'attribute_ids':predictions})\nresults['id'] = results['id'].map(lambda x: str(x)[:-4])\nresults['attribute_ids'] = results['attribute_ids'].apply(lambda x: list(map(label_map.get, x)))\nresults[\"attribute_ids\"] = results[\"attribute_ids\"].apply(lambda x: ' '.join(x))\nresults.to_csv('submission.csv',index=False)\nresults.head(10)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
e7fee190429ebb4a0209aa72bf1a6d1c285610b7
773,708
ipynb
Jupyter Notebook
09-Civil_Air_Patrol_and_georegistration.ipynb
bwsi-remote-sensing-2020/12-Civil_Air_Patrol_and_Georegistration
79ff884456d302bcf105bc8da1cd4b4f49285430
[ "FSFULLR", "FSFUL" ]
1
2021-07-01T10:10:54.000Z
2021-07-01T10:10:54.000Z
09-Civil_Air_Patrol_and_georegistration.ipynb
bwsi-remote-sensing-2020/12-Civil_Air_Patrol_and_Georegistration
79ff884456d302bcf105bc8da1cd4b4f49285430
[ "FSFULLR", "FSFUL" ]
null
null
null
09-Civil_Air_Patrol_and_georegistration.ipynb
bwsi-remote-sensing-2020/12-Civil_Air_Patrol_and_Georegistration
79ff884456d302bcf105bc8da1cd4b4f49285430
[ "FSFULLR", "FSFUL" ]
1
2020-07-06T20:13:39.000Z
2020-07-06T20:13:39.000Z
1,184.851455
727,828
0.953305
[ [ [ "# Civil Air Patrol and Georegistration\n\nIn the previous session, we saw how a series of 2D images taken of a 3D scene can be used to recover the 3D information, by exploiting geometric constraints of the cameras. Now the question is, how do we take this technique and apply it in a disaster response scenario?\n\nWe are going to look at a specific case study, using images from the Low Altitude Disaster Imagery (LADI) dataset, taken by the Civil Air Patrol (CAP). As we work with this dataset, keep in mind the two major questions from the previous lecture:\n\n- _What_ is in an image (e.g. debris, buildings, etc.)?\n- _Where_ are these things located _in 3D space_ ?", "_____no_output_____" ], [ "## Civil Air Patrol\nCivil Air Patrol (CAP) is the civilian auxiliary of the United States Air Force (USAF). The origins of CAP date back to the pre-World War II era. As the Axis powers became a growing threat to the world, civilian aviators in the United States feared that the government would shut down general aviation as a precautionary measure. These aviators thus had to prove to the federal government that civilian aviation was not only not a danger, but actually a benefit to the war effort. \n\nAs a result of these efforts, two separate programs were created. One was a Civilian Pilot Training Program, intended to increase the available people that could operate an aircraft should the need to deploy additional troops arise. The second actually called for the organization of civilian aviators and opened the door to the creation of CAP. \n\nOnce the United States entered WWII proper, CAP began to embark a plethora of activities, some of which are still practiced today. They continued to do cadet education programs. They also began patrolling the coasts and borders. Finally, they started in 1942 conducting search and rescue (SAR) missions. These missions were a resounding success, and one of the main components of CAP today.\n\nCAP has five congressionally mandated missions:\n\n(1) To provide an organization to—\n(A) encourage and aid citizens of the United States in contributing their efforts, services, and resources in developing aviation and in maintaining air supremacy; and\n(B) encourage and develop by example the voluntary contribution of private citizens to the public welfare.\n\n(2) To provide aviation education and training especially to its senior and cadet members.\n\n(3) To encourage and foster civil aviation in local communities.\n\n(4) To provide an organization of private citizens with adequate facilities to assist in meeting local and national emergencies.\n\n(5) To assist the Department of the Air Force in fulfilling its noncombat programs and missions.\n\nsource: https://www.law.cornell.edu/uscode/text/36/40302\n\nCAP's main series of missions revolve around emergency response. CAP is involved in roughly 85% of all SAR missions in the United States and its territories. After natural disasters, CAP is responsible for assessing damage in affected communities, delivering supplies, providing transportation, in addition to its usual SAR missions. \n\nhttps://kvia.com/health/2020/06/18/el-paso-civil-air-patrol-flying-virus-tests-to-labs-in-money-saving-effort-for-texas/\n\nhttps://www.southernminn.com/article_2c5739a5-826f-53bb-a658-922fb1aa1627.html\n\nPart of their emergency programming is taking aerial imagery of affected areas. This imagery is the highest resolution, most timely imagery that we have available of a post-disaster situation. 
Even the highest-resolution satellite imagery is often limited in its geographic coverage, not very timely, or occluded by clouds. These are images taken of Puerto Rico after Hurricane Maria in 2017.\n\n<img src=\"notebook_images/A0008AP-_932ec345-75a9-4005-9879-da06ba0af37e.jpg\" width=\"500\" />\n\n<img src=\"notebook_images/A0016-52_e71b5e09-ec3c-4ea6-8ac8-b9d1e4b714cb.jpg\" width=\"500\" />\n\n<img src=\"notebook_images/A0016-54_f5273b60-dec4-4617-8f01-d67f16001dcb.jpg\" width=\"500\" />\n\nCAP has taken hundreds of thousands of images of disaster-affected areas in the past decades. And yet, even though it is some of the best imagery we have access to, it is rarely if ever used in practice. _Why?_\n\n## The LADI dataset\nPart of the effort to make CAP imagery more useful is making more sense of the content of the images. To that end, researchers at MIT Lincoln Laboratory released the Low Altitude Disaster Imagery (LADI) dataset. This dataset contains hundreds of thousands of CAP images with crowdsourced labels corresponding to infrastructure, environment, and damage categories. This begins to answer the first of the two questions we set out initially. We'll start working on these labels tomorrow. For now, we will focus solely on the images themselves.\n\n<img src=\"notebook_images/labels.png\" width=\"500\" />\n\nWhat are some of the limitations of this dataset?", "### Exercise\n\nImagine you have acquired $200,000 to implement some improvement to the way CAP takes aerial imagery. Hurricane season starts in five months, so whatever improvements you propose need to be implemented by then. Separate into your breakout rooms and answer the following questions:\n- What specific hurdles to using CAP images do you want to address? Identify at least two.\n- Design a proposal to address the challenges you identified above, taking into account the budget and time constraints. Improvements can be of any sort (technical, political, social, etc.).\n- What are the advantages and disadvantages of implementing your proposal?\n- Identify at least three different stakeholder groups in this situation. What are their specific needs? How does your proposal address these needs? How does your proposal fall short?\n- Draw out a budget breakdown and a timeline, as well as a breakdown of which stakeholders you are prioritizing and why. Prepare to present these at 1:30pm.", "## 3D Reconstruction in a Real World Reference Frame\n\nIf we want to answer the second of our two guiding questions, we must be able to translate between where something is in an image and its location in real-world coordinates. Let's take stock of the tools we have so far. We spent a good amount of time discussing structure from motion as a way to reconstruct a 3D scene from 2D images. Recall the limitations of this approach:\n- There must be more than one image in the sequence.\n- Sequential images need enough overlap that they share common features.\n- At least one pair of sequential images must have sufficient translation between them so that the problem is not ill-posed.\n- The reconstruction is given in an arbitrary reference frame, up to scale.\n\nWhat does that last point mean? The arbitrary reference frame part refers to the fact that the origin and the axes are aligned with the first camera. The up to scale part means that all distances are preserved only up to a factor $\lambda$. 
Therefore the scene retains the general shape, but the size of the scene is not conserved. Without additional information, it is impossible to know how the reconstructed scene relates to any other reference frame, and translating the reconstruction to real world coordinates is impossible.\n\nHowever, recall that we do typically have at least a coarse estimate of the camera's GPS coordinates, therefore we have estimates of the distances between sequential cameras. Consider a reconstruction of just two images. Then a good estimate of $\\lambda$ is:\n\n$\\lambda = \\frac{D_{GPS}}{D_{reconstruction}}$\n\nThis is slightly more complicated for more than two images. Typically, a solver will initialize the camera positions at their GPS coordinates and use bundle adjustment to correct the errors in the GPS measurements, although certainly there's more than one way to do this.\n\nLet's give this a shot and see what happens! As it so happens, OpenSfM is already equipped to handle GPS coordinates. ", "_____no_output_____" ] ], [ [ "import sys\nimport open3d as o3d\nimport json\nimport numpy as np\nimport pandas as pd\nfrom matplotlib import pyplot as plt\nimport cv2\nimport os", "_____no_output_____" ], [ "# Take initial guess of intrinsic parameters through metadata\n!opensfm extract_metadata CAP_sample_1\n\n# Detect features points \n!opensfm detect_features CAP_sample_1\n\n# Match feature points across images\n!opensfm match_features CAP_sample_1\n\n# This creates \"tracks\" for the features. That is to say, if a feature in image 1 is matched with one in image 2,\n# and in turn that one is matched with one in image 3, then it links the matches between 1 and 3. \n!opensfm create_tracks CAP_sample_1\n\n# Calculates the essential matrix, the camera pose and the reconstructed feature points\n!opensfm reconstruct CAP_sample_1\n", "2020-07-17 15:50:24,089 INFO: Loading existing EXIF for image_url_pr_10_13_sample_12.jpg\n2020-07-17 15:50:24,090 INFO: Loading existing EXIF for image_url_pr_10_13_sample_11.jpg\n2020-07-17 15:50:24,090 INFO: Loading existing EXIF for image_url_pr_10_13_sample_08.jpg\n2020-07-17 15:50:24,091 INFO: Loading existing EXIF for image_url_pr_10_13_sample_07.jpg\n2020-07-17 15:50:24,092 INFO: Loading existing EXIF for image_url_pr_10_13_sample_13.jpg\n2020-07-17 15:50:26,069 INFO: Skip recomputing ROOT_HAHOG features for image image_url_pr_10_13_sample_12.jpg\n2020-07-17 15:50:26,070 INFO: Skip recomputing ROOT_HAHOG features for image image_url_pr_10_13_sample_11.jpg\n2020-07-17 15:50:26,070 INFO: Skip recomputing ROOT_HAHOG features for image image_url_pr_10_13_sample_08.jpg\n2020-07-17 15:50:26,070 INFO: Skip recomputing ROOT_HAHOG features for image image_url_pr_10_13_sample_07.jpg\n2020-07-17 15:50:26,070 INFO: Skip recomputing ROOT_HAHOG features for image image_url_pr_10_13_sample_13.jpg\n2020-07-17 15:50:28,043 INFO: Matching 6 image pairs\n2020-07-17 15:50:28,061 INFO: Computing pair matching with 1 processes\n2020-07-17 15:50:28,096 DEBUG: No segmentation for image_url_pr_10_13_sample_07.jpg, no features masked.\n2020-07-17 15:50:28,098 DEBUG: No segmentation for image_url_pr_10_13_sample_08.jpg, no features masked.\n2020-07-17 15:50:28,677 DEBUG: Matching image_url_pr_10_13_sample_07.jpg and image_url_pr_10_13_sample_08.jpg. 
Matcher: FLANN (symmetric) T-desc: 0.577 T-robust: 0.002 T-total: 0.579 Matches: 769 Robust: 738 Success: True\n2020-07-17 15:50:28,696 DEBUG: No segmentation for image_url_pr_10_13_sample_11.jpg, no features masked.\n2020-07-17 15:50:29,065 DEBUG: Matching image_url_pr_10_13_sample_07.jpg and image_url_pr_10_13_sample_11.jpg. Matcher: FLANN (symmetric) T-desc: 0.369 Matches: FAILED\n2020-07-17 15:50:29,065 DEBUG: Image image_url_pr_10_13_sample_07.jpg matches: 1 out of 2\n2020-07-17 15:50:29,082 DEBUG: No segmentation for image_url_pr_10_13_sample_12.jpg, no features masked.\n2020-07-17 15:50:29,474 DEBUG: Matching image_url_pr_10_13_sample_11.jpg and image_url_pr_10_13_sample_12.jpg. Matcher: FLANN (symmetric) T-desc: 0.386 T-robust: 0.004 T-total: 0.391 Matches: 1747 Robust: 1734 Success: True\n2020-07-17 15:50:29,492 DEBUG: No segmentation for image_url_pr_10_13_sample_13.jpg, no features masked.\n2020-07-17 15:50:29,871 DEBUG: Matching image_url_pr_10_13_sample_11.jpg and image_url_pr_10_13_sample_13.jpg. Matcher: FLANN (symmetric) T-desc: 0.378 T-robust: 0.002 T-total: 0.379 Matches: 552 Robust: 513 Success: True\n2020-07-17 15:50:29,872 DEBUG: Image image_url_pr_10_13_sample_11.jpg matches: 2 out of 2\n2020-07-17 15:50:30,048 DEBUG: Matching image_url_pr_10_13_sample_08.jpg and image_url_pr_10_13_sample_11.jpg. Matcher: FLANN (symmetric) T-desc: 0.176 Matches: FAILED\n2020-07-17 15:50:30,048 DEBUG: Image image_url_pr_10_13_sample_08.jpg matches: 0 out of 1\n2020-07-17 15:50:30,245 DEBUG: Matching image_url_pr_10_13_sample_12.jpg and image_url_pr_10_13_sample_13.jpg. Matcher: FLANN (symmetric) T-desc: 0.189 T-robust: 0.004 T-total: 0.194 Matches: 1716 Robust: 1704 Success: True\n2020-07-17 15:50:30,246 DEBUG: Image image_url_pr_10_13_sample_12.jpg matches: 1 out of 1\n2020-07-17 15:50:30,246 DEBUG: Image image_url_pr_10_13_sample_13.jpg matches: 0 out of 0\n2020-07-17 15:50:30,246 INFO: Matched 6 pairs for 5 ref_images (perspective-perspective: 6) in 2.202870965999864 seconds (0.36714523433329305 seconds/pair).\n2020-07-17 15:50:32,311 INFO: reading features\n2020-07-17 15:50:32,367 DEBUG: Merging features onto tracks\n2020-07-17 15:50:32,414 DEBUG: Good tracks: 3429\n2020-07-17 15:50:34,516 INFO: Starting incremental reconstruction\n2020-07-17 15:50:34,558 INFO: Starting reconstruction with image_url_pr_10_13_sample_11.jpg and image_url_pr_10_13_sample_12.jpg\n2020-07-17 15:50:34,597 INFO: Two-view reconstruction inliers: 1748 / 1748\n2020-07-17 15:50:34,788 INFO: Triangulated: 1551\n2020-07-17 15:50:34,812 DEBUG: Ceres Solver Report: Iterations: 3, Initial cost: 3.447386e+02, Final cost: 3.387344e+02, Termination: CONVERGENCE\n2020-07-17 15:50:34,971 DEBUG: Ceres Solver Report: Iterations: 3, Initial cost: 3.402295e+02, Final cost: 3.387048e+02, Termination: CONVERGENCE\nAlign plane: [ 0.03595051 -0.99933397 0.00625842 0. 
]\n2020-07-17 15:50:35,188 DEBUG: Ceres Solver Report: Iterations: 16, Initial cost: 2.366617e+01, Final cost: 1.747716e+01, Termination: CONVERGENCE\n2020-07-17 15:50:35,195 INFO: Removed outliers: 0\n2020-07-17 15:50:35,197 INFO: -------------------------------------------------------\n2020-07-17 15:50:35,212 INFO: image_url_pr_10_13_sample_13.jpg resection inliers: 766 / 769\n2020-07-17 15:50:35,277 DEBUG: Ceres Solver Report: Iterations: 4, Initial cost: 6.813228e+01, Final cost: 5.968688e+01, Termination: CONVERGENCE\n2020-07-17 15:50:35,277 INFO: Adding image_url_pr_10_13_sample_13.jpg to the reconstruction\n2020-07-17 15:50:35,404 INFO: Re-triangulating\nAlign plane: [-1.64391829e-01 8.28579846e-02 9.82908887e-01 3.39743057e-14]\n2020-07-17 15:50:36,629 DEBUG: Ceres Solver Report: Iterations: 72, Initial cost: 1.071345e+02, Final cost: 5.625512e+01, Termination: CONVERGENCE\n2020-07-17 15:50:37,115 DEBUG: Ceres Solver Report: Iterations: 11, Initial cost: 5.525121e+01, Final cost: 5.462564e+01, Termination: CONVERGENCE\n2020-07-17 15:50:37,130 INFO: Removed outliers: 0\n2020-07-17 15:50:37,131 INFO: -------------------------------------------------------\n2020-07-17 15:50:37,135 INFO: Some images can not be added\n2020-07-17 15:50:37,135 INFO: -------------------------------------------------------\nAlign plane: [-1.14533010e-01 1.24543053e-01 9.85581665e-01 2.00914244e-14]\n2020-07-17 15:50:37,788 DEBUG: Ceres Solver Report: Iterations: 31, Initial cost: 6.335694e+01, Final cost: 5.542189e+01, Termination: CONVERGENCE\n2020-07-17 15:50:37,803 INFO: Removed outliers: 0\n2020-07-17 15:50:37,835 INFO: {'points_count': 2519, 'cameras_count': 3, 'observations_count': 5860, 'average_track_length': 2.3263199682413656, 'average_track_length_notwo': 3.0}\n2020-07-17 15:50:37,835 INFO: Starting reconstruction with image_url_pr_10_13_sample_07.jpg and image_url_pr_10_13_sample_08.jpg\n2020-07-17 15:50:37,897 INFO: Two-view reconstruction inliers: 737 / 738\n2020-07-17 15:50:38,081 INFO: Triangulated: 738\n2020-07-17 15:50:38,094 DEBUG: Ceres Solver Report: Iterations: 2, Initial cost: 4.094747e+02, Final cost: 4.090971e+02, Termination: CONVERGENCE\n2020-07-17 15:50:38,155 DEBUG: Ceres Solver Report: Iterations: 2, Initial cost: 4.091877e+02, Final cost: 4.090466e+02, Termination: CONVERGENCE\nAlign plane: [ 0.0163867 -0.99986469 -0.00144285 0. 
]\n2020-07-17 15:50:38,354 DEBUG: Ceres Solver Report: Iterations: 28, Initial cost: 1.971526e+01, Final cost: 1.242737e+01, Termination: CONVERGENCE\n2020-07-17 15:50:38,357 INFO: Removed outliers: 1\n2020-07-17 15:50:38,358 INFO: -------------------------------------------------------\nAlign plane: [-5.31242776e-02 3.42329121e-02 9.98000961e-01 -8.61464469e-15]\n2020-07-17 15:50:38,493 DEBUG: Ceres Solver Report: Iterations: 11, Initial cost: 1.899088e+01, Final cost: 1.239818e+01, Termination: CONVERGENCE\n2020-07-17 15:50:38,497 INFO: Removed outliers: 0\n2020-07-17 15:50:38,505 INFO: {'points_count': 737, 'cameras_count': 2, 'observations_count': 1474, 'average_track_length': 2.0, 'average_track_length_notwo': -1}\n2020-07-17 15:50:38,506 INFO: Reconstruction 0: 3 images, 2519 points\n2020-07-17 15:50:38,506 INFO: Reconstruction 1: 2 images, 737 points\n2020-07-17 15:50:38,506 INFO: 2 partial reconstructions in total.\n" ], [ "# adding the --all command to include all partial reconstructions\n!opensfm export_ply --all CAP_sample_1\n", "_____no_output_____" ], [ "import open3d as o3d\nfrom open3d import JVisualizer\n\n# it turns out that we have two partial reconstructions from the reconstruct command\n# open3d actually has a very convenient way of combining point clouds, just by using the + operator\npcd = o3d.io.read_point_cloud(\"CAP_sample_1/reconstruction_files/reconstruction_0.ply\")\npcd += o3d.io.read_point_cloud(\"CAP_sample_1/reconstruction_files/reconstruction_1.ply\")\nvisualizer = JVisualizer()\nvisualizer.add_geometry(pcd)\nvisualizer.show()", "_____no_output_____" ] ], [ [ "So what are we seeing? We see two collections of points, both mostly coplanar internally (which we expect, given that this is a mostly planar scene), but the two sets are not aligned with each other! Let's look a bit more closely...", "_____no_output_____" ] ], [ [ "# here, we're just going to plot the z (altitude) values of the reconstructed points\npoint_coord = np.asarray(pcd.points)\nplt.hist(point_coord[:, 2].ravel())\nplt.show()", "_____no_output_____" ] ], [ [ "So not only are the points misaligned, but we're getting wild altitude values! **What's going on?**", "_____no_output_____" ], [ "### Exercise\nLet's make a critical assumption: all of the image coordinates (the GPS coordinates of the camera as it takes an image) all lie on a plane (in the mathematical sense). Answer the following questions:\n- How many points are needed to specify a (mathematical) plane?\n- In addition to the number of points, what other requirement do those points need?\n- Look at the visualization above. Do the camera points fulfill that requirement?\n- One way to resolve the ambiguity is to determine what direction is \"up\" (i.e. pointing away from the center of the Earth). Propose a solution to determine the up-vector. You can either assume the same setup that we currently have or propose new sensors/other setups.", "_____no_output_____" ], [ "<details>\n <summary>CLICK HERE TO SEE THE PROPOSED SOLUTION</summary>\n We're going to make a fair (but limited) assumption that the ground is mostly flat. It turns out we can fit a plane through the reconstructed ground points and find a direction perpendicular to the plane (called the plane normal). If the ground is flat, then the normal should be close enough to the up direction. Note that this assumption does not hold for an area with a lot of inclination. 
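To make this concrete, here is a minimal numpy sketch of the idea (purely illustrative, not how OpenSfM implements it internally): fit a plane to the reconstructed points and take the plane normal as the estimated up direction. The `camera_centers` argument is a hypothetical input used only to pick which of the two possible normal signs counts as up, assuming the cameras sit above the ground.

```python
import numpy as np

def estimate_up_vector(points, camera_centers):
    # points: (N, 3) reconstructed 3D points; camera_centers: (M, 3) camera positions.
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is perpendicular
    # to the best-fit plane through the centered points.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1] / np.linalg.norm(vt[-1])
    # Choose the sign that points from the ground towards the cameras.
    if np.dot(camera_centers.mean(axis=0) - centroid, normal) < 0:
        normal = -normal
    return normal
```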
In practice, we would most likely augment this with a Digital Elevation Model (DEM)\n\n</details>\n\n<img src=\"notebook_images/plane_normal.png\" width=\"500\" />\n\nTo implement the proposed solution, we can go to the CAP_sample_1/config.yaml file and modify \"align_orientation_prior\" from \"horizontal\" to \"plane_based\". Afterwards, we run the previous commands as usual.", "_____no_output_____" ] ], [ [ "# This creates \"tracks\" for the features. That is to say, if a feature in image 1 is matched with one in image 2,\n# and in turn that one is matched with one in image 3, then it links the matches between 1 and 3. \n!opensfm create_tracks CAP_sample_1\n\n# Calculates the essential matrix, the camera pose and the reconstructed feature points\n!opensfm reconstruct CAP_sample_1\n\n# adding the --all command to include all partial reconstructions\n!opensfm export_ply --all CAP_sample_1", "2020-07-17 15:50:42,846 INFO: reading features\n2020-07-17 15:50:42,900 DEBUG: Merging features onto tracks\n2020-07-17 15:50:42,943 DEBUG: Good tracks: 3429\n2020-07-17 15:50:45,033 INFO: Starting incremental reconstruction\n2020-07-17 15:50:45,081 INFO: Starting reconstruction with image_url_pr_10_13_sample_11.jpg and image_url_pr_10_13_sample_12.jpg\n2020-07-17 15:50:45,119 INFO: Two-view reconstruction inliers: 1748 / 1748\n2020-07-17 15:50:45,316 INFO: Triangulated: 1551\n2020-07-17 15:50:45,343 DEBUG: Ceres Solver Report: Iterations: 3, Initial cost: 3.447386e+02, Final cost: 3.387344e+02, Termination: CONVERGENCE\n2020-07-17 15:50:45,494 DEBUG: Ceres Solver Report: Iterations: 3, Initial cost: 3.402295e+02, Final cost: 3.387048e+02, Termination: CONVERGENCE\nAlign plane: [ 0.03595051 -0.99933397 0.00625842 0. ]\n2020-07-17 15:50:45,725 DEBUG: Ceres Solver Report: Iterations: 16, Initial cost: 2.366617e+01, Final cost: 1.747716e+01, Termination: CONVERGENCE\n2020-07-17 15:50:45,733 INFO: Removed outliers: 0\n2020-07-17 15:50:45,734 INFO: -------------------------------------------------------\n2020-07-17 15:50:45,751 INFO: image_url_pr_10_13_sample_13.jpg resection inliers: 766 / 769\n2020-07-17 15:50:45,786 DEBUG: Ceres Solver Report: Iterations: 4, Initial cost: 6.813228e+01, Final cost: 5.968688e+01, Termination: CONVERGENCE\n2020-07-17 15:50:45,787 INFO: Adding image_url_pr_10_13_sample_13.jpg to the reconstruction\n2020-07-17 15:50:45,911 INFO: Re-triangulating\nAlign plane: [-1.64391829e-01 8.28579845e-02 9.82908887e-01 1.68926384e-14]\n2020-07-17 15:50:47,131 DEBUG: Ceres Solver Report: Iterations: 72, Initial cost: 1.071345e+02, Final cost: 5.625512e+01, Termination: CONVERGENCE\n2020-07-17 15:50:47,593 DEBUG: Ceres Solver Report: Iterations: 11, Initial cost: 5.525121e+01, Final cost: 5.462564e+01, Termination: CONVERGENCE\n2020-07-17 15:50:47,608 INFO: Removed outliers: 0\n2020-07-17 15:50:47,610 INFO: -------------------------------------------------------\n2020-07-17 15:50:47,613 INFO: Some images can not be added\n2020-07-17 15:50:47,614 INFO: -------------------------------------------------------\nAlign plane: [-1.14533010e-01 1.24543053e-01 9.85581665e-01 1.94621923e-14]\n2020-07-17 15:50:48,227 DEBUG: Ceres Solver Report: Iterations: 31, Initial cost: 6.335694e+01, Final cost: 5.542189e+01, Termination: CONVERGENCE\n2020-07-17 15:50:48,243 INFO: Removed outliers: 0\n2020-07-17 15:50:48,275 INFO: {'points_count': 2519, 'cameras_count': 3, 'observations_count': 5860, 'average_track_length': 2.3263199682413656, 'average_track_length_notwo': 3.0}\n2020-07-17 15:50:48,275 INFO: 
Starting reconstruction with image_url_pr_10_13_sample_07.jpg and image_url_pr_10_13_sample_08.jpg\n2020-07-17 15:50:48,337 INFO: Two-view reconstruction inliers: 737 / 738\n2020-07-17 15:50:48,491 INFO: Triangulated: 738\n2020-07-17 15:50:48,502 DEBUG: Ceres Solver Report: Iterations: 2, Initial cost: 4.094747e+02, Final cost: 4.090971e+02, Termination: CONVERGENCE\n2020-07-17 15:50:48,564 DEBUG: Ceres Solver Report: Iterations: 2, Initial cost: 4.091877e+02, Final cost: 4.090466e+02, Termination: CONVERGENCE\nAlign plane: [ 0.0163867 -0.99986469 -0.00144285 0. ]\n2020-07-17 15:50:48,738 DEBUG: Ceres Solver Report: Iterations: 28, Initial cost: 1.971526e+01, Final cost: 1.242737e+01, Termination: CONVERGENCE\n2020-07-17 15:50:48,742 INFO: Removed outliers: 1\n2020-07-17 15:50:48,742 INFO: -------------------------------------------------------\nAlign plane: [-5.31242776e-02 3.42329121e-02 9.98000961e-01 -3.78115673e-15]\n2020-07-17 15:50:48,824 DEBUG: Ceres Solver Report: Iterations: 11, Initial cost: 1.899088e+01, Final cost: 1.239818e+01, Termination: CONVERGENCE\n2020-07-17 15:50:48,828 INFO: Removed outliers: 0\n2020-07-17 15:50:48,878 INFO: {'points_count': 737, 'cameras_count': 2, 'observations_count': 1474, 'average_track_length': 2.0, 'average_track_length_notwo': -1}\n2020-07-17 15:50:48,878 INFO: Reconstruction 0: 3 images, 2519 points\n2020-07-17 15:50:48,878 INFO: Reconstruction 1: 2 images, 737 points\n2020-07-17 15:50:48,878 INFO: 2 partial reconstructions in total.\n" ] ], [ [ "## Georegistration\n\nThe process of assigning GPS coordinates to individual pixels is called _georegistration_ or _georeferencing_. This requires us to perform a final transformation from pixel coordinates *per each image* to the 3D reconstructed coordinates. Before doing so, it is worthwhile talking a bit about what exactly our 3D coordinate system is. \n\nYou might recall that not all coordinate referece systems lend themselves well to geometric transformations. Specifically, we want our 3D coordinate system to be Cartesian (i.e. three orthogonal, right-handed axes). OpenSfM performs its reconstructions in what is known as a *local tangent plane coordinate system* called *local east, north, up (ENU) coordinates*. The way this works is, you select an origin somewhere in the world (in our case, it is saved in the reference_lla.json file), and you align your axes such that the x-axis is parallel to latitudes and increasing Eastward, the y-axis is parallel to meridians and increasing Northward, and the z-axis is pointing away from the center of the Earth. The image below shows how this works:\n\n<img src=\"notebook_images/enu.png\" width=\"500\" />\n\nIn order to convert from ENU coordinates to geodetic coordinates (i.e. latitude, longitude, altitude), you need to know the origin. ", "_____no_output_____" ] ], [ [ "# Origin of our reconstruction, as given by the reference_lla.json (made from the reconstruction)\nwith open(\"CAP_sample_1/reference_lla.json\", \"r\") as f:\n reference_lla = json.load(f)\n latitude=reference_lla[\"latitude\"]\n longitude=reference_lla[\"longitude\"]\n altitude=reference_lla[\"altitude\"]\n\n# This is the json file that contains the reconstructed feature points\nwith open(\"CAP_sample_1/reconstruction.json\", \"r\") as f:\n reconstructions = json.load(f)", "_____no_output_____" ] ], [ [ "There is a bit of work we need to go through to finalize the georegistration. 
First, we need to match the reconstructed points with the features detected in each image; the tracks.csv file and reconstruction.json can help us do that. The columns of tracks.csv are as follows: image name, track ID (the ID of the reconstructed point), feature ID (the ID of the feature within the image), the *normalized* image coordinates x and y, the normalization factor s, and the color of the feature (R, G, B).", "_____no_output_____" ] ], [ [ "from opensfm.features import denormalized_image_coordinates\n\n# reading the csv\ntracks = pd.read_csv(\"CAP_sample_1/tracks.csv\", sep=\"\\t\", skiprows=1, names=[\"image_name\", \"track_id\", \"feature_id\", \"x\", \"y\", \"s\", \"R\", \"G\", \"B\"])\n\n# we need to denormalize the coordinates to turn them into regular pixel coordinates\nnormalized_coor = tracks[[\"x\", \"y\", \"s\"]]\ndenormalized_coor = denormalized_image_coordinates(normalized_coor.values, 4496, 3000)\n\n# create a new column with the denormalized coordinates\ntracks[\"denorm_x\"] = denormalized_coor[:, 0]\ntracks[\"denorm_y\"] = denormalized_coor[:, 1]", "_____no_output_____" ] ], [ [ "We're going to store the georegistration by creating a new .tif file for every CAP image. As you may recall, .tif files can save not just the pixel data but also the projection information that allows the image to be displayed on top of other map data. There are two parts to doing this:\n- First, we need to create an _orthorectified_ image. Simply put, this is one that is transformed so that it looks as though you are viewing the scene from directly above.\n- Second, we need to add *ground control points* (GCPs) to the orthorectified image. GCPs are correspondences between world coordinates and pixel coordinates.\n\nOnce we add the GCPs, any mapping software can plot the image such that the GCPs are aligned with their underlying coordinates. 
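The implementation cell below carries out both steps for every reconstructed image. The coordinate conversion it relies on is `enu2geodetic` from `pymap3d`, which maps local east/north/up offsets (relative to the origin we loaded from reference_lla.json) back to latitude, longitude, and altitude. A minimal, illustrative call (the offsets here are made-up numbers):

```python
from pymap3d import enu2geodetic

# Hypothetical reconstructed point: 120 m east, -40 m north, 5 m up
# relative to the reconstruction origin loaded from reference_lla.json.
lat, lon, alt = enu2geodetic(120.0, -40.0, 5.0, latitude, longitude, altitude)
print(lat, lon, alt)
```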
", "_____no_output_____" ] ], [ [ "import shutil\nimport gdal, osr\ntry:\n from pymap3d import enu2geodetic\nexcept:\n !pip install pymap3d\n from pymap3d import enu2geodetic\n\nimport random\nfrom skimage import transform\n\nif not os.path.isdir(\"CAP_sample_1/geotiff/\"):\n os.mkdir(\"CAP_sample_1/geotiff/\")\nif not os.path.isdir(\"CAP_sample_1/ortho/\"):\n os.mkdir(\"CAP_sample_1/ortho/\")\n\nfor reconst in reconstructions:\n for shot in reconst[\"shots\"]:\n # some housekeeping\n shot_name = shot.split(\".\")[0]\n img = cv2.imread(\"CAP_sample_1/images/\"+shot)\n shape = img.shape\n \n # here we get the features from the image and their corresponding reconstructed features\n reconst_ids = list(map(int, reconst[\"points\"].keys()))\n tracks_shot = tracks[(tracks[\"image_name\"] == shot) & (tracks[\"track_id\"].isin(reconst_ids))]\n denorm_shot = np.round(tracks_shot[[\"denorm_x\", \"denorm_y\"]].values)\n reconst_shot = np.array([reconst[\"points\"][str(point)][\"coordinates\"] for point in tracks_shot[\"track_id\"]])\n \n # we're going to create an image that is distorted to fit within the world coordinates\n # pix_shot is just the reconstructed feature coordinates offset by some amount so that\n # all coordinates are positive.\n offset = np.min(reconst_shot[:, :2])\n pix_shot = reconst_shot[:, :2]-np.multiply(offset, offset<0)\n \n # transformation for the new orthorectified image\n H, inliers = cv2.findHomography(denorm_shot, pix_shot)\n \n # filtering out points that didn't fit the transformation\n reconst_shot = reconst_shot[inliers.ravel()==1, :]\n denorm_shot = np.round(denorm_shot[inliers.ravel()==1, :])\n pix_shot = np.round(pix_shot[inliers.ravel()==1, :])\n \n # creating the ortho image\n shape = tuple(np.max(pix_shot, axis=0).astype(int))\n ortho_img = cv2.warpPerspective(img, H, shape)\n cv2.imwrite(\"CAP_sample_1/ortho/\" + shot + \"_ortho.jpg\", ortho_img)\n \n # here we convert all of the reconstructed points into lat/lon coordinates\n geo_shot = np.array([enu2geodetic(reconst_shot[i, 0],reconst_shot[i, 1],reconst_shot[i, 2],latitude,longitude,altitude) for i in range(reconst_shot.shape[0])]) \n \n idx = random.sample(range(len(geo_shot)), 10)\n pix_shot_sample = pix_shot[idx, :]\n geo_shot_sample = geo_shot[idx, :]\n \n # creating the Ground Control Points\n orig_fn = \"CAP_sample_1/ortho/\" + shot + \"_ortho.jpg\"\n fn = \"CAP_sample_1/geotiff/\" + shot_name + \"_GCP.tif\"\n \n orig_ds = gdal.Open(orig_fn)\n gdal.GetDriverByName('GTiff').CreateCopy(fn, orig_ds)\n ds = gdal.Open(fn, gdal.GA_Update)\n sr = osr.SpatialReference()\n sr.SetWellKnownGeogCS('WGS84')\n \n gcps = [gdal.GCP(geo_shot_sample[i, 1], geo_shot_sample[i, 0], 0, int(pix_shot_sample[i, 0]), int(pix_shot_sample[i, 1])) for i in range(geo_shot_sample.shape[0])]\n \n ds.SetGCPs(gcps, sr.ExportToWkt())\n \n ds = None\n \n", "Processing /home/jovyan/.cache/pip/wheels/0c/24/19/30c440838faa979cd09d8ac37ae866669b980bfdaed4d0fb91/pymap3d-2.4.1-py3-none-any.whl\nInstalling collected packages: pymap3d\nSuccessfully installed pymap3d-2.4.1\n" ], [ "import rasterio\nimport rasterio.plot\n\nfig = plt.figure(figsize=(15, 15))\nfiles = [os.path.join('CAP_sample_1/geotiff/', f) for f in os.listdir('CAP_sample_1/geotiff/') if f.endswith(\"tif\")]\n\nfor i, file in enumerate(files):\n with rasterio.open(file, \"r\") as dataset:\n# dataset_mask = dataset.read_masks(1)\n# dataset_read = dataset.read(1)\n# rasterio.plot.show(np.ma.masked_where(dataset_mask==0, dataset_read), ax=ax)\n ax = fig.add_subplot(3, 2, i+1)\n 
rasterio.plot.show(dataset, ax=ax)\n ax.axis(\"equal\")", "_____no_output_____" ] ], [ [ "## Exercise\nIn the lesson folder, there is a spreadsheet with CAP images and their coordinates taken on October 13th, 2017. \n- Use geopandas to visualize the coordinates of all the images, and overlay it with some basemap\n- Select an area of those images that looks interesting to you. Use SfM to reconstruct at least 10 images\n- For those 10 images, select at least one and go through the georegistration process. Does the georegistration process yield good alignment with the ground truth? If not, why do you think that is?\n\nI **strongly** encourage you to tackle this as a team! Feel free to divide the tasks up as you see fit. ", "_____no_output_____" ] ] ]
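For the first bullet of the exercise, a possible starting point is sketched below: load the spreadsheet of image coordinates into a GeoDataFrame and plot it on top of a web basemap. The file name and the column names ('latitude', 'longitude') are assumptions, so adjust them to match the actual October 13th, 2017 spreadsheet.

```python
# Hypothetical sketch for visualizing the CAP image coordinates over a basemap.
# The CSV name and its latitude/longitude column names are assumptions.
import pandas as pd
import geopandas as gpd
import contextily as ctx
import matplotlib.pyplot as plt

df = pd.read_csv("CAP_images_2017-10-13.csv")          # hypothetical file name
gdf = gpd.GeoDataFrame(
    df,
    geometry=gpd.points_from_xy(df["longitude"], df["latitude"]),
    crs="EPSG:4326",                                    # GPS coordinates
)

# Reproject to Web Mercator so the points line up with standard web tiles
ax = gdf.to_crs(epsg=3857).plot(figsize=(10, 10), color="red", markersize=10)
ctx.add_basemap(ax)                                     # adds the underlying basemap tiles
plt.show()
```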
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
e7fef0687093065e9c194eacb1ca4c29dd0a24a4
25,956
ipynb
Jupyter Notebook
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
bd8d5391714ad0d8243580b6278c21ba419d83c5
[ "CC0-1.0" ]
null
null
null
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
bd8d5391714ad0d8243580b6278c21ba419d83c5
[ "CC0-1.0" ]
1
2018-12-26T18:11:51.000Z
2018-12-26T18:12:02.000Z
temp/prepper_old.ipynb
esturdivant-usgs/geomorph-working-files
bd8d5391714ad0d8243580b6278c21ba419d83c5
[ "CC0-1.0" ]
null
null
null
42.068071
959
0.642125
[ [ [ "# Pre-process input data for coastal variable extraction\n\nAuthor: Emily Sturdivant; esturdivant@usgs.gov\n\n***\n\nPre-process files to be used in extractor.ipynb (Extract barrier island metrics along transects). See the project [README](https://github.com/esturdivant-usgs/BI-geomorph-extraction/blob/master/README.md) and the Methods Report (Zeigler et al., in review). \n\n\n## Pre-processing steps\n\n1. Pre-created geomorphic features: dunes, shoreline points, armoring.\n2. Inlets\n3. Shoreline\n4. Transects - extend and sort\n5. Transects - tidy\n\n\n## Notes:\nThis process requires some manipulation of the spatial layers by the user. When applicable, instructions are described in this file.\n\n***\n\n## Import modules", "_____no_output_____" ] ], [ [ "import os\nimport sys\nimport pandas as pd\nimport numpy as np\nimport arcpy\nimport matplotlib.pyplot as plt\nimport matplotlib\nmatplotlib.style.use('ggplot')\ntry:\n import core.functions_warcpy as fwa\n import core.functions as fun\nexcept ImportError as err:\n print(\"Looks like you need to install the module to your ArcGIS environment. Please see the README for details.\")\n \nfrom core.setvars import *", "No module named 'CoastalVarExtractor'\nLooks like you need to install the module to your ArcGIS environment.\nTo do so: pip install git+https://github.com/esturdivant-usgs/BI-geomorph-extraction.git\n" ] ], [ [ "If you don't want to formally install the module, you'll need to add the path to the package to your system path: \n\n```python\nmod_path = r\"path\\to\\dir\\BI-geomorph-extraction\" # replace with path to module\nsys.path.insert(0, mod_path)\nimport CoastalVarExtractor.functions_warcpy as fwa\n```", "_____no_output_____" ], [ "## Initialize variables\n\nBased on the project directory, and the site and year you have input, setvars.py will set a bunch of variables as the names of folders, files, and fields. 1) set-up the project folder and paths: ", "_____no_output_____" ] ], [ [ "from core.setvars import *\n\n# Inputs - vector\norig_trans = os.path.join(arcpy.env.workspace, 'DelmarvaS_SVA_LT')\nShorelinePts = os.path.join(home, 'SLpts')\ndlPts = os.path.join(home, 'DLpts')\ndhPts = os.path.join(home, 'DHpts')\n# Inputs - raster\nelevGrid = os.path.join(home, 'DEM')\nelevGrid_5m = os.path.join(home, 'DEM_5m')\nSubType = os.path.join(home, 'FI11_SubType')\nVegType = os.path.join(home, 'FI11_VegType')\nVegDens = os.path.join(home, 'FI11_VegDens')\nGeoSet = os.path.join(home, 'FI11_GeoSet')\nDisMOSH = os.path.join(home, 'FI11_DisMOSH')\n\n# Files to create or modify\narmorLines = os.path.join(home, 'armorLines')\ninletLines = os.path.join(home, 'inletLines')\nSA_bounds = 'SA_bounds'\n\n# Outputs\nextendedTrans = os.path.join(home, 'extTrans')\nextTrans_tidy = os.path.join(home, 'tidyTrans')\nbarrierBoundary = os.path.join(home, 'bndpoly_2sl') \nshoreline = os.path.join(home, 'ShoreBetweenInlets')\n\ntr_w_anthro = os.path.join(home, 'extTrans_wAnthro')", "_____no_output_____" ] ], [ [ "## Dunes and armoring <a name=\"geofeatures\"></a>\nDisplay the points and the DEM in a GIS to check for irregularities. For example, if shoreline points representing a distance less than X m are visually offset from the general shoreline, they should likely be removed. Another red flag is when the positions of dlows and dhighs in relation to the shore are illogical, i.e. dune crests are seaward of dune toes. \n\nIf fill values in the morphology datasets are not -99999, then replace them will Null values. 
If they are -99999, the extractor can accept fill values as long as they match those in the rest of the extractor. It also accepts Null (None or np.nan) values. \n\nThe morphology datasets do not need to be reprojected to UTM because the find_ClosestPt2Trans_snap() function will reproject them if necessary. \n\n#### Replace fill values with Null. \nOnly necessary if the fill values are different from what will be used during the extraction routine to follow (default is -99999).", "_____no_output_____" ] ], [ [ "fwa.ReplaceValueInFC(dhPts, oldvalue=fill, newvalue=None, fields=[\"dhigh_z\"]) # Dhighs\nfwa.ReplaceValueInFC(dlPts, oldvalue=fill, newvalue=None, fields=[\"dlow_z\"]) # Dlows\nfwa.ReplaceValueInFC(ShorelinePts, oldvalue=fill, newvalue=None, fields=[\"slope\"]) # Shoreline", "_____no_output_____" ] ], [ [ "#### Project to UTM if not already. \nIf this happens, we need to change the file name for future processing. ", "_____no_output_____" ], [ "#### If desired, delete dune points with missing Z values. \nNot necessary because you can choose to exclude those points from the beach width calculation. ", "_____no_output_____" ] ], [ [ "# Delete points with fill elevation value from dune crests\nfmapdict = fwa.find_similar_fields('DH', dhPts, fields=['_z'])\narcpy.CopyFeatures_management(dhPts, dhPts+'_orig')\nfwa.DeleteFeaturesByValue(dhPts, [fmapdict['_z']['src']], deletevalue=-99999)\nprint('Deleted dune crest points that will fill elevation values. Original file is saved with the _orig suffix.')\n\n# Delete points with fill elevation value from dune toes\nfmapdict = fwa.find_similar_fields('DL', dlPts, fields=['_z'])\narcpy.CopyFeatures_management(dlPts, dlPts+'_orig')\nfwa.DeleteFeaturesByValue(dlPts, [fmapdict['_z']['src']], deletevalue=-99999)\nprint('Deleted dune toe points that will fill elevation values. Original file is saved with the _orig suffix.')", "_____no_output_____" ] ], [ [ "#### Armoring\nIf the dlows do not capture the entire top-of-beach due to atypical formations caused by anthropogenic modification, you may need to digitize the beachfront armoring. The next code block will generate an empty feature class. Refer to the DEM and orthoimagery. If there is no armoring in the study area, continue. If there is armoring, use the Editing toolbar to add lines to the feature class that trace instances of armoring. Common manifestations of what we call armoring are sandfencing and sandbagging and concrete seawalls. \n\nIf there is no armoring file in the project geodatabase, the extractor script will notify you that it is proceeding without armoring.\n\n*__Requires manipulation in GIS__*", "_____no_output_____" ] ], [ [ "arcpy.CreateFeatureclass_management(home, os.path.basename(armorLines), 'POLYLINE', spatial_reference=utmSR)\nprint(\"{} created. Now manually digitize the shorefront armoring.\".format(armorLines))", "_____no_output_____" ] ], [ [ "## Inlets\nWe also need to manually digitize inlets if an inlet delineation does not already exist. To do, the code below will produce the feature class. After which, use the Editing toolbar to create a line where the oceanside shore meets a tidal inlet. If the study area includes both sides of an inlet, that inlet will be represented by two lines. The inlet lines are use to define the bounds of the oceanside shore, which is also considered the point where the oceanside shore meets the bayside. Inlet lines must intersect the MHW contour. 
\n\nWhat do we do when the study area and not an inlet is the end?\n\n*__Requires manipulation in GIS__*", "_____no_output_____" ] ], [ [ "# manually create lines that correspond to end of land and cross the MHW line (use bndpoly/DEM)\narcpy.CreateFeatureclass_management(home, os.path.basename(inletLines), 'POLYLINE', spatial_reference=utmSR)\nprint(\"{} created. Now we'll stop for you to manually create lines at each inlet.\".format(inletLines))", "_____no_output_____" ] ], [ [ "## Shoreline\nThe shoreline is produced through a combination of the DEM and the shoreline points. The first step converts the DEM to both MTL and MHW contour polygons. Those polygons are combined to produce the full shoreline, which is considered to fall at MHW on the oceanside and MTL on the bayside (to include partially submerged wetland).\n\nIf the study area does not end cleanly at an inlet, create a separate polyline feature class (default name is 'SA_bounds') and add lines that bisect the shoreline; they should look and function like inlet lines. Specify this in the arguments for DEMtoFullShorelinePoly() and CreateShoreBetweenInlets().\n\nAt some small inlets, channel depth may be above MTL. In this case, the script left to its own devices will leave the MTL contour between the two inlet lines. This can be rectified after processing by deleting the mid-inlet features from the temp file 'shore_2split.' ", "_____no_output_____" ] ], [ [ "SA_bounds = 'SA_bounds'\n\nbndpoly = fwa.DEMtoFullShorelinePoly(elevGrid_5m, sitevals['MTL'], sitevals['MHW'], inletLines, ShorelinePts)\nprint('Select features from {} that should not be included in the final shoreline polygon. '.format(bndpoly))", "Creating the MTL contour polgon from the DEM...\nCreating the MHW contour polgon from the DEM...\nCombining the two polygons...\nIsolating the above-MTL portion of the polygon to the bayside...\n\nUser input required! Select extra features in bndpoly for deletion.\n\n Recommended technique: select the polygon/s to keep and then Switch Selection.\n\n" ] ], [ [ "*__Requires display in GIS__*\n\nUser input is required to identify only the areas within the study area and eliminate isolated landmasses that are not. Once the features to delete are selected, either delete in the GIS or run the code below. Make sure the bndpoly variable matches the layer name in the GIS.\n__Do not...__ select the features in ArcGIS and then run DeleteFeatures in this Notebook Python kernel. That will delete the entire feature class. \n\n```\narcpy.DeleteFeatures_management(bndpoly)\n```\n\nThe next step snaps the boundary polygon to the shoreline points anywhere they don't already match and as long as as they are within 25 m of each other. ", "_____no_output_____" ] ], [ [ "bndpoly = 'bndpoly'\nbarrierBoundary = fwa.NewBNDpoly(bndpoly, ShorelinePts, barrierBoundary, '25 METERS', '50 METERS')", "Created: \\\\Mac\\stor\\Projects\\TransectExtraction\\FireIsland2010\\FireIsland2010.gdb\\bndpoly_2sl\n" ], [ "shoreline = fwa.CreateShoreBetweenInlets(barrierBoundary, inletLines, shoreline, \n ShorelinePts, proj_code)", "Splitting \\\\Mac\\stor\\Projects\\TransectExtraction\\FireIsland2010\\FireIsland2010.gdb\\bndpoly_2sl_edited at inlets...\nPreserving only those line segments that intersect shoreline points...\nDissolving the line to create \\\\Mac\\stor\\Projects\\TransectExtraction\\FireIsland2010\\FireIsland2010.gdb\\ShoreBetweenInlets...\n" ] ], [ [ "After this step, you'll want to make sure the shoreline looks okay. 
There should be only one line segment for each stretch of shore between two inlets. Segments may be incorrectly deleted if the shoreline points are missing in the area. Segments may be incorrectly preserved if they are intersect a shoreline point. To rectify, either perform manual editing or rerun this code with modifications. ", "_____no_output_____" ], [ "## Transects - extend, sort, and tidy\n\nCreate extendedTrans, NASC transects for the study area extended to cover the island, with gaps filled, and sorted in the field sort_ID.\n\n### 1. Extend the transects and use a copy of the lines to fill alongshore gaps", "_____no_output_____" ] ], [ [ "# Delete transects over 200 m outside of the study area.\nif input(\"Need to remove extra transects? 'y' if barrierBoundary should be used to select. \") == 'y':\n fwa.RemoveTransectsOutsideBounds(orig_trans, barrierBoundary)\ntrans_extended = fwa.ExtendLine(orig_trans, os.path.join(arcpy.env.scratchGDB, 'trans_ext_temp'), extendlength, proj_code)\ntrans_presort = fwa.CopyAndWipeFC(trans_extended, os.path.join(arcpy.env.scratchGDB, 'trans_presort_temp'), ['sort_ID'])\nprint(\"MANUALLY: use groups of existing transects in new FC '{}' to fill gaps.\".format(trans_presort))", "Need to remove extra transects? 'y' if barrierBoundary exists and should be used to select. y\n\\\\Mac\\stor\\Projects\\TransectExtraction\\Fisherman2014\\Fisherman2014.gdb\\DelmarvaS_SVA_LT is already projected in UTM.\nMANUALLY: use groups of existing transects in new FC '\\\\Mac\\stor\\Projects\\TransectExtraction\\Fisherman2014\\scratch.gdb\\trans_presort_temp' to fill gaps.\n" ] ], [ [ "*__Requires manipulation in GIS__*\n\n1. Edit the trans_presort_temp feature class. __Move and rotate__ groups of transects to fill in gaps that are greater than 50 m alongshore. There is no need to preserve the original transects, but avoid overlapping the transects with each other and with the originals. Do not move any transects slightly. If they are moved, they will not be deleted in the next stage. If you slightly move any, you can either undo or delete that line entirely.", "_____no_output_____" ] ], [ [ "fwa.RemoveDuplicates(trans_presort, trans_extended, barrierBoundary)", "_____no_output_____" ] ], [ [ "### 2. Sort the transects along the shore\nUsually if the shoreline curves, we need to identify different groups of transects for sorting. This is because the GIS will not correctly establish the alongshore order by simple ordering from the identified sort_corner. If this is the case, answer __yes__ to the next prompt.", "_____no_output_____" ] ], [ [ "sort_lines = fwa.SortTransectPrep(spatialref=utmSR)", "Do we need to sort the transects in batches to preserve the order? (y/n) y\nMANUALLY: Add features to sort_lines. Indicate the order of use in 'sort' and the sort corner in 'sort_corn'.\n" ] ], [ [ "*__Requires manipulation in GIS__*\n\nThe last step generated an empty sort lines feature class if you indicated that transects need to be sorted in batches to preserve the order. Now, the user creates lines that will be used to spatially sort transects in groups. \n\nFor each group of transects:\n\n1. __Create a new line__ in 'sort_lines' that intersects all transects in the group. The transects intersected by the line will be sorted independently before being appended to the preceding groups. (*__add example figure__*)\n2. __Assign values__ for the fields 'sort,' 'sort_corner,' and 'reverse.' 
'sort' indicates the order in which the line should be used and 'sort_corn' indicates the corner from which to perform the spatial sort ('LL', 'UL', etc.). 'reverse' indicates whether the order should be reversed (roughly equivalent to 'DESCENDING').\n3. Run the following code to create a new sorted transect file.\n", "_____no_output_____" ] ], [ [ "fwa.SortTransectsFromSortLines(trans_presort, extendedTrans, sort_lines, tID_fld)", "Creating new feature class \\\\Mac\\stor\\Projects\\TransectExtraction\\Fisherman2014\\Fisherman2014.gdb\\extTrans to hold sorted transects...\nSorting sort lines by field sort...\nFor each line, creating subset of transects and adding them in order to the new FC...\nCopying the generated OID values to the transect ID field (sort_ID)...\n" ] ], [ [ "### 3. Tidy the extended (and sorted) transects to remove overlap\n\n*__Requires manipulation in GIS__*\n\nOverlapping transects cause problems during conversion to 5-m points and to rasters. We create a separate feature class with the 'tidied' transects, in which the lines don't overlap. This is largely a manually process with the following steps: \n\n1. __Select__ transects to be used to split other transects. Prioritize transects that a) were originally from NASC, b) have dune points within 25 m, and c) are oriented perpendicular to shore. (*__add example figure__*)\n2. Use the __Copy Features__ geoprocessing tool to copy only the selected transects into a new feature class. If desired, here is the code that could be used to copy the selected features and clear the selection:\n ```python\n arcpy.CopyFeatures_management(extendedTrans, overlapTrans_lines)\n arcpy.SelectLayerByAttribute_management(extendedTrans, \"CLEAR_SELECTION\")\n ```\n3. Run the code below to split the transects at the selected lines of overlap.", "_____no_output_____" ] ], [ [ "overlapTrans_lines = os.path.join(arcpy.env.scratchGDB, 'overlapTrans_lines_temp')\nif not arcpy.Exists(overlapTrans_lines):\n overlapTrans_lines = input(\"Filename of the feature class of only 'boundary' transects: \")\ntrans_x = arcpy.Intersect_analysis([extendedTrans, overlapTrans_lines], \n os.path.join(arcpy.env.scratchGDB, 'overlap_points_temp'),\n 'ALL', output_type=\"POINT\")\narcpy.SplitLineAtPoint_management(extendedTrans, trans_x, extTrans_tidy)", "_____no_output_____" ] ], [ [ "Delete the extraneous segments __manually__. Recommended:\n\n1. Using __Select with Line__ draw a line to the appropriate side of the boundary transects. This will select the line segments that need to be deleted. \n2. __Delete__ the selected lines.\n3. Remove any remaining overlaps entirely by hand. Use the __Split Line__ tool in the Editing toolbar to split lines to be shortened at the points of overlap. Then delete the remnant sections. ", "_____no_output_____" ], [ "# Run the extractor!", "_____no_output_____" ] ] ]
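The fill-value handling discussed at the top of this notebook (replacing -99999 with Null, or deleting those rows outright) can also be illustrated outside of arcpy. The sketch below uses pandas on a made-up table, with invented field names, just to show the two options side by side.

```python
# Illustration (not arcpy) of the two fill-value strategies described above:
# replace -99999 with NaN, or drop the affected rows entirely. Field names are invented.
import numpy as np
import pandas as pd

fill = -99999

dune_pts = pd.DataFrame({
    "dhigh_z": [2.8, fill, 3.1, 2.5],
    "slope":   [0.04, 0.05, fill, 0.03],
})

# Option 1: keep the rows but treat fill values as missing (Null/NaN)
dune_pts_null = dune_pts.replace(fill, np.nan)

# Option 2: mirror DeleteFeaturesByValue and drop rows with a fill elevation
dune_pts_kept = dune_pts_null.dropna(subset=["dhigh_z"])
print(dune_pts_kept)
```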
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
e7ff09a12bcc69c80a8edb7d2a43f583cc9694ec
9,573
ipynb
Jupyter Notebook
notebooks/Python-in-2-days/D1_L2_IPython/06-IPython-And-Shell-Commands.ipynb
mohr110/ml_training
562df9c9c0d939d190c503753d75dcbac89c8340
[ "MIT" ]
1
2020-11-30T01:43:57.000Z
2020-11-30T01:43:57.000Z
notebooks/Python-in-2-days/D1_L2_IPython/06-IPython-And-Shell-Commands.ipynb
mohr110/ml_training
562df9c9c0d939d190c503753d75dcbac89c8340
[ "MIT" ]
5
2020-01-28T23:05:25.000Z
2022-02-10T00:22:11.000Z
notebooks/Python-in-2-days/D1_L2_IPython/06-IPython-And-Shell-Commands.ipynb
mohr110/ml_training
562df9c9c0d939d190c503753d75dcbac89c8340
[ "MIT" ]
6
2021-01-07T01:07:27.000Z
2021-03-28T18:14:29.000Z
42.736607
352
0.612138
[ [ [ "# IPython and Shell Commands", "_____no_output_____" ], [ "When working interactively with the standard Python interpreter, one of the frustrations is the need to switch between multiple windows to access Python tools and system command-line tools.\nIPython bridges this gap, and gives you a syntax for executing shell commands directly from within the IPython terminal.\nThe magic happens with the exclamation point: anything appearing after ``!`` on a line will be executed not by the Python kernel, but by the system command-line.\n\nThe following assumes you're on a Unix-like system, such as Linux or Mac OSX.\nSome of the examples that follow will fail on Windows, which uses a different type of shell by default (though with the 2016 announcement of native Bash shells on Windows, soon this may no longer be an issue!).\nIf you're unfamiliar with shell commands, I'd suggest reviewing the [Shell Tutorial](http://swcarpentry.github.io/shell-novice/) put together by the always excellent Software Carpentry Foundation.", "_____no_output_____" ], [ "## Quick Introduction to the Shell\n\nA full intro to using the shell/terminal/command-line is well beyond the scope of this chapter, but for the uninitiated we will offer a quick introduction here.\nThe shell is a way to interact textually with your computer.\nEver since the mid 1980s, when Microsoft and Apple introduced the first versions of their now ubiquitous graphical operating systems, most computer users have interacted with their operating system through familiar clicking of menus and drag-and-drop movements.\nBut operating systems existed long before these graphical user interfaces, and were primarily controlled through sequences of text input: at the prompt, the user would type a command, and the computer would do what the user told it to.\nThose early prompt systems are the precursors of the shells and terminals that most active data scientists still use today.\n\nSomeone unfamiliar with the shell might ask why you would bother with this, when many results can be accomplished by simply clicking on icons and menus.\nA shell user might reply with another question: why hunt icons and click menus when you can accomplish things much more easily by typing?\nWhile it might sound like a typical tech preference impasse, when moving beyond basic tasks it quickly becomes clear that the shell offers much more control of advanced tasks, though admittedly the learning curve can intimidate the average computer user.\n\nAs an example, here is a sample of a Linux/OSX shell session where a user explores, creates, and modifies directories and files on their system (``osx:~ $`` is the prompt, and everything after the ``$`` sign is the typed command; text that is preceded by a ``#`` is meant just as description, rather than something you would actually type in):\n\n```bash\nosx:~ $ echo \"hello world\" # echo is like Python's print function\nhello world\n\nosx:~ $ pwd # pwd = print working directory\n/home/jake # this is the \"path\" that we're sitting in\n\nosx:~ $ ls # ls = list working directory contents\nnotebooks projects \n\nosx:~ $ cd projects/ # cd = change directory\n\nosx:projects $ pwd\n/home/jake/projects\n\nosx:projects $ ls\ndatasci_book mpld3 myproject.txt\n\nosx:projects $ mkdir myproject # mkdir = make new directory\n\nosx:projects $ cd myproject/\n\nosx:myproject $ mv ../myproject.txt ./ # mv = move file. 
Here we're moving the\n # file myproject.txt from one directory\n # up (../) to the current directory (./)\nosx:myproject $ ls\nmyproject.txt\n```\n\nNotice that all of this is just a compact way to do familiar operations (navigating a directory structure, creating a directory, moving a file, etc.) by typing commands rather than clicking icons and menus.\nNote that with just a few commands (``pwd``, ``ls``, ``cd``, ``mkdir``, and ``cp``) you can do many of the most common file operations.\nIt's when you go beyond these basics that the shell approach becomes really powerful.", "_____no_output_____" ], [ "## Shell Commands in IPython\n\nAny command that works at the command-line can be used in IPython by prefixing it with the ``!`` character.\nFor example, the ``ls``, ``pwd``, and ``echo`` commands can be run as follows:\n\n```ipython\nIn [1]: !ls\nmyproject.txt\n\nIn [2]: !pwd\n/home/jake/projects/myproject\n\nIn [3]: !echo \"printing from the shell\"\nprinting from the shell\n```", "_____no_output_____" ], [ "## Passing Values to and from the Shell\n\nShell commands can not only be called from IPython, but can also be made to interact with the IPython namespace.\nFor example, you can save the output of any shell command to a Python list using the assignment operator:\n\n```ipython\nIn [4]: contents = !ls\n\nIn [5]: print(contents)\n['myproject.txt']\n\nIn [6]: directory = !pwd\n\nIn [7]: print(directory)\n['/Users/jakevdp/notebooks/tmp/myproject']\n```\n\nNote that these results are not returned as lists, but as a special shell return type defined in IPython:\n\n```ipython\nIn [8]: type(directory)\nIPython.utils.text.SList\n```\n\nThis looks and acts a lot like a Python list, but has additional functionality, such as\nthe ``grep`` and ``fields`` methods and the ``s``, ``n``, and ``p`` properties that allow you to search, filter, and display the results in convenient ways.\nFor more information on these, you can use IPython's built-in help features.", "_____no_output_____" ], [ "Communication in the other direction–passing Python variables into the shell–is possible using the ``{varname}`` syntax:\n\n```ipython\nIn [9]: message = \"hello from Python\"\n\nIn [10]: !echo {message}\nhello from Python\n```\n\nThe curly braces contain the variable name, which is replaced by the variable's contents in the shell command.", "_____no_output_____" ], [ "# Shell-Related Magic Commands\n\nIf you play with IPython's shell commands for a while, you might notice that you cannot use ``!cd`` to navigate the filesystem:\n\n```ipython\nIn [11]: !pwd\n/home/jake/projects/myproject\n\nIn [12]: !cd ..\n\nIn [13]: !pwd\n/home/jake/projects/myproject\n```\n\nThe reason is that shell commands in the notebook are executed in a temporary subshell.\nIf you'd like to change the working directory in a more enduring way, you can use the ``%cd`` magic command:\n\n```ipython\nIn [14]: %cd ..\n/home/jake/projects\n```\n\nIn fact, by default you can even use this without the ``%`` sign:\n\n```ipython\nIn [15]: cd myproject\n/home/jake/projects/myproject\n```\n\nThis is known as an ``automagic`` function, and this behavior can be toggled with the ``%automagic`` magic function.\n\nBesides ``%cd``, other available shell-like magic functions are ``%cat``, ``%cp``, ``%env``, ``%ls``, ``%man``, ``%mkdir``, ``%more``, ``%mv``, ``%pwd``, ``%rm``, and ``%rmdir``, any of which can be used without the ``%`` sign if ``automagic`` is on.\nThis makes it so that you can almost treat the IPython prompt as if it's a normal 
shell:\n\n```ipython\nIn [16]: mkdir tmp\n\nIn [17]: ls\nmyproject.txt tmp/\n\nIn [18]: cp myproject.txt tmp/\n\nIn [19]: ls tmp\nmyproject.txt\n\nIn [20]: rm -r tmp\n```\n\nThis access to the shell from within the same terminal window as your Python session means that there is a lot less switching back and forth between interpreter and shell as you write your Python code.", "_____no_output_____" ] ] ]
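To make the SList behaviour mentioned earlier concrete, here is a short illustrative session in the same style as the examples above. The file names and outputs are invented, but `grep`, `fields`, and the `s`/`n` properties are the helpers referred to in the text.

```ipython
In [21]: contents = !ls
In [22]: contents                  # an IPython SList, not a plain list
Out[22]: ['myproject.txt', 'notes.txt', 'tmp']

In [23]: contents.grep('txt')      # keep only entries matching a pattern
Out[23]: ['myproject.txt', 'notes.txt']

In [24]: contents.s                # entries joined by spaces
Out[24]: 'myproject.txt notes.txt tmp'

In [25]: contents.n                # entries joined by newlines
Out[25]: 'myproject.txt\nnotes.txt\ntmp'

In [26]: directory = !pwd
In [27]: directory.fields(0)       # whitespace-split each line, take column 0
Out[27]: ['/home/jake/projects/myproject']
```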
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
e7ff1db6e0a006934ea92f7f8b8e8ce2377e9f24
24,124
ipynb
Jupyter Notebook
session/sentiment/fnet-base.ipynb
AetherPrior/malaya
45d37b171dff9e92c5d30bd7260b282cd0912a7d
[ "MIT" ]
88
2021-01-06T10:01:31.000Z
2022-03-30T17:34:09.000Z
session/sentiment/fnet-base.ipynb
AetherPrior/malaya
45d37b171dff9e92c5d30bd7260b282cd0912a7d
[ "MIT" ]
43
2021-01-14T02:44:41.000Z
2022-03-31T19:47:42.000Z
session/sentiment/fnet-base.ipynb
AetherPrior/malaya
45d37b171dff9e92c5d30bd7260b282cd0912a7d
[ "MIT" ]
38
2021-01-06T07:15:03.000Z
2022-03-19T05:07:50.000Z
39.808581
1,599
0.556334
[ [ [ "import os\nos.environ['CUDA_VISIBLE_DEVICES'] = '1'", "_____no_output_____" ], [ "import model as modeling\nimport tensorflow as tf\nimport tokenization\nimport optimization", "WARNING:tensorflow:From /home/husein/Desktop/fnet/optimization.py:71: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n\n" ], [ "tokenizer = tokenization.FullTokenizer(\n vocab_file = 'BERT.wordpiece', do_lower_case = False\n)", "WARNING:tensorflow:From /home/husein/Desktop/fnet/tokenization.py:135: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.\n\n" ], [ "from rules import normalized_chars\nimport random\n\nlaughing = {\n 'huhu',\n 'haha',\n 'gagaga',\n 'hihi',\n 'wkawka',\n 'wkwk',\n 'kiki',\n 'keke',\n 'huehue',\n 'hshs',\n 'hoho',\n 'hewhew',\n 'uwu',\n 'sksk',\n 'ksks',\n 'gituu',\n 'gitu',\n 'mmeeooww',\n 'meow',\n 'alhamdulillah',\n 'muah',\n 'mmuahh',\n 'hehe',\n 'salamramadhan',\n 'happywomensday',\n 'jahagaha',\n 'ahakss',\n 'ahksk'\n}\n\ndef make_cleaning(s, c_dict):\n s = s.translate(c_dict)\n return s\n\ndef cleaning(string):\n \"\"\"\n use by any transformer model before tokenization\n \"\"\"\n string = unidecode(string)\n \n string = ' '.join(\n [make_cleaning(w, normalized_chars) for w in string.split()]\n )\n string = re.sub('\\(dot\\)', '.', string)\n string = (\n re.sub(re.findall(r'\\<a(.*?)\\>', string)[0], '', string)\n if (len(re.findall(r'\\<a (.*?)\\>', string)) > 0)\n and ('href' in re.findall(r'\\<a (.*?)\\>', string)[0])\n else string\n )\n string = re.sub(\n r'\\w+:\\/{2}[\\d\\w-]+(\\.[\\d\\w-]+)*(?:(?:\\/[^\\s/]*))*', ' ', string\n )\n \n chars = '.,/'\n for c in chars:\n string = string.replace(c, f' {c} ')\n \n string = re.sub(r'[ ]+', ' ', string).strip().split()\n string = [w for w in string if w[0] != '@']\n x = []\n for word in string:\n word = word.lower()\n if any([laugh in word for laugh in laughing]):\n if random.random() >= 0.5:\n x.append(word)\n else:\n x.append(word)\n string = [w.title() if w[0].isupper() else w for w in x]\n return ' '.join(string)", "_____no_output_____" ], [ "# !wget https://raw.githubusercontent.com/huseinzol05/malay-dataset/master/sentiment/news-sentiment/sentiment-data-v2.csv", "_____no_output_____" ], [ "import pandas as pd\nfrom sklearn.preprocessing import LabelEncoder\nfrom unidecode import unidecode\nimport re\n\ndf = pd.read_csv('sentiment-data-v2.csv')\nY = LabelEncoder().fit_transform(df.label)\n\ntexts = df.iloc[:,1].tolist()\nlabels = Y.tolist()\n\nassert len(labels) == len(texts)", "_____no_output_____" ], [ "import json\n\nwith open('/home/husein/sentiment/strong-positives.json') as fopen:\n positives = json.load(fopen)\n positives = random.sample(positives, 500000)\n \nlen(positives)", "_____no_output_____" ], [ "with open('/home/husein/sentiment/strong-negatives.json') as fopen:\n negatives = json.load(fopen)\n negatives = random.sample(negatives, 500000)\n \nlen(negatives)", "_____no_output_____" ], [ "texts += negatives\nlabels += [0] * len(negatives)\ntexts += positives\nlabels += [1] * len(positives)", "_____no_output_____" ], [ "from tqdm import tqdm\n\nfor i in tqdm(range(len(texts))):\n texts[i] = cleaning(texts[i])", "100%|██████████| 1003685/1003685 [02:02<00:00, 8201.50it/s]\n" ], [ "actual_t, actual_l = [], []\n\nfor i in tqdm(range(len(texts))):\n if len(texts[i]) > 2:\n actual_t.append(texts[i])\n actual_l.append(labels[i])", "100%|██████████| 1003685/1003685 [00:01<00:00, 775589.47it/s]\n" ], [ "from tqdm import tqdm\n\ninput_ids, input_masks 
= [], []\n\nfor text in tqdm(actual_t):\n tokens_a = tokenizer.tokenize(text)\n tokens = [\"[CLS]\"] + tokens_a + [\"[SEP]\"]\n input_id = tokenizer.convert_tokens_to_ids(tokens)\n input_mask = [1] * len(input_id)\n \n input_ids.append(input_id)\n input_masks.append(input_mask)", " 68%|██████▊ | 680174/1003685 [05:32<02:38, 2044.36it/s]\n" ], [ "epoch = 2\nbatch_size = 60\nwarmup_proportion = 0.1\nnum_train_steps = int(len(texts) / batch_size * epoch)\nnum_warmup_steps = int(num_train_steps * warmup_proportion)", "_____no_output_____" ], [ "def create_initializer(initializer_range=0.02):\n return tf.truncated_normal_initializer(stddev=initializer_range)\n\nclass Model:\n def __init__(\n self,\n dimension_output,\n learning_rate = 2e-5,\n training = True,\n ):\n self.X = tf.placeholder(tf.int32, [None, None])\n self.MASK = tf.placeholder(tf.int32, [None, None])\n self.Y = tf.placeholder(tf.int32, [None])\n \n model = modeling.Model(\n dim = 512, vocab_size = 32000, depth = 12, mlp_dim = 3072\n )\n sequence_output = model(\n self.X, training = training\n )\n # sequence_output *= tf.expand_dims(tf.cast(self.MASK, tf.float32), axis = -1)\n \n output_layer = sequence_output\n output_layer = tf.layers.dense(\n output_layer,\n model.hidden_size,\n activation=tf.tanh,\n kernel_initializer=create_initializer())\n self.logits_seq = tf.layers.dense(output_layer, dimension_output,\n kernel_initializer=create_initializer())\n self.logits_seq = tf.identity(self.logits_seq, name = 'logits_seq')\n self.logits = self.logits_seq[:, 0]\n self.logits = tf.identity(self.logits, name = 'logits')\n \n self.cost = tf.reduce_mean(\n tf.nn.sparse_softmax_cross_entropy_with_logits(\n logits = self.logits, labels = self.Y\n )\n )\n \n self.optimizer = optimization.create_optimizer(self.cost, learning_rate, \n num_train_steps, num_warmup_steps, False)\n correct_pred = tf.equal(\n tf.argmax(self.logits, 1, output_type = tf.int32), self.Y\n )\n self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "_____no_output_____" ], [ "INIT_CHKPNT = 'fnet-base/model.ckpt-500000'", "_____no_output_____" ], [ "dimension_output = 2\nlearning_rate = 2e-5\n\ntf.reset_default_graph()\nsess = tf.InteractiveSession()\nmodel = Model(\n dimension_output,\n learning_rate\n)\n\nsess.run(tf.global_variables_initializer())", "/home/husein/.local/lib/python3.6/site-packages/tensorflow_core/python/client/session.py:1750: UserWarning: An interactive session is already active. This can cause out-of-memory errors in some cases. You must explicitly call `InteractiveSession.close()` to release resources held by the other session(s).\n warnings.warn('An interactive session is already active. 
This can '\n" ], [ "import collections\nimport re\n\ndef get_assignment_map_from_checkpoint(tvars, init_checkpoint):\n \"\"\"Compute the union of the current variables and checkpoint variables.\"\"\"\n assignment_map = {}\n initialized_variable_names = {}\n\n name_to_variable = collections.OrderedDict()\n for var in tvars:\n name = var.name\n m = re.match('^(.*):\\\\d+$', name)\n if m is not None:\n name = m.group(1)\n name_to_variable[name] = var\n\n init_vars = tf.train.list_variables(init_checkpoint)\n\n assignment_map = collections.OrderedDict()\n for x in init_vars:\n (name, var) = (x[0], x[1])\n if name not in name_to_variable:\n continue\n assignment_map[name] = name_to_variable[name]\n initialized_variable_names[name] = 1\n initialized_variable_names[name + ':0'] = 1\n\n return (assignment_map, initialized_variable_names)", "_____no_output_____" ], [ "tvars = tf.trainable_variables()\nassignment_map, initialized_variable_names = get_assignment_map_from_checkpoint(tvars, \n INIT_CHKPNT)", "_____no_output_____" ], [ "saver = tf.train.Saver(var_list = assignment_map)\nsaver.restore(sess, INIT_CHKPNT)", "INFO:tensorflow:Restoring parameters from fnet-base/model.ckpt-500000\n" ], [ "from sklearn.model_selection import train_test_split\n\ntrain_input_ids, test_input_ids, train_Y, test_Y, train_mask, test_mask = train_test_split(\n input_ids, actual_l[:len(input_ids)], input_masks, test_size = 0.2\n)", "_____no_output_____" ], [ "pad_sequences = tf.keras.preprocessing.sequence.pad_sequences", "_____no_output_____" ], [ "from tqdm import tqdm\nimport time\n\nfor EPOCH in range(epoch):\n\n train_acc, train_loss, test_acc, test_loss = [], [], [], []\n pbar = tqdm(\n range(0, len(train_input_ids), batch_size), desc = 'train minibatch loop'\n )\n for i in pbar:\n index = min(i + batch_size, len(train_input_ids))\n batch_x = train_input_ids[i: index]\n batch_x = pad_sequences(batch_x, padding='post')\n batch_mask = train_mask[i: index]\n batch_mask = pad_sequences(batch_mask, padding='post')\n batch_y = train_Y[i: index]\n acc, cost, _ = sess.run(\n [model.accuracy, model.cost, model.optimizer],\n feed_dict = {\n model.Y: batch_y,\n model.X: batch_x,\n model.MASK: batch_mask\n },\n )\n train_loss.append(cost)\n train_acc.append(acc)\n pbar.set_postfix(cost = cost, accuracy = acc)\n \n pbar = tqdm(range(0, len(test_input_ids), batch_size), desc = 'test minibatch loop')\n for i in pbar:\n index = min(i + batch_size, len(test_input_ids))\n batch_x = test_input_ids[i: index]\n batch_x = pad_sequences(batch_x, padding='post')\n batch_mask = test_mask[i: index]\n batch_mask = pad_sequences(batch_mask, padding='post')\n batch_y = test_Y[i: index]\n acc, cost = sess.run(\n [model.accuracy, model.cost],\n feed_dict = {\n model.Y: batch_y,\n model.X: batch_x,\n model.MASK: batch_mask\n },\n )\n test_loss.append(cost)\n test_acc.append(acc)\n pbar.set_postfix(cost = cost, accuracy = acc)\n \n train_loss = np.mean(train_loss)\n train_acc = np.mean(train_acc)\n test_loss = np.mean(test_loss)\n test_acc = np.mean(test_acc)\n \n print(\n 'epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\\n'\n % (EPOCH, train_loss, train_acc, test_loss, test_acc)\n )", "train minibatch loop: 0%| | 1/9069 [00:23<60:07:10, 23.87s/it, accuracy=0.367, cost=0.711]" ] ] ]
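Once training finishes, the same graph can be reused for inference. The sketch below assumes `sess`, `model`, `tokenizer`, `cleaning`, and `pad_sequences` from the cells above are still in memory; the input text is a placeholder, and the 0/1 interpretation follows the negative/positive labels assigned to the appended samples earlier.

```python
# Rough inference sketch reusing the trained graph and session from above.
# Assumes sess, model, tokenizer, cleaning and pad_sequences are still defined.
import numpy as np

def predict_sentiment(texts):
    batch_x, batch_mask = [], []
    for text in texts:
        tokens = ["[CLS]"] + tokenizer.tokenize(cleaning(text)) + ["[SEP]"]
        ids = tokenizer.convert_tokens_to_ids(tokens)
        batch_x.append(ids)
        batch_mask.append([1] * len(ids))
    batch_x = pad_sequences(batch_x, padding="post")
    batch_mask = pad_sequences(batch_mask, padding="post")
    logits = sess.run(model.logits,
                      feed_dict={model.X: batch_x, model.MASK: batch_mask})
    # index 1 corresponds to the strong-positive samples appended above
    return np.argmax(logits, axis=1)

print(predict_sentiment(["hypothetical example sentence to score"]))
```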
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7ff2a8a6f55a78e65610e798ff630ddcb362c35
117,266
ipynb
Jupyter Notebook
stock-time-series-using-gru.ipynb
23subbhashit/stock-price-prdictions-IBM-using-GRU
3f208d57ea9202e48af76c2d914c9dc97b7e38d1
[ "MIT" ]
null
null
null
stock-time-series-using-gru.ipynb
23subbhashit/stock-price-prdictions-IBM-using-GRU
3f208d57ea9202e48af76c2d914c9dc97b7e38d1
[ "MIT" ]
null
null
null
stock-time-series-using-gru.ipynb
23subbhashit/stock-price-prdictions-IBM-using-GRU
3f208d57ea9202e48af76c2d914c9dc97b7e38d1
[ "MIT" ]
null
null
null
117,266
117,266
0.931694
[ [ [ "# This Python 3 environment comes with many helpful analytics libraries installed\n# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python\n# For example, here's several helpful packages to load\n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\n\n# Input data files are available in the read-only \"../input/\" directory\n# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory\n\nimport os\nfor dirname, _, filenames in os.walk('/kaggle/input'):\n for filename in filenames:\n print(os.path.join(dirname, filename))\n\n# You can write up to 5GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using \"Save & Run All\" \n# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session", "/kaggle/input/stock-time-series-20050101-to-20171231/GS_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/UTX_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/HD_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/VZ_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/XOM_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/GOOGL_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/AAPL_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/MMM_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/MSFT_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/AABA_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/TRV_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/INTC_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/CAT_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/WMT_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/CSCO_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/UNH_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/CVX_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/IBM_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/MCD_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/PG_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/GE_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/PFE_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/AXP_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/JNJ_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/JPM_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/NKE_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/AMZN_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/all_stocks_2017-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/KO_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/DIS_2006-01-01_to_20
18-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/all_stocks_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/BA_2006-01-01_to_2018-01-01.csv\n/kaggle/input/stock-time-series-20050101-to-20171231/MRK_2006-01-01_to_2018-01-01.csv\n" ], [ "import numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('fivethirtyeight')\nimport pandas as pd\nfrom sklearn.preprocessing import MinMaxScaler\nfrom keras.models import Sequential\nfrom keras.layers import Dense, LSTM, Dropout, GRU, Bidirectional\nimport math\nfrom sklearn.metrics import mean_squared_error", "_____no_output_____" ], [ "data = pd.read_csv('/kaggle/input/stock-time-series-20050101-to-20171231/IBM_2006-01-01_to_2018-01-01.csv', index_col='Date', parse_dates=['Date'])\ndata.head()", "_____no_output_____" ], [ "training_set = dataset[:'2016'].iloc[:,1:2].values\ntest_set = dataset['2017':].iloc[:,1:2].values\ntraining_set", "_____no_output_____" ], [ "dataset[\"High\"][:'2016'].plot(figsize=(16,4),legend=True)\ndataset[\"High\"]['2017':].plot(figsize=(16,4),legend=True)\nplt.legend(['Training set (Before 2017)','Test set (2017 and beyond)'])\nplt.title('IBM stock price')\nplt.show()", "_____no_output_____" ], [ "sc = MinMaxScaler(feature_range=(0,1))\ntraining_set_scaled = sc.fit_transform(training_set)", "_____no_output_____" ], [ "X_train = []\ny_train = []\nfor i in range(60,2769):\n X_train.append(training_set_scaled[i-60:i,0])\n y_train.append(training_set_scaled[i,0])\nX_train, y_train = np.array(X_train), np.array(y_train)", "_____no_output_____" ], [ "X_train = np.reshape(X_train, (X_train.shape[0],X_train.shape[1],1))\nX_train.shape", "_____no_output_____" ], [ "regressorGRU = Sequential()\nregressorGRU.add(GRU(50, return_sequences=True, input_shape=(X_train.shape[1],1)))\nregressorGRU.add(GRU(50, return_sequences=True))\nregressorGRU.add(GRU(50, return_sequences=True))\nregressorGRU.add(GRU(50))\nregressorGRU.add(Dropout(0.2))\nregressorGRU.add(Dense(units=1))\nregressorGRU.compile(optimizer='rmsprop',loss='mean_squared_error',metrics=['accuracy'])\nregressorGRU.fit(X_train,y_train,epochs=20,batch_size=32,validation_split=0.33)", "Train on 1815 samples, validate on 894 samples\nEpoch 1/20\n1815/1815 [==============================] - 11s 6ms/step - loss: 0.0139 - accuracy: 0.0011 - val_loss: 0.0042 - val_accuracy: 0.0000e+00\nEpoch 2/20\n1815/1815 [==============================] - 9s 5ms/step - loss: 0.0067 - accuracy: 0.0011 - val_loss: 5.6301e-04 - val_accuracy: 0.0000e+00\nEpoch 3/20\n1815/1815 [==============================] - 9s 5ms/step - loss: 0.0047 - accuracy: 0.0011 - val_loss: 0.0016 - val_accuracy: 0.0000e+00\nEpoch 4/20\n1815/1815 [==============================] - 9s 5ms/step - loss: 0.0044 - accuracy: 0.0011 - val_loss: 9.7905e-04 - val_accuracy: 0.0000e+00\nEpoch 5/20\n1815/1815 [==============================] - 8s 5ms/step - loss: 0.0045 - accuracy: 0.0011 - val_loss: 0.0027 - val_accuracy: 0.0000e+00\nEpoch 6/20\n1815/1815 [==============================] - 8s 5ms/step - loss: 0.0040 - accuracy: 0.0011 - val_loss: 5.9965e-04 - val_accuracy: 0.0000e+00\nEpoch 7/20\n1815/1815 [==============================] - 8s 5ms/step - loss: 0.0038 - accuracy: 0.0011 - val_loss: 0.0047 - val_accuracy: 0.0000e+00\nEpoch 8/20\n1815/1815 [==============================] - 9s 5ms/step - loss: 0.0034 - accuracy: 0.0011 - val_loss: 0.0077 - val_accuracy: 0.0000e+00\nEpoch 9/20\n1815/1815 [==============================] - 8s 5ms/step - loss: 0.0031 - accuracy: 0.0011 - 
val_loss: 0.0182 - val_accuracy: 0.0000e+00\nEpoch 10/20\n1815/1815 [==============================] - 9s 5ms/step - loss: 0.0034 - accuracy: 0.0011 - val_loss: 0.0054 - val_accuracy: 0.0000e+00\nEpoch 11/20\n1815/1815 [==============================] - 9s 5ms/step - loss: 0.0030 - accuracy: 0.0011 - val_loss: 0.0021 - val_accuracy: 0.0000e+00\nEpoch 12/20\n1815/1815 [==============================] - 9s 5ms/step - loss: 0.0026 - accuracy: 0.0011 - val_loss: 2.6391e-04 - val_accuracy: 0.0000e+00\nEpoch 13/20\n1815/1815 [==============================] - 8s 5ms/step - loss: 0.0031 - accuracy: 0.0011 - val_loss: 0.0016 - val_accuracy: 0.0000e+00\nEpoch 14/20\n1815/1815 [==============================] - 8s 5ms/step - loss: 0.0027 - accuracy: 0.0011 - val_loss: 6.7184e-04 - val_accuracy: 0.0000e+00\nEpoch 15/20\n1815/1815 [==============================] - 8s 5ms/step - loss: 0.0024 - accuracy: 0.0011 - val_loss: 0.0069 - val_accuracy: 0.0000e+00\nEpoch 16/20\n1815/1815 [==============================] - 8s 5ms/step - loss: 0.0022 - accuracy: 0.0011 - val_loss: 0.0014 - val_accuracy: 0.0000e+00\nEpoch 17/20\n1815/1815 [==============================] - 9s 5ms/step - loss: 0.0022 - accuracy: 0.0011 - val_loss: 0.0056 - val_accuracy: 0.0000e+00\nEpoch 18/20\n1815/1815 [==============================] - 9s 5ms/step - loss: 0.0023 - accuracy: 0.0011 - val_loss: 0.0099 - val_accuracy: 0.0000e+00\nEpoch 19/20\n1815/1815 [==============================] - 8s 5ms/step - loss: 0.0021 - accuracy: 0.0011 - val_loss: 6.7247e-04 - val_accuracy: 0.0000e+00\nEpoch 20/20\n1815/1815 [==============================] - 8s 5ms/step - loss: 0.0024 - accuracy: 0.0011 - val_loss: 1.8588e-04 - val_accuracy: 0.0000e+00\n" ], [ "dataset_total = pd.concat((dataset[\"High\"][:'2016'],dataset[\"High\"]['2017':]),axis=0)\ninputs = dataset_total[len(dataset_total)-len(test_set) - 60:].values\ninputs = inputs.reshape(-1,1)\ninputs = sc.transform(inputs)\ninputs.shape", "_____no_output_____" ], [ "X_test = []\nfor i in range(60,311):\n X_test.append(inputs[i-60:i,0])\nX_test = np.array(X_test)\nX_test = np.reshape(X_test, (X_test.shape[0],X_test.shape[1],1))\nGRU_predicted_stock_price = regressorGRU.predict(X_test)\nGRU_predicted_stock_price = sc.inverse_transform(GRU_predicted_stock_price)", "_____no_output_____" ], [ "plt.plot(test_set, color='red',label='Real IBM Stock Price')\nplt.plot(GRU_predicted_stock_price, color='blue',label='Predicted IBM Stock Price')\nplt.title('IBM Stock Price Prediction(GRU)')\nplt.xlabel('Time')\nplt.ylabel('IBM Stock Price')\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "rmse = math.sqrt(mean_squared_error(test_set, GRU_predicted_stock_price))\nprint(\"The root mean squared error is {}.\".format(rmse))", "The root mean squared error is 1.6706269361835897.\n" ] ] ]
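The 60-step window construction used for both the training and test inputs above can be wrapped in a small helper. The sketch below is standalone (it runs on synthetic data rather than the notebook's variables, so it does not depend on whether the frame is called `data` or `dataset`), but the reshaping matches what the GRU expects: (samples, timesteps, 1).

```python
# Standalone sketch of the sliding-window construction used above,
# demonstrated on synthetic data instead of the IBM series.
import numpy as np

def make_windows(series, lookback=60):
    """Turn a scaled (n, 1) series into (samples, lookback, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(lookback, len(series)):
        X.append(series[i - lookback:i, 0])
        y.append(series[i, 0])
    X = np.array(X).reshape(-1, lookback, 1)
    return X, np.array(y)

scaled = np.random.rand(300, 1)          # stand-in for training_set_scaled
X_demo, y_demo = make_windows(scaled, lookback=60)
print(X_demo.shape, y_demo.shape)        # (240, 60, 1) (240,)
```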
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7ff364d359ca40c19cc6d21d2a58bc723c3cda2
18,178
ipynb
Jupyter Notebook
Airplane_Accident - HackerEarth/8. HackerEarth ML - BaggingClassifier.ipynb
phileinSophos/ML-DL_Problems
033fc8a0086883fbe6748f2bf4725de7e8376e4b
[ "MIT" ]
null
null
null
Airplane_Accident - HackerEarth/8. HackerEarth ML - BaggingClassifier.ipynb
phileinSophos/ML-DL_Problems
033fc8a0086883fbe6748f2bf4725de7e8376e4b
[ "MIT" ]
null
null
null
Airplane_Accident - HackerEarth/8. HackerEarth ML - BaggingClassifier.ipynb
phileinSophos/ML-DL_Problems
033fc8a0086883fbe6748f2bf4725de7e8376e4b
[ "MIT" ]
null
null
null
27.837672
114
0.400209
[ [ [ "## Importing all the required libraries", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.ensemble import BaggingClassifier", "_____no_output_____" ] ], [ [ "### Reading the training dataset to Pandas DataFrame", "_____no_output_____" ] ], [ [ "data = pd.read_csv('train.csv')\ndata.head()", "_____no_output_____" ] ], [ [ "### Getting the target variables to Y variable", "_____no_output_____" ] ], [ [ "Y = data['Severity']\nY.shape", "_____no_output_____" ] ], [ [ "### Dropoing the irrelevent columns from training data", "_____no_output_____" ] ], [ [ "data = data.drop(columns=['Severity','Accident_ID','Accident_Type_Code','Adverse_Weather_Metric'],axis=1)\ndata.head()", "_____no_output_____" ] ], [ [ "### creating the Label Encoder object which will encode the target severities to numerical form", "_____no_output_____" ] ], [ [ "label_encode = LabelEncoder()\ny = label_encode.fit_transform(Y)", "_____no_output_____" ], [ "x_train,x_test,y_train,y_test = train_test_split(data,y,test_size=0.3)", "_____no_output_____" ], [ "bag = BaggingClassifier(n_estimators=100,)", "_____no_output_____" ], [ "bag.fit(data,y)", "_____no_output_____" ], [ "predictions = bag.predict(x_test)", "_____no_output_____" ], [ "accuracy_score(y_test,predictions)", "_____no_output_____" ], [ "test_data = pd.read_csv('test.csv')\naccident_id = test_data['Accident_ID']", "_____no_output_____" ], [ "print(test_data.shape)\ntest_data = test_data.drop(columns=['Accident_ID','Accident_Type_Code','Adverse_Weather_Metric'],axis=1)", "(2500, 11)\n" ], [ "test_data.shape", "_____no_output_____" ], [ "predictions = bag.predict(test_data)", "_____no_output_____" ], [ "predictions = label_encode.inverse_transform(predictions)", "_____no_output_____" ], [ "result_df = pd.DataFrame({'Accident_ID':accident_id,'Severity':predictions})\nresult_df.head()", "_____no_output_____" ], [ "result_df.to_csv('Prediction.csv',index=False)", "_____no_output_____" ] ], [ [ "## Accuracy - 86.22", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
e7ff37c70fa86d74765a69d07c26087cf2ce623a
14,299
ipynb
Jupyter Notebook
_notebooks/2021-07-02-ch1-data-model.ipynb
jjmachan/fluent-python
5e1b43902f215ce54b9bda54a4de7555fe13bf6f
[ "Apache-2.0" ]
3
2021-07-06T05:53:40.000Z
2021-07-07T05:41:07.000Z
_notebooks/2021-07-02-ch1-data-model.ipynb
jjmachan/fluent-python
5e1b43902f215ce54b9bda54a4de7555fe13bf6f
[ "Apache-2.0" ]
null
null
null
_notebooks/2021-07-02-ch1-data-model.ipynb
jjmachan/fluent-python
5e1b43902f215ce54b9bda54a4de7555fe13bf6f
[ "Apache-2.0" ]
null
null
null
26.677239
442
0.540318
[ [ [ "# \"Chapter 1: Data Model\"\n\n> Introduction about what \"Pythonic\" means.\n\n- toc:true\n- badges: true\n- author: JJmachan", "_____no_output_____" ], [ "## Pythonic Card Deck\n\nTo undertant how python works as a framework it is crutial that you get the Python Data Model. Python is very consistent and by that I mean that once you have some experince with the language you can start to correctly make informed guesses on other features about python even if its new. This will help you make your objects more pythonic by leveraging the options python has for:\n1. Iteration\n2. Collections\n3. Attribute access\n4. Operator overloading\n5. Function and method invocation\n6. Object creation and destruction\n7. String representation and formatting\n8. Managed contexts (i.e., with blocks)\n\nStuding these will give you the power to make your own python object play nicely with the python language and use many of the freatures mentioned above. In short makes you code \"pythonic\".\n\nLet see an example to show you the power of `__getitem__` and `__len__`.", "_____no_output_____" ] ], [ [ "import collections\n\n# namedtuple - tuples with names for each value in it (much like a class)\nCard = collections.namedtuple('Card', ['rank', 'suit'])\nc = Card('7', 'diamonds')\n\n# individual card object\nprint(c)\nprint(c.rank, c.suit)", "Card(rank='7', suit='diamonds')\n7 diamonds\n" ], [ "# class to represent the deck of cards\nclass FrenchDeck:\n ranks = [str(n) for n in range(2, 11)] + list('JQKA')\n suits = 'spades diamonds clubs hearts'.split()\n \n def __init__(self):\n self._cards = [Card(rank, suit) for suit in self.suits\n for rank in self.ranks]\n \n def __len__(self):\n return len(self._cards)\n \n def __getitem__(self, position):\n return self._cards[position]", "_____no_output_____" ], [ "deck = FrenchDeck()\n\n# with this simple class, we can already use `len` and `__getitem__`\nlen(deck), deck[0]", "_____no_output_____" ] ], [ [ "Now we have created a class FrenchDeck that is short but still packs a punch. All the basic operations are supported. Now imagine we have another usecase to pick a random card. Normally we would add another function but in this case we can use pythons existing lib function `random.choice()`.", "_____no_output_____" ] ], [ [ "from random import choice\n\nchoice(deck)", "_____no_output_____" ] ], [ [ "> We’ve just seen two advantages of using special methods to leverage the Python data\nmodel:\n> 1. The users of your classes don’t have to memorize arbitrary method names for stan‐\ndard operations (“How to get the number of items? Is it .size() , .length() , or\nwhat?”).\n> 2. It’s easier to benefit from the rich Python standard library and avoid reinventing\nthe wheel, like the random.choice function.\n\nBut we have even more features", "_____no_output_____" ] ], [ [ "# because of __getitem__, our deck is now slicable\ndeck[1:5]", "_____no_output_____" ], [ "# because of __getitem__, is iterable\nfor card in deck:\n if card.rank == 'K':\n print(card)", "Card(rank='K', suit='spades')\nCard(rank='K', suit='diamonds')\nCard(rank='K', suit='clubs')\nCard(rank='K', suit='hearts')\n" ], [ "# iteration is often implicit hence if the collection has no __contains__ method\n# the in operator does a sequential scan.\n\nCard('Q', 'spades') in deck", "_____no_output_____" ], [ "Card('M', 'spades') in deck", "_____no_output_____" ] ], [ [ "we can also make use the build-in `sorted()` function. We just need to proved a function for providing the values of the cards. 
Here the logic is provided in `spades_high`.", "_____no_output_____" ] ], [ [ "suit_value = dict(spades=3, hearts=2, diamonds=1, clubs=0)\n\ndef spades_high(card):\n    rank_value = FrenchDeck.ranks.index(card.rank)\n    return rank_value*len(suit_value) + suit_value[card.suit] ", "_____no_output_____" ], [ "for card in sorted(deck, key=spades_high)[:10]:\n    print(card)", "Card(rank='2', suit='clubs')\nCard(rank='2', suit='diamonds')\nCard(rank='2', suit='hearts')\nCard(rank='2', suit='spades')\nCard(rank='3', suit='clubs')\nCard(rank='3', suit='diamonds')\nCard(rank='3', suit='hearts')\nCard(rank='3', suit='spades')\nCard(rank='4', suit='clubs')\nCard(rank='4', suit='diamonds')\n" ] ], [ [ "> Although FrenchDeck implicitly inherits from object, its functionality is not inherited, but comes from leveraging the data model and composition. By implementing the special methods `__len__` and `__getitem__`, our FrenchDeck behaves like a standard Python sequence, allowing it to benefit from core language features (e.g., iteration and slicing) and from the standard library, as shown by the examples using random.choice, reversed, and sorted. Thanks to composition, the `__len__` and `__getitem__` implementations can hand off all the work to a *list* object, `self._cards`.", "_____no_output_____" ], [ "## How special methods are used\n\nNormally you just define these special methods and invoke them through built-in operations like `len()`, `in`, and `[index]` instead of calling `object.__len__()` directly. This gives you a speedup in some cases and also plays nicely with the rest of the Python standard library, since everything interfaces with the same endpoints.", "_____no_output_____" ], [ "### Enumerating Numeric Types\n\nSpecial methods can also be used to respond to operators like +, -, etc. We will see an example with vector operations.", "_____no_output_____" ] ], [ [ "from math import hypot", "_____no_output_____" ], [ "class Vector:\n    \n    def __init__(self, x=0, y=0):\n        self.x = x\n        self.y = y\n    \n    def __repr__(self):\n        return 'Vector(%d, %d)' %(self.x, self.y)\n    \n    def __abs__(self):\n        return hypot(self.x, self.y)\n    \n    def __bool__(self):\n        return bool(self.x or self.y)\n    \n    def __add__(self, other):\n        x = self.x + other.x\n        y = self.y + other.y\n        \n        return Vector(x, y)\n    \n    def __mul__(self, scalar):\n        x = scalar * self.x\n        y = scalar * self.y\n        \n        return Vector(x, y)", "_____no_output_____" ], [ "v = Vector(3, 4)\na = Vector(0, 0)\nprint(v)\nprint(abs(v))\nprint(v*2)\nprint(v + a)", "Vector(3, 4)\n5.0\nVector(6, 8)\nVector(3, 4)\n" ] ], [ [ "As you can see, we implemented many special methods but we don't directly invoke them. The special methods are meant to be invoked by the interpreter most of the time, unless you are doing a lot of metaprogramming.", "_____no_output_____" ] ], [ [ "bool(a)", "_____no_output_____" ] ], [ [ "### String Representation\n\nWe use the `__repr__` special method to get the built-in string representation of the object for inspection (note its usage in the `Vector` object). There is also the related special method `__str__`, which is called by `str()` and is used to return a string for display to the end user. If you're only implementing one of them, stick with `__repr__`, since `print()` will fall back to it if `__str__` is not found.", "_____no_output_____" ], [ "### Arithmetic Operators\n\nIn the above example we have implemented `__add__` and `__mul__`. Note that in both cases we return a new object, reading from self and other. 
This is the expected behaviour.", "_____no_output_____" ], [ "### Boolean Value of a Custom Type\n\nIn Python, any object can be used in a boolean context. If neither `__bool__` nor `__len__` is implemented, the object is truthy by default. If `__bool__` is implemented, it is called; if not, Python calls `__len__` and checks whether the length is 0.", "_____no_output_____" ] ], [ [ "class Test:\n    def __init__(self, x):\n        self.x = x\n\nt = Test(0)\nt, bool(t)", "_____no_output_____" ], [ "class Test:\n    def __init__(self, x):\n        self.x = x\n    \n    def __bool__(self):\n        return bool(self.x)\n    \nt = Test(0)\nt, bool(t)", "_____no_output_____" ] ], [ [ "## Why len is Not a Method\n\n> Practicality beats purity\n\nFor built-in data types, `len` (like `abs`) has a shortcut implementation in CPython: it simply returns the length field stored in the underlying C struct. This makes it super fast for built-in types. You can also think of these as unary operations. ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ] ]
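A quick illustration of a point made in the notebook record above: one of its code comments notes that when a collection defines no `__contains__` method, the `in` operator falls back to a sequential scan over `__getitem__`. The sketch below shows how that hook could be added to the same `FrenchDeck`; the `_card_set` attribute is an assumption of this sketch and is not part of the original notebook.

```python
# Minimal sketch (not from the original notebook): adding __contains__ so that
# `card in deck` becomes a set lookup instead of a sequential scan.
import collections

Card = collections.namedtuple('Card', ['rank', 'suit'])

class FrenchDeck:
    ranks = [str(n) for n in range(2, 11)] + list('JQKA')
    suits = 'spades diamonds clubs hearts'.split()

    def __init__(self):
        self._cards = [Card(rank, suit) for suit in self.suits
                       for rank in self.ranks]
        self._card_set = set(self._cards)  # assumed helper, not in the notebook

    def __len__(self):
        return len(self._cards)

    def __getitem__(self, position):
        return self._cards[position]

    def __contains__(self, card):
        return card in self._card_set

deck = FrenchDeck()
print(Card('Q', 'spades') in deck)   # True, answered by __contains__
print(Card('M', 'spades') in deck)   # False
```

Everything except `__contains__` and `_card_set` is copied from the notebook; the new hook only changes how membership tests are answered, not the sequence behaviour.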
e7ff442699ef2b73a430f4f994d5ecca542aa880
808,497
ipynb
Jupyter Notebook
PCA_Classification.ipynb
diviramon/NBA-Rookie-Analytics
354465279256ec7d7155ae5000de2685c203278e
[ "OLDAP-2.2.1" ]
2
2020-12-09T02:47:26.000Z
2021-01-20T22:46:15.000Z
PCA_Classification.ipynb
diviramon/NBA-Rookie-Analytics
354465279256ec7d7155ae5000de2685c203278e
[ "OLDAP-2.2.1" ]
null
null
null
PCA_Classification.ipynb
diviramon/NBA-Rookie-Analytics
354465279256ec7d7155ae5000de2685c203278e
[ "OLDAP-2.2.1" ]
null
null
null
251.320174
146,654
0.883037
[ [ [ "<a href=\"https://colab.research.google.com/github/diviramon/NBA-Rookie-Analytics/blob/main/PCA_Classification.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "!rm -r sample_data/", "rm: cannot remove 'sample_data/': No such file or directory\n" ], [ "import pandas as pd\npd.set_option('display.max_columns', None)\nimport numpy as np\nfrom sklearn.decomposition import PCA\nfrom sklearn.preprocessing import StandardScaler\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom scipy.cluster.hierarchy import dendrogram, linkage\nfrom scipy.cluster.hierarchy import cophenet\nfrom scipy.spatial.distance import pdist # computing the distance\nfrom scipy.cluster.hierarchy import inconsistent\nfrom scipy.cluster.hierarchy import fcluster", "_____no_output_____" ] ], [ [ "## UTILS", "_____no_output_____" ] ], [ [ "# PCA class derived from skelean standard PCA package\n# code adapted from: https://github.com/A-Jyad/NBAPlayerClustering\nclass PCA_adv:\n def __init__(self, data, var_per):\n self.data = data\n self.pca = PCA(var_per, random_state = 0)\n self.PCA = self.pca.fit(self.Standard_Scaler_Preprocess().drop(['PLAYER', 'TEAM', 'POSITION'], axis = 1))\n \n def Standard_Scaler_Preprocess(self): \n std_scale = StandardScaler()\n std_scale_data = std_scale.fit_transform(self.data.drop(['PLAYER', 'TEAM', 'POSITION'], axis = 1))\n std_scale_data = pd.DataFrame(std_scale_data, columns = self.data.drop(['PLAYER', 'TEAM', 'POSITION'], axis = 1).columns.tolist())\n std_scale_data['PLAYER'] = self.data['PLAYER']\n std_scale_data['TEAM'] = self.data['TEAM']\n std_scale_data['POSITION'] = self.data['POSITION']\n return std_scale_data\n \n def PCA_name(self):\n PCA_name = []\n for i in range(1, self.PCA.n_components_ + 1):\n PCA_name += ['PC' + str(i)]\n return PCA_name\n \n def PCA_variance(self):\n pca_variance = pd.DataFrame({\"Variance Explained\" : self.PCA.explained_variance_,\n 'Percentage of Variance Explained' : self.PCA.explained_variance_ratio_}, index = self.PCA_name())\n pca_variance['Percentage of Variance Explained'] = (pca_variance['Percentage of Variance Explained'] * 100).round(0)\n pca_variance['Cumulative Percentage of Variance Explained'] = pca_variance['Percentage of Variance Explained'].cumsum()\n return pca_variance\n \n def PCA_transform(self, n):\n pca_data = self.pca.fit_transform(self.Standard_Scaler_Preprocess().drop(['PLAYER', 'TEAM', 'POSITION'], axis = 1))\n pca_data = pd.DataFrame(pca_data, columns = self.PCA_name())\n index = []\n for i in range(1, n+1):\n index += ['PC' + str(i)]\n pca_data = pca_data[index]\n pca_data['PLAYER'] = self.Standard_Scaler_Preprocess()['PLAYER']\n pca_data['TEAM'] = self.Standard_Scaler_Preprocess()['TEAM']\n pca_data['POSITION'] = self.Standard_Scaler_Preprocess()['POSITION']\n return pca_data\n \n def Heatmap(self): \n pca_eigen = pd.DataFrame(self.PCA.components_, columns = self.Standard_Scaler_Preprocess().drop(['PLAYER', 'TEAM', 'POSITION'], axis = 1).columns.tolist(), index = self.PCA_name()).T\n plt.figure(figsize = (10,10))\n sns.heatmap(pca_eigen.abs(), vmax = 0.5, vmin = 0)\n \n def PCA_sorted_eigen(self, PC):\n pca_eigen = pd.DataFrame(self.PCA.components_, columns = self.Standard_Scaler_Preprocess().drop(['PLAYER', 'TEAM', 'POSITION'], axis = 1).columns.tolist(), index = self.PCA_name()).T\n return pca_eigen.loc[pca_eigen[PC].abs().sort_values(ascending = False).index][PC]", "_____no_output_____" ], [ "# simple 
heat map function\ndef HeatMap(df, vert_min, vert_max):\n plt.figure(figsize = (10,10))\n sns.heatmap(df.corr(),\n vmin = vert_min, vmax = vert_max, center = 0,\n cmap = sns.diverging_palette(20, 220, n = 200),\n square = True)\n\n# utility function to normalize the players' data\ndef Standard_Scaler_Preprocess(data): \n std_scale = StandardScaler()\n std_scale_data = std_scale.fit_transform(data.drop(['PLAYER', 'TEAM', 'POSITION'], axis = 1))\n std_scale_data = pd.DataFrame(std_scale_data, columns = data.drop(['PLAYER', 'TEAM', 'POSITION'], axis = 1).columns.tolist())\n std_scale_data['PLAYER'] = data['PLAYER']\n std_scale_data['TEAM'] = data['TEAM']\n std_scale_data['POSITION'] = data['POSITION']\n return std_scale_data\n\n# Hierarchical Clustering class\n# code adapted from: https://github.com/A-Jyad/NBAPlayerClustering\nclass Cluster:\n def __init__(self, df, method):\n self.df = df\n self.method = method\n self.linked = linkage(self.df, self.method)\n\n # calculates cophenete value\n def cophenet_value(self):\n c, coph_dists = cophenet(self.linked, pdist(self.df))\n return c\n\n # denogram plotting function\n def dendrogram_truncated(self, n, y_min = 0, max_d = 0):\n plt.title('Hierarchical Clustering Dendrogram (truncated)')\n plt.xlabel('sample index')\n plt.ylabel('distance')\n dendro = dendrogram(\n self.linked,\n truncate_mode='lastp', # show only the last p merged clusters\n p=n, # show only the last p merged clusters\n leaf_rotation=90.,\n leaf_font_size=12.,\n show_contracted=True, # to get a distribution impression in truncated branches\n )\n\n for i, d, c in zip(dendro['icoord'], dendro['dcoord'], dendro['color_list']):\n x = 0.5 * sum(i[1:3])\n y = d[1]\n #if y > annotate_above:\n plt.plot(x, y, 'o', c=c)\n plt.annotate(\"%.3g\" % y, (x, y), xytext=(0, -5),\n textcoords='offset points',\n va='top', ha='center')\n if max_d:\n plt.axhline(y=max_d, c='k')\n\n plt.ylim(ymin = y_min)\n plt.show()\n\n def inconsistency(self):\n depth = 3\n incons = inconsistent(self.linked, depth)\n return incons[-15:]\n\n # silhoute and elbow plot \n def elbow_plot(self, cut = 0):\n last = self.linked[(-1*cut):, 2]\n last_rev = last[::-1]\n idxs = np.arange(1, len(last) + 1)\n plt.plot(idxs, last_rev)\n\n acceleration = np.diff(last, 2) # 2nd derivative of the distances\n self.acceleration_rev = acceleration[::-1]\n plt.plot(idxs[:-2] + 1, self.acceleration_rev)\n plt.show()\n \n def elbow_point(self):\n k = self.acceleration_rev.argmax() + 2 # if idx 0 is the max of this we want 2 clusters\n return k\n\n def create_cluster(self, max_d):\n clusters = fcluster(self.linked, max_d, criterion='distance')\n return clusters", "_____no_output_____" ] ], [ [ "## DATA LOADING", "_____no_output_____" ] ], [ [ "data = pd.read_csv('Data/career.csv') # csv file with the career averages of all players who played more than 10 seasons\ndata.drop(['Unnamed: 0'], axis =1, inplace=True) # csv conversion automatically creates an index column which is not needed\ndata.head()", "_____no_output_____" ] ], [ [ "## PCA Analysis", "_____no_output_____" ] ], [ [ "pca = PCA_adv(data, 0.89) # create PCA object that covers 89% of the variance\npca.PCA_variance()", "_____no_output_____" ], [ "pca_df = pca.PCA_transform(4) # run PCA for the first 4 components\npca.Heatmap() # heatmap of the PCs and variables", "_____no_output_____" ], [ "pca.PCA_sorted_eigen('PC1')[:10] # eigenvalues for PC1", "_____no_output_____" ], [ "pc1 = pca_df[['PLAYER','POSITION','PC1']].copy()\npc1.nlargest(10,'PC1') # players with largest PC1", 
"_____no_output_____" ], [ "pca.PCA_sorted_eigen('PC2')[:10] # eigenvalues for PC2", "_____no_output_____" ], [ "pc2 = pca_df[['PLAYER','POSITION','PC2']].copy()\npc2.nlargest(10,'PC2') # players with largest PC2", "_____no_output_____" ], [ "pca.PCA_sorted_eigen('PC3')[:10] # eigenvalues for PC3", "_____no_output_____" ], [ "pc3 = pca_df[['PLAYER','POSITION','PC3']].copy()\npc3.nlargest(10,'PC3') # players with largest PC3", "_____no_output_____" ], [ "pca.PCA_sorted_eigen('PC4')[:10] # eigenvalues for PC4", "_____no_output_____" ], [ "pc4 = pca_df[['PLAYER','POSITION','PC4']].copy()\npc4.nlargest(10,'PC4') # players with largest PC4", "_____no_output_____" ], [ "pca_df.head()", "_____no_output_____" ], [ "data_scaled = Standard_Scaler_Preprocess(pca_df) # normalize and standardize the PCA for clustering\ndata_scaled.head()", "_____no_output_____" ], [ "data_scaled.describe().round(1) # check PCs are standardized", "_____no_output_____" ], [ "num_data_scaled = data_scaled.drop(['PLAYER', 'POSITION', 'TEAM'], axis = 1) # keep numerical categories only\nnum_data_scaled.columns", "_____no_output_____" ] ], [ [ "## K-MEANS", "_____no_output_____" ] ], [ [ "# elbow test for K-means to predict appropiate number of clusters\nfrom sklearn.cluster import KMeans\nSum_of_squared_distances = []\nK = range(1,20)\nfor k in K:\n km = KMeans(n_clusters=k)\n km = km.fit(num_data_scaled)\n Sum_of_squared_distances.append(km.inertia_)\n\nplt.plot(K, Sum_of_squared_distances, 'bx-')\nplt.xlabel('k')\nplt.ylabel('Sum_of_squared_distances')\nplt.title('Elbow Method For Optimal k')\nplt.show()", "_____no_output_____" ], [ "# Silhouette test for K-means to predict appropiate number of clusters\nfrom sklearn.metrics import silhouette_score\n\nsil = []\nkmax = 10\n\n# dissimilarity would not be defined for a single cluster, thus, minimum number of clusters should be 2\nfor k in range(2, kmax+1):\n kmeans = KMeans(n_clusters = k).fit(num_data_scaled)\n labels = kmeans.labels_\n sil.append(silhouette_score(num_data_scaled, labels, metric = 'euclidean'))\n\nplt.plot(sil, 'bx-')\nplt.xlabel('k')\nplt.ylabel('Silhouette Score')\nplt.title('Silhouette Method For Optimal k')\nplt.show()", "_____no_output_____" ], [ "# Run K-means for 6 clusters\nX = num_data_scaled.copy()\nkmeans = KMeans(n_clusters=6)\nkmeans.fit(X)\ny_kmeans = kmeans.labels_\ncenters = kmeans.cluster_centers_", "_____no_output_____" ], [ "# Plot Results\nfrom mpl_toolkits.mplot3d import Axes3D\nimport numpy as np\nX['K-cluster'] = y_kmeans\nfig = plt.figure(figsize = (10,10))\nax = fig.add_subplot(111, projection='3d')\nfor i in range(6):\n x = np.array(X[X['K-cluster'] == i]['PC1'])\n y = np.array(X[X['K-cluster'] == i]['PC2'])\n z = np.array(X[X['K-cluster'] == i]['PC3'])\n ax.scatter(x, y, z, marker = 'o', s = 30)\nplt.title('K-Clusters Results')\nax.set_xlabel('PC1')\nax.set_ylabel('PC2')\nax.set_zlabel('PC3')\nax.legend([0,1,2,3,4,5])\nfor i in range(6): ax.scatter(centers[i][0],centers[i][1],centers[i][2],marker = 'o', s = 50,c='black') # plot the centers\nplt.show()", "_____no_output_____" ], [ "# assign clusters to the players\ndata_scaled_k = data_scaled.copy()\ndata_scaled_k['K-cluster'] = y_kmeans\n# Plot values per cluster\nplt.bar([0,1,2,3,4,5],data_scaled_k['K-cluster'].value_counts().sort_index())\nplt.xlabel('K-Cluster')\nplt.ylabel('Number of Players')\nplt.title('Player Distribution per Cluster')\nplt.show()", "_____no_output_____" ], [ "data_scaled_k['K-cluster'].value_counts().sort_index()", "_____no_output_____" ], [ "# heatmap for 
each cluster \nplt.figure(figsize = (10,10))\nsns.heatmap(data_scaled_k.groupby('K-cluster').mean(), vmin = -1.5, vmax = 1.5, center = 0, cmap = sns.diverging_palette(20, 220, n = 200), square = True)", "_____no_output_____" ], [ "# Find Representative Players in the clusters\ndata_scaled_k[data_scaled_k['K-cluster'] == 5][['PLAYER','POSITION','K-cluster','PC3']].sort_values(['PC3'],ascending=False).head(10)", "_____no_output_____" ], [ "# Save players classification for rookie cost analysis \nresults = data_scaled_k[['PLAYER','K-cluster']].copy()\nresults = results.rename({'K-cluster' : 'CLUSTER'}, axis = 1)\nresults.to_csv('results-k-cluster.csv')", "_____no_output_____" ] ], [ [ "## Complete Hierarchy", "_____no_output_____" ] ], [ [ "data_scaled_c = data_scaled.copy()\n# run complete linkage clustering\ncomplete = Cluster(num_data_scaled, 'complete')\ncomplete.dendrogram_truncated(15, 5, 6.2) # plot dendrogram", "_____no_output_____" ], [ "complete.elbow_plot(15) # elbow and silhouette plot", "_____no_output_____" ], [ "# Calculate Complete Clusters\ndata_scaled_c['complete_cluster'] = complete.create_cluster(6)\ndata_scaled_c['complete_cluster'].value_counts().sort_index()", "_____no_output_____" ], [ "# 3D plot results\nX = data_scaled_c.copy()\nfig = plt.figure(figsize = (10,10))\nax = fig.add_subplot(111, projection='3d')\nfor i in range(1,6):\n x = np.array(X[X['complete_cluster'] == i]['PC1'])\n y = np.array(X[X['complete_cluster'] == i]['PC2'])\n z = np.array(X[X['complete_cluster'] == i]['PC3'])\n ax.scatter(x, y, z, marker = 'o', s = 30)\nplt.title('Complete-Cluster Results')\nax.set_xlabel('PC1')\nax.set_ylabel('PC2')\nax.set_zlabel('PC3')\nax.legend([1,2,3,4,5])\nplt.show()", "_____no_output_____" ], [ "# Plot values per cluster\nplt.bar([1,2,3,4,5],data_scaled_c['complete_cluster'].value_counts().sort_index())\nplt.xlabel('Complete-Cluster')\nplt.ylabel('Number of Players')\nplt.title('Player Distribution per Cluster')\nplt.show()", "_____no_output_____" ], [ "# heatmap plot\nplt.figure(figsize = (10,10))\nsns.heatmap(data_scaled_c.groupby('complete_cluster').mean(), vmin = -1.5, vmax = 1.5, center = 0, cmap = sns.diverging_palette(20, 220, n = 200), square = True)", "_____no_output_____" ], [ "# get representative players per cluster\ndata_scaled_c[data_scaled_c['complete_cluster'] == 5][['PLAYER','POSITION','complete_cluster','PC4']].sort_values(['PC4'],ascending=False).head(10)", "_____no_output_____" ], [ "# Save results\nres = data_scaled_c[['PLAYER','complete_cluster']].copy()\nres = res.rename({'complete_cluster' : 'CLUSTER'}, axis = 1)\nres.to_csv('results-complete.csv')", "_____no_output_____" ] ], [ [ "## SINGLE", "_____no_output_____" ] ], [ [ "data_scaled_s = data_scaled.copy()\n# run single linkage clustering\nsingle = Cluster(num_data_scaled, 'single')\nsingle.dendrogram_truncated(15) # plot dendrogram", "_____no_output_____" ], [ "single.elbow_plot(15) # elbow and silhouette plot", "_____no_output_____" ], [ "# Inadequate for the given data (all players fall in one cluster)\ndata_scaled_s['single_cluster'] = single.create_cluster(1.5)\ndata_scaled_s['single_cluster'].value_counts()", "_____no_output_____" ] ], [ [ "## Average", "_____no_output_____" ] ], [ [ "data_scaled_a = data_scaled.copy()\n# run average linkage clustering\naverage = Cluster(num_data_scaled, 'average')\naverage.dendrogram_truncated(15, 3, 4) # plot dendrogram", "0.6462838564130776\n" ], [ "average.elbow_plot(15) # silhouette and elbow plot", "_____no_output_____" ], [ "# Inadequate for 
the given data\ndata_scaled_a['average_cluster'] = average.create_cluster(3.5)\ndata_scaled_a['average_cluster'].value_counts()", "_____no_output_____" ] ], [ [ "## WARD method", "_____no_output_____" ] ], [ [ "# calculate ward linkage\ndata_scaled_w = data_scaled.copy()\nward = Cluster(num_data_scaled, 'ward')\nward.dendrogram_truncated(15, 5, 11)", "_____no_output_____" ], [ "# calculate elbow and silhouette plots\nward.elbow_plot(15)", "_____no_output_____" ], [ "# Cluster the data\ndata_scaled_w['ward_cluster'] = ward.create_cluster(10)\ndata_scaled_w['ward_cluster'].value_counts().sort_index()", "_____no_output_____" ], [ "# 3D plot results\nX = data_scaled_w.copy()\nfig = plt.figure(figsize = (10,10))\nax = fig.add_subplot(111, projection='3d')\nfor i in range(1,8):\n x = np.array(X[X['ward_cluster'] == i]['PC1'])\n y = np.array(X[X['ward_cluster'] == i]['PC2'])\n z = np.array(X[X['ward_cluster'] == i]['PC3'])\n ax.scatter(x, y, z, marker = 'o', s = 30)\nplt.title('Ward-Cluster Results')\nax.set_xlabel('PC1')\nax.set_ylabel('PC2')\nax.set_zlabel('PC3')\nax.legend([1,2,3,4,5,6,7])\nplt.show()", "_____no_output_____" ], [ "# Plot values per cluster\nplt.bar([1,2,3,4,5,6,7],data_scaled_w['ward_cluster'].value_counts().sort_index())\nplt.xlabel('Ward-Cluster')\nplt.ylabel('Number of Players')\nplt.title('Player Distribution per Cluster')\nplt.show()", "_____no_output_____" ], [ "# plot heatmap of PCs per Cluster\nplt.figure(figsize = (10,10))\nsns.heatmap(data_scaled_w.groupby('ward_cluster').mean(), vmin = -1.5, vmax = 1.5, center = 0, cmap = sns.diverging_palette(20, 220, n = 200), square = True)", "_____no_output_____" ], [ "# results are very similar to K-means so discard", "_____no_output_____" ] ], [ [ "## END", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
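As a compact recap of the pipeline the notebook above builds (standardize, keep principal components covering roughly 89% of the variance, then K-means with the six clusters the notebook settles on), here is a minimal sketch on synthetic stand-in data. The array shape and the random data are assumptions for illustration only, not the notebook's actual player statistics.

```python
# Condensed sketch of the notebook's PCA + K-means flow on synthetic data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                 # stand-in for per-player stats

X_std = StandardScaler().fit_transform(X)      # PCA is sensitive to scale
pca = PCA(n_components=0.89, random_state=0)   # keep PCs covering ~89% variance
X_pca = pca.fit_transform(X_std)
print(pca.n_components_, pca.explained_variance_ratio_.cumsum().round(2))

km = KMeans(n_clusters=6, random_state=0, n_init=10).fit(X_pca)
print(np.bincount(km.labels_))                 # players per cluster
```

Passing a float between 0 and 1 as `n_components` tells scikit-learn to keep just enough components to reach that fraction of explained variance, which is the same trick the notebook uses with `PCA(0.89)`.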
e7ff4658862aea2c021ae7727205feef781e0bca
6,416
ipynb
Jupyter Notebook
Crash Course on Python/pygrams_notebooks/utf-8''C1M6L1_Putting_It_All_Together.ipynb
garynth41/Google-IT-Automation-with-Python-Professional-Certificate
6a800b5b995c05f74c824545260207d19877baf7
[ "MIT" ]
2
2020-01-18T16:01:24.000Z
2020-02-29T19:27:17.000Z
Crash Course on Python/pygrams_notebooks/utf-8''C1M6L1_Putting_It_All_Together.ipynb
garynth41/Google-IT-Automation-with-Python-Professional-Certificate
6a800b5b995c05f74c824545260207d19877baf7
[ "MIT" ]
null
null
null
Crash Course on Python/pygrams_notebooks/utf-8''C1M6L1_Putting_It_All_Together.ipynb
garynth41/Google-IT-Automation-with-Python-Professional-Certificate
6a800b5b995c05f74c824545260207d19877baf7
[ "MIT" ]
4
2020-08-17T16:49:06.000Z
2022-02-14T06:45:29.000Z
31.762376
370
0.572319
[ [ [ "# Practice Notebook - Putting It All Together", "_____no_output_____" ], [ "Hello, coders! Below we have code similar to what we wrote in the last video. Go ahead and run the following cell that defines our `get_event_date`, `current_users` and `generate_report` methods.", "_____no_output_____" ] ], [ [ "def get_event_date(event):\n return event.date\n\ndef current_users(events):\n events.sort(key=get_event_date)\n machines={}\n for event in events:\n if event.machine not in machines:\n machines[event.machine]=set()\n if event.type ==\"login\":\n machines[event.machine].add(event.user)\n elif event.type==\"logout\":\n machines[event.machine].remove(event.user)\n return machines\ndef generate_report(machines):\n for machine,users in machines.items():\n if len(users)>0:user_list=\",\".join(users)\n print(\"{}: {}\".format(machines,user_list))", "_____no_output_____" ], [ "def get_event_date(event):\n return event.date\n\ndef current_users(events):\n events.sort(key=get_event_date)\n machines={}\n for event in events:\n if event.machine not in machines:\n machines[event.machine]=set()\n if event.type ==\"login\":\n machines[event.machine].add(event.user)\n elif event.type==\"logout\":\n machines[event.machine].remove(event.user)\n return machines\ndef generate_report(machines):\n for machine,users in machines.items():\n if len(users)>0:user_list=\",\".join(users)\n print(\"{}: {}\".format(machines,user_list))", "_____no_output_____" ] ], [ [ "No output should be generated from running the custom function definitions above. To check that our code is doing everything it's supposed to do, we need an `Event` class. The code in the next cell below initializes our `Event` class. Go ahead and run this cell next.", "_____no_output_____" ] ], [ [ "class Event:\n def __init__(self, event_date, event_type, machine_name, user):\n self.date = event_date\n self.type = event_type\n self.machine = machine_name\n self.user = user", "_____no_output_____" ] ], [ [ "Ok, we have an `Event` class that has a constructor and sets the necessary attributes. Next let's create some events and add them to a list by running the following cell.", "_____no_output_____" ] ], [ [ "events = [\n Event('2020-01-21 12:45:56', 'login', 'myworkstation.local', 'jordan'),\n Event('2020-01-22 15:53:42', 'logout', 'webserver.local', 'jordan'),\n Event('2020-01-21 18:53:21', 'login', 'webserver.local', 'lane'),\n Event('2020-01-22 10:25:34', 'logout', 'myworkstation.local', 'jordan'),\n Event('2020-01-21 08:20:01', 'login', 'webserver.local', 'jordan'),\n Event('2020-01-23 11:24:35', 'logout', 'mailserver.local', 'chris'),\n]", "_____no_output_____" ] ], [ [ "Now we've got a bunch of events. Let's feed these events into our `custom_users` function and see what happens.", "_____no_output_____" ] ], [ [ "users = current_users(events)\nprint(users)", "{'webserver.local': {'lane', 'jordan'}, 'myworkstation.local': set()}\n" ] ], [ [ "Uh oh. The code in the previous cell produces an error message. This is because we have a user in our `events` list that was logged out of a machine he was not logged into. Do you see which user this is? Make edits to the first cell containing our custom function definitions to see if you can fix this error message. There may be more than one way to do so. \n<br><br>\nRemember when you have finished making your edits, rerun that cell as well as the cell that feeds the `events` list into our `custom_users` function to see whether the error message has been fixed. 
Once the error message has been cleared and you have correctly outputted a dictionary with machine names as keys, your custom functions are properly finished. Great!", "_____no_output_____" ], [ "Now try generating the report by running the next cell.", "_____no_output_____" ] ], [ [ "generate_report(users)", "{'webserver.local': {'lane', 'jordan'}, 'myworkstation.local': set()}: lane,jordan\n{'webserver.local': {'lane', 'jordan'}, 'myworkstation.local': set()}: lane,jordan\n" ] ], [ [ "Whoop whoop! Success! The error message has been cleared and the desired output is produced. You are all done with this practice notebook. Way to go!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
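The practice notebook above deliberately leaves a bug for the reader: 'chris' logs out of mailserver.local without ever logging in, so `set.remove()` raises a KeyError, and `generate_report` prints the whole dictionary instead of one machine per line. The notebook itself says there may be more than one way to fix it; the version below is only one possible sketch, not the official solution.

```python
# One possible fix (not the notebook's official answer): tolerate logouts from
# machines the user never logged into, and report one machine per line.
def get_event_date(event):
    return event.date

def current_users(events):
    events.sort(key=get_event_date)
    machines = {}
    for event in events:
        machines.setdefault(event.machine, set())
        if event.type == "login":
            machines[event.machine].add(event.user)
        elif event.type == "logout":
            machines[event.machine].discard(event.user)  # no KeyError if absent
    return machines

def generate_report(machines):
    for machine, users in machines.items():
        if len(users) > 0:
            user_list = ", ".join(users)
            print("{}: {}".format(machine, user_list))
```

Using `discard` instead of `remove` is the smallest change that clears the error; checking membership before removing, or skipping unmatched logouts explicitly, would work just as well.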
e7ff7ed2ecd1cc9e138a1330113b4f82352065c0
15,056
ipynb
Jupyter Notebook
Day2/.ipynb_checkpoints/Notebook 2 - TripAdvisor_sol-checkpoint.ipynb
ALaks96/CenterParcs_NLP_SentimentAnalysis_Webscraping
6cb7185ac8eeffe906ed4e24705f37f8bdfaead7
[ "MIT" ]
null
null
null
Day2/.ipynb_checkpoints/Notebook 2 - TripAdvisor_sol-checkpoint.ipynb
ALaks96/CenterParcs_NLP_SentimentAnalysis_Webscraping
6cb7185ac8eeffe906ed4e24705f37f8bdfaead7
[ "MIT" ]
null
null
null
Day2/.ipynb_checkpoints/Notebook 2 - TripAdvisor_sol-checkpoint.ipynb
ALaks96/CenterParcs_NLP_SentimentAnalysis_Webscraping
6cb7185ac8eeffe906ed4e24705f37f8bdfaead7
[ "MIT" ]
null
null
null
36.279518
333
0.498937
[ [ [ "# Master Data Science for Business - Data Science Consulting - Session 2 \n\n# Notebook 2: \n\n# Web Scraping with Scrapy: Getting reviews from TripAdvisor\n\nTo Do (note for Cap): <br>\n-Remove some parts of the code so that the students have to complete them by themselves ", "_____no_output_____" ], [ "## 1. Importing packages", "_____no_output_____" ] ], [ [ "import scrapy\nfrom scrapy.crawler import CrawlerProcess\nfrom scrapy.spiders import CrawlSpider, Rule\nfrom scrapy.selector import Selector\nimport sys\nfrom scrapy.http import Request\nfrom scrapy.linkextractors import LinkExtractor\nimport json\nimport logging\nimport pandas as pd", "_____no_output_____" ] ], [ [ "## 2. Some classes and functions", "_____no_output_____" ] ], [ [ "# -*- coding: utf-8 -*-\n\n# Define here the models for your scraped items\n#\n# See documentation in:\n# https://doc.scrapy.org/en/latest/topics/items.html\n\nclass HotelreviewsItem(scrapy.Item):\n    # define the fields for your item here like:\n    rating = scrapy.Field()\n    review = scrapy.Field()\n    title = scrapy.Field()\n    trip_date = scrapy.Field()\n    trip_type = scrapy.Field()\n    published_date = scrapy.Field()\n    hotel_type = scrapy.Field()\n    hotel_name = scrapy.Field()\n    price_range = scrapy.Field()\n    reviewer_id = scrapy.Field()\n    review_language = scrapy.Field()", "_____no_output_____" ], [ "def user_info_splitter(raw_user_info):\n    \"\"\"Split a raw user-info string into a dictionary.\n\n    :param raw_user_info: raw user info string scraped from the page\n    :return: dictionary of parsed elements\n    \"\"\"\n    # NOTE: get_convertible_elements_as_dic is assumed to be defined elsewhere\n    # in the course material; it is not part of this notebook.\n    user_info = {}\n\n    splited_info = raw_user_info.split()\n    for element in splited_info:\n        converted_element = get_convertible_elements_as_dic(element)\n        if converted_element:\n            user_info[converted_element[0]] = converted_element[1]\n\n    return user_info", "_____no_output_____" ] ], [ [ "## 2. Creating the JSON pipeline ", "_____no_output_____" ] ], [ [ "# JSON Lines pipeline: you can rename \"tripadvisor.jl\" to the name of your choice\nclass JsonWriterPipeline(object):\n\n    def open_spider(self, spider):\n        self.file = open('tripadvisor.jl', 'w')\n\n    def close_spider(self, spider):\n        self.file.close()\n\n    def process_item(self, item, spider):\n        line = json.dumps(dict(item)) + \"\\n\"\n        self.file.write(line)\n        return item", "_____no_output_____" ] ], [ [ "## 3. Spider\n\nWhen you open a TripAdvisor page, you will see 5 reviews per page. Reviews are not fully displayed on that page, so you have to open them (i.e. follow the link of each review so that Scrapy scrapes that page as well). <br>\nThis means we will use 2 parsing functions: <br>\n-The first one will go to the page of the parc and get the links of the reviews <br>\n-The second one will go to the page of each review and scrape it using the parse_item() method. <br>\n\n<b>To Do</b>: Complete the code with XPath expressions to get the proper items to scrape. Once you are done, you can \"Restart and run all cells\" to see if everything is working correctly. 
", "_____no_output_____" ] ], [ [ "class MySpider(CrawlSpider):\n name = 'BasicSpider'\n domain_url = \"https://www.tripadvisor.com\"\n # allowed_domains = [\"https://www.tripadvisor.com\"]\n\n start_urls = [\n \"https://www.tripadvisor.fr/ShowUserReviews-g1573379-d1573383-r629218790-Center_Parcs_Les_Trois_Forets-Hattigny_Moselle_Grand_Est.html\",\n \"https://www.tripadvisor.fr/ShowUserReviews-g1573379-d1573383-r645720538-Center_Parcs_Les_Trois_Forets-Hattigny_Moselle_Grand_Est.html\"]\n \n #Custom settings to modify settings usually found in the settings.py file \n custom_settings = {\n 'LOG_LEVEL': logging.WARNING,\n 'ITEM_PIPELINES': {'__main__.JsonWriterPipeline': 1}, # Used for pipeline 1\n 'FEED_FORMAT':'json', # Used for pipeline 2\n 'FEED_URI': 'tripadvisor.json' # Used for pipeline 2\n }\n\n def parse(self, response):\n\n item = HotelreviewsItem()\n\n item[\"reviewer_id\"] = next(iter(response.xpath(\n \"//div[contains(@class,'prw_reviews_resp_sur_h_featured_review')]/div/div/div/div/div[contains(@class,'prw_reviews_user_links_hs')]/span/@data-memberid\").extract()),\n None)\n item[\"review_language\"] = next(iter(response.xpath(\n \"//div[contains(@class,'prw_reviews_resp_sur_h_featured_review')]/div/div/div/div/div[contains(@class,'prw_reviews_user_links_hs')]/span/@data-language\").extract()),\n None)\n\n review_url_on_page = response.xpath('//script[@type=\"application/ld+json\"]/text()').extract()\n review = eval(review_url_on_page[0])\n\n item[\"review\"] = review[\"reviewBody\"].replace(\"\\\\n\", \"\")\n item[\"title\"] = review[\"name\"]\n item[\"rating\"] = review[\"reviewRating\"][\"ratingValue\"]\n item[\"hotel_type\"] = review[\"itemReviewed\"][\"@type\"]\n item[\"hotel_name\"] = review[\"itemReviewed\"][\"name\"]\n item[\"price_range\"] = review[\"itemReviewed\"][\"priceRange\"]\n try:\n item[\"published_date\"] = review[\"datePublished\"]\n except KeyError:\n\n item[\"published_date\"] = next(iter(response.xpath(\n f\"//div[contains(@id,'review_{review_id}')]/div/div/span[@class='ratingDate']/@title\"\"\").extract()),\n None)\n\n item[\"trip_type\"] = next(iter(response.xpath(\"//div[contains(@class,\"\n \"'prw_reviews_resp_sur_h_featured_review')]/div/div/div/div/div\"\n \"/div/div/div[contains(@class,'noRatings')]/text()\").extract()),\n None)\n\n try:\n item[\"trip_date\"] = next(iter(response.xpath(\"//div[contains(@class,\"\n \"'prw_reviews_resp_sur_h_featured_review')]/div/div/div/div[\"\n \"contains(@class,'prw_reviews_stay_date_hsx')]/text()\").extract(\n\n )), None)\n\n except:\n\n item[\"trip_date\"] = next(iter(response.xpath(\n \"//div[contains(@id,'review_538163624')]/div/div/div[@data-prwidget-name='reviews_stay_date_hsx']/text()\").extract()),\n None)\n\n yield item\n", "_____no_output_____" ] ], [ [ "## 4. 
Crawling", "_____no_output_____" ] ], [ [ "process = CrawlerProcess({\n 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'\n})\n\nprocess.crawl(MySpider)\nprocess.start()", "2019-01-14 16:37:44 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: scrapybot)\n2019-01-14 16:37:44 [scrapy.utils.log] INFO: Versions: lxml 4.3.0.0, libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.9.0, Python 3.7.2 (default, Jan 2 2019, 17:07:39) [MSC v.1915 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.1.1a 20 Nov 2018), cryptography 2.4.2, Platform Windows-10-10.0.16299-SP0\n2019-01-14 16:37:44 [scrapy.crawler] INFO: Overridden settings: {'FEED_FORMAT': 'json', 'FEED_URI': 'tripadvisor.json', 'LOG_LEVEL': 30, 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'}\n" ] ], [ [ "## 5. Importing and reading data scraped\n\nIf you've succeeded, you should see here a dataframe with 2 entries corresponding to the 2 first reviews of the parc, and 11 columns for each item scraped. ", "_____no_output_____" ] ], [ [ "dfjson = pd.read_json('tripadvisor.json')\n#Previewing DF\ndfjson.head()", "_____no_output_____" ], [ "dfjson.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2 entries, 0 to 1\nData columns (total 11 columns):\nhotel_name 2 non-null object\nhotel_type 2 non-null object\nprice_range 2 non-null object\npublished_date 2 non-null object\nrating 2 non-null int64\nreview 2 non-null object\nreview_language 2 non-null object\nreviewer_id 2 non-null object\ntitle 2 non-null object\ntrip_date 2 non-null object\ntrip_type 1 non-null object\ndtypes: int64(1), object(10)\nmemory usage: 256.0+ bytes\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
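A small reading note on the scraping notebook above: it writes the scraped items twice, once as JSON Lines to `tripadvisor.jl` through the custom pipeline and once as a regular JSON array to `tripadvisor.json` through the `FEED_FORMAT`/`FEED_URI` settings. The notebook only loads the latter; assuming the crawl has already produced both files, the JSON Lines output can be loaded as well, as in this sketch.

```python
import pandas as pd

# Assumes the spider has already run and written both output files.
df_feed = pd.read_json('tripadvisor.json')              # JSON array from the feed exporter
df_lines = pd.read_json('tripadvisor.jl', lines=True)   # one JSON object per line, from the pipeline
print(df_feed.shape, df_lines.shape)
```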
e7ff9e9fee3efd231a8a01b6f1867a2765855554
967,784
ipynb
Jupyter Notebook
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
4bcb4a4825ba22f582aa7211bce8d1422a47c5f0
[ "MIT" ]
null
null
null
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
4bcb4a4825ba22f582aa7211bce8d1422a47c5f0
[ "MIT" ]
null
null
null
FinRL_single_stock_trading.ipynb
jomach/FinRL-Library
4bcb4a4825ba22f582aa7211bce8d1422a47c5f0
[ "MIT" ]
null
null
null
243.406439
398,330
0.87599
[ [ [ "<a href=\"https://colab.research.google.com/github/AI4Finance-LLC/FinRL-Library/blob/master/FinRL_single_stock_trading.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Deep Reinforcement Learning for Stock Trading from Scratch: Single Stock Trading\n\nTutorials to use OpenAI DRL to trade single stock in one Jupyter Notebook | Presented at NeurIPS 2020: Deep RL Workshop\n\n* This blog is based on our paper: FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance, presented at NeurIPS 2020: Deep RL Workshop.\n* Check out medium blog for detailed explanations: https://towardsdatascience.com/finrl-for-quantitative-finance-tutorial-for-single-stock-trading-37d6d7c30aac\n* Please report any issues to our Github: https://github.com/AI4Finance-LLC/FinRL-Library/issues\n\n", "_____no_output_____" ], [ "## Content", "_____no_output_____" ], [ "* [1. Problem Definition](#0)\n* [2. Getting Started - Load Python packages](#1)\n * [2.1. Install Packages](#1.1) \n * [2.2. Check Additional Packages](#1.2)\n * [2.3. Import Packages](#1.3)\n * [2.4. Create Folders](#1.4)\n* [3. Download Data](#2)\n* [4. Preprocess Data](#3) \n * [4.1. Technical Indicators](#3.1)\n * [4.2. Perform Feature Engineering](#3.2)\n* [5.Build Environment](#4) \n * [5.1. Training & Trade Data Split](#4.1)\n * [5.2. User-defined Environment](#4.2) \n * [5.3. Initialize Environment](#4.3) \n* [6.Implement DRL Algorithms](#5) \n* [7.Backtesting Performance](#6) \n * [7.1. BackTestStats](#6.1)\n * [7.2. BackTestPlot](#6.2) \n * [7.3. Baseline Stats](#6.3) \n * [7.3. Compare to Stock Market Index](#6.4) ", "_____no_output_____" ], [ "<a id='0'></a>\n# Part 1. Problem Definition", "_____no_output_____" ], [ "This problem is to design an automated trading solution for single stock trading. We model the stock trading process as a Markov Decision Process (MDP). We then formulate our trading goal as a maximization problem.\n\nThe components of the reinforcement learning environment are:\n\n\n* Action: The action space describes the allowed actions that the agent interacts with the\nenvironment. Normally, a ∈ A includes three actions: a ∈ {−1, 0, 1}, where −1, 0, 1 represent\nselling, holding, and buying one stock. Also, an action can be carried upon multiple shares. We use\nan action space {−k, ..., −1, 0, 1, ..., k}, where k denotes the number of shares. For example, \"Buy\n10 shares of AAPL\" or \"Sell 10 shares of AAPL\" are 10 or −10, respectively\n\n* Reward function: r(s, a, s′) is the incentive mechanism for an agent to learn a better action. The change of the portfolio value when action a is taken at state s and arriving at new state s', i.e., r(s, a, s′) = v′ − v, where v′ and v represent the portfolio\nvalues at state s′ and s, respectively\n\n* State: The state space describes the observations that the agent receives from the environment. Just as a human trader needs to analyze various information before executing a trade, so\nour trading agent observes many different features to better learn in an interactive environment.\n\n* Environment: single stock trading for AAPL\n\n\nThe data of the single stock that we will be using for this case study is obtained from Yahoo Finance API. The data contains Open-High-Low-Close price and volume.\n\nWe use Apple Inc. 
stock: AAPL as an example throughout this article, because it is one of the most popular and profitable stocks.", "_____no_output_____" ], [ "<a id='1'></a>\n# Part 2. Getting Started- Load Python Packages", "_____no_output_____" ], [ "<a id='1.1'></a>\n## 2.1. Install all the packages through FinRL library\n", "_____no_output_____" ] ], [ [ "## install finrl library\n!pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git", "Collecting git+https://github.com/AI4Finance-LLC/FinRL-Library.git\n Cloning https://github.com/AI4Finance-LLC/FinRL-Library.git to /tmp/pip-req-build-gpm5bcb4\n Running command git clone -q https://github.com/AI4Finance-LLC/FinRL-Library.git /tmp/pip-req-build-gpm5bcb4\nRequirement already satisfied (use --upgrade to upgrade): finrl==0.0.1 from git+https://github.com/AI4Finance-LLC/FinRL-Library.git in /usr/local/lib/python3.6/dist-packages\nRequirement already satisfied: numpy<1.19.0,>=1.16.0 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (1.18.5)\nRequirement already satisfied: pandas==1.1.4 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (1.1.4)\nRequirement already satisfied: stockstats in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (0.3.2)\nRequirement already satisfied: yfinance in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (0.1.55)\nRequirement already satisfied: scikit-learn==0.21.0 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (0.21.0)\nRequirement already satisfied: gym==0.15.3 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (0.15.3)\nRequirement already satisfied: stable-baselines[mpi] in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (2.10.1)\nRequirement already satisfied: tensorflow==1.15.4 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (1.15.4)\nRequirement already satisfied: joblib==0.15.1 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (0.15.1)\nRequirement already satisfied: matplotlib==3.2.1 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (3.2.1)\nRequirement already satisfied: pytest<6.0.0,>=5.3.2 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (5.4.3)\nRequirement already satisfied: setuptools<42.0.0,>=41.4.0 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (41.6.0)\nRequirement already satisfied: wheel<0.34.0,>=0.33.6 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (0.33.6)\nRequirement already satisfied: pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2 from git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2 in /usr/local/lib/python3.6/dist-packages (from finrl==0.0.1) (0.9.2+75.g4b901f6)\nRequirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.6/dist-packages (from pandas==1.1.4->finrl==0.0.1) (2.8.1)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas==1.1.4->finrl==0.0.1) (2018.9)\nRequirement already satisfied: int-date>=0.1.7 in /usr/local/lib/python3.6/dist-packages (from stockstats->finrl==0.0.1) (0.1.8)\nRequirement already satisfied: multitasking>=0.0.7 in /usr/local/lib/python3.6/dist-packages (from yfinance->finrl==0.0.1) (0.0.9)\nRequirement already satisfied: lxml>=4.5.1 in /usr/local/lib/python3.6/dist-packages (from yfinance->finrl==0.0.1) (4.6.2)\nRequirement already satisfied: requests>=2.20 in /usr/local/lib/python3.6/dist-packages (from yfinance->finrl==0.0.1) (2.23.0)\nRequirement already satisfied: scipy>=0.17.0 in 
/usr/local/lib/python3.6/dist-packages (from scikit-learn==0.21.0->finrl==0.0.1) (1.4.1)\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from gym==0.15.3->finrl==0.0.1) (1.15.0)\nRequirement already satisfied: cloudpickle~=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym==0.15.3->finrl==0.0.1) (1.2.2)\nRequirement already satisfied: pyglet<=1.3.2,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym==0.15.3->finrl==0.0.1) (1.3.2)\nRequirement already satisfied: opencv-python in /usr/local/lib/python3.6/dist-packages (from stable-baselines[mpi]->finrl==0.0.1) (4.1.2.30)\nRequirement already satisfied: mpi4py; extra == \"mpi\" in /usr/local/lib/python3.6/dist-packages (from stable-baselines[mpi]->finrl==0.0.1) (3.0.3)\nRequirement already satisfied: google-pasta>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (0.2.0)\nRequirement already satisfied: tensorflow-estimator==1.15.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (1.15.1)\nRequirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (1.1.2)\nRequirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (3.3.0)\nRequirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (1.33.2)\nRequirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (0.10.0)\nRequirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (0.8.1)\nRequirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (1.1.0)\nRequirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (1.12.1)\nRequirement already satisfied: keras-applications>=1.0.8 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (1.0.8)\nRequirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (3.12.4)\nRequirement already satisfied: tensorboard<1.16.0,>=1.15.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (1.15.0)\nRequirement already satisfied: gast==0.2.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4->finrl==0.0.1) (0.2.2)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib==3.2.1->finrl==0.0.1) (2.4.7)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib==3.2.1->finrl==0.0.1) (1.3.1)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib==3.2.1->finrl==0.0.1) (0.10.0)\nRequirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from pytest<6.0.0,>=5.3.2->finrl==0.0.1) (20.4)\nRequirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.6/dist-packages (from pytest<6.0.0,>=5.3.2->finrl==0.0.1) (20.3.0)\nRequirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.6/dist-packages (from pytest<6.0.0,>=5.3.2->finrl==0.0.1) (8.6.0)\nRequirement already satisfied: pluggy<1.0,>=0.12 in 
/usr/local/lib/python3.6/dist-packages (from pytest<6.0.0,>=5.3.2->finrl==0.0.1) (0.13.1)\nRequirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from pytest<6.0.0,>=5.3.2->finrl==0.0.1) (0.2.5)\nRequirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.6/dist-packages (from pytest<6.0.0,>=5.3.2->finrl==0.0.1) (1.9.0)\nRequirement already satisfied: importlib-metadata>=0.12; python_version < \"3.8\" in /usr/local/lib/python3.6/dist-packages (from pytest<6.0.0,>=5.3.2->finrl==0.0.1) (2.0.0)\nRequirement already satisfied: empyrical>=0.5.0 in /usr/local/lib/python3.6/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (0.5.5)\nRequirement already satisfied: seaborn>=0.7.1 in /usr/local/lib/python3.6/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (0.11.0)\nRequirement already satisfied: ipython>=3.2.3 in /usr/local/lib/python3.6/dist-packages (from pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (5.5.0)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->yfinance->finrl==0.0.1) (2.10)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->yfinance->finrl==0.0.1) (1.24.3)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->yfinance->finrl==0.0.1) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->yfinance->finrl==0.0.1) (2020.11.8)\nRequirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.3.2,>=1.2.0->gym==0.15.3->finrl==0.0.1) (0.16.0)\nRequirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.8->tensorflow==1.15.4->finrl==0.0.1) (2.10.0)\nRequirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.4->finrl==0.0.1) (3.3.3)\nRequirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.4->finrl==0.0.1) (1.0.1)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata>=0.12; python_version < \"3.8\"->pytest<6.0.0,>=5.3.2->finrl==0.0.1) (3.4.0)\nRequirement already satisfied: pandas-datareader>=0.2 in /usr/local/lib/python3.6/dist-packages (from empyrical>=0.5.0->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (0.9.0)\nRequirement already satisfied: pygments in /usr/local/lib/python3.6/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (2.6.1)\nRequirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.6/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (4.3.3)\nRequirement already satisfied: pexpect; sys_platform != \"win32\" in /usr/local/lib/python3.6/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (4.8.0)\nRequirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.6/dist-packages (from ipython>=3.2.3->pyfolio@ 
git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (1.0.18)\nRequirement already satisfied: pickleshare in /usr/local/lib/python3.6/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (0.7.5)\nRequirement already satisfied: simplegeneric>0.8 in /usr/local/lib/python3.6/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (0.8.1)\nRequirement already satisfied: decorator in /usr/local/lib/python3.6/dist-packages (from ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (4.4.2)\nRequirement already satisfied: ipython-genutils in /usr/local/lib/python3.6/dist-packages (from traitlets>=4.2->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (0.2.0)\nRequirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.6/dist-packages (from pexpect; sys_platform != \"win32\"->ipython>=3.2.3->pyfolio@ git+https://github.com/quantopian/pyfolio.git#egg=pyfolio-0.9.2->finrl==0.0.1) (0.6.0)\nBuilding wheels for collected packages: finrl\n Building wheel for finrl (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for finrl: filename=finrl-0.0.1-cp36-none-any.whl size=24270 sha256=18b4aee2509abb83c51b85c8a644bfc7d48c424223d88c2876f7a185aa940241\n Stored in directory: /tmp/pip-ephem-wheel-cache-hs_4dvki/wheels/9c/19/bf/c644def96612df1ad42c94d5304966797eaa3221dffc5efe0b\nSuccessfully built finrl\n" ] ], [ [ "\n<a id='1.2'></a>\n## 2.2. Check if the additional packages needed are present, if not install them. \n* Yahoo Finance API\n* pandas\n* numpy\n* matplotlib\n* stockstats\n* OpenAI gym\n* stable-baselines\n* tensorflow\n* pyfolio", "_____no_output_____" ] ], [ [ "import pkg_resources\nimport pip\ninstalledPackages = {pkg.key for pkg in pkg_resources.working_set}\nrequired = {'yfinance', 'pandas', 'matplotlib', 'stockstats','stable-baselines','gym','tensorflow'}\nmissing = required - installedPackages\nif missing:\n !pip install yfinance\n !pip install pandas\n !pip install matplotlib\n !pip install stockstats\n !pip install gym\n !pip install stable-baselines[mpi]\n !pip install tensorflow==1.15.4\n", "Requirement already satisfied: yfinance in /usr/local/lib/python3.6/dist-packages (0.1.55)\nRequirement already satisfied: lxml>=4.5.1 in /usr/local/lib/python3.6/dist-packages (from yfinance) (4.6.1)\nRequirement already satisfied: requests>=2.20 in /usr/local/lib/python3.6/dist-packages (from yfinance) (2.23.0)\nRequirement already satisfied: multitasking>=0.0.7 in /usr/local/lib/python3.6/dist-packages (from yfinance) (0.0.9)\nRequirement already satisfied: pandas>=0.24 in /usr/local/lib/python3.6/dist-packages (from yfinance) (1.1.4)\nRequirement already satisfied: numpy>=1.15 in /usr/local/lib/python3.6/dist-packages (from yfinance) (1.18.5)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->yfinance) (2020.11.8)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->yfinance) (2.10)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->yfinance) (1.24.3)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.20->yfinance) (3.0.4)\nRequirement already satisfied: 
python-dateutil>=2.7.3 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.24->yfinance) (2.8.1)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.24->yfinance) (2018.9)\nRequirement already satisfied: six>=1.10.0 in 
/usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (1.15.0)\nRequirement already satisfied: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (0.8.1)\nRequirement already satisfied: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (1.33.2)\nRequirement already satisfied: absl-py>=0.7.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (0.10.0)\nRequirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (3.3.0)\nRequirement already satisfied: tensorboard<1.16.0,>=1.15.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (1.15.0)\nRequirement already satisfied: numpy<1.19.0,>=1.16.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (1.18.5)\nRequirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (1.1.0)\nRequirement already satisfied: protobuf>=3.6.1 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (3.12.4)\nRequirement already satisfied: google-pasta>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow==1.15.4) (0.2.0)\nRequirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras-applications>=1.0.8->tensorflow==1.15.4) (2.10.0)\nRequirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.4) (3.3.3)\nRequirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.4) (41.6.0)\nRequirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.4) (1.0.1)\nRequirement already satisfied: importlib-metadata; python_version < \"3.8\" in /usr/local/lib/python3.6/dist-packages (from markdown>=2.6.8->tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.4) (2.0.0)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata; python_version < \"3.8\"->markdown>=2.6.8->tensorboard<1.16.0,>=1.15.0->tensorflow==1.15.4) (3.4.0)\n" ] ], [ [ "<a id='1.3'></a>\n## 2.3. Import Packages", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nmatplotlib.use('Agg')\nimport datetime\n\nfrom finrl.config import config\nfrom finrl.marketdata.yahoodownloader import YahooDownloader\nfrom finrl.preprocessing.preprocessors import FeatureEngineer\nfrom finrl.preprocessing.data import data_split\nfrom finrl.env.environment import EnvSetup\nfrom finrl.env.EnvMultipleStock_train import StockEnvTrain\nfrom finrl.env.EnvMultipleStock_trade import StockEnvTrade\nfrom finrl.model.models import DRLAgent\nfrom finrl.trade.backtest import BackTestStats, BaselineStats, BackTestPlot\n\n", "_____no_output_____" ], [ "#Diable the warnings\nimport warnings\nwarnings.filterwarnings('ignore')", "_____no_output_____" ] ], [ [ "<a id='1.4'></a>\n## 2.4. 
Create Folders", "_____no_output_____" ] ], [ [ "import os\nif not os.path.exists(\"./\" + config.DATA_SAVE_DIR):\n os.makedirs(\"./\" + config.DATA_SAVE_DIR)\nif not os.path.exists(\"./\" + config.TRAINED_MODEL_DIR):\n os.makedirs(\"./\" + config.TRAINED_MODEL_DIR)\nif not os.path.exists(\"./\" + config.TENSORBOARD_LOG_DIR):\n os.makedirs(\"./\" + config.TENSORBOARD_LOG_DIR)\nif not os.path.exists(\"./\" + config.RESULTS_DIR):\n os.makedirs(\"./\" + config.RESULTS_DIR)", "_____no_output_____" ] ], [ [ "<a id='2'></a>\n# Part 3. Download Data\nYahoo Finance is a website that provides stock data, financial news, financial reports, etc. All the data provided by Yahoo Finance is free.\n* FinRL uses a class **YahooDownloader** to fetch data from Yahoo Finance API\n* Call Limit: Using the Public API (without authentication), you are limited to 2,000 requests per hour per IP (or up to a total of 48,000 requests a day).\n", "_____no_output_____" ], [ "\n\n-----\nclass YahooDownloader:\n Provides methods for retrieving daily stock data from\n Yahoo Finance API\n\n Attributes\n ----------\n start_date : str\n start date of the data (modified from config.py)\n end_date : str\n end date of the data (modified from config.py)\n ticker_list : list\n a list of stock tickers (modified from config.py)\n\n Methods\n -------\n fetch_data()\n Fetches data from yahoo API\n", "_____no_output_____" ] ], [ [ "# from config.py start_date is a string\nconfig.START_DATE", "_____no_output_____" ], [ "# from config.py end_date is a string\nconfig.END_DATE", "_____no_output_____" ] ], [ [ "ticker_list is a list of stock tickers, in a single stock trading case, the list contains only 1 ticker", "_____no_output_____" ] ], [ [ "# Download and save the data in a pandas DataFrame:\ndata_df = YahooDownloader(start_date = config.START_DATE,\n end_date = config.END_DATE,\n ticker_list = ['AAPL']).fetch_data()", "\r[*********************100%***********************] 1 of 1 completed\nShape of DataFrame: (2956, 7)\n" ], [ "data_df.shape", "_____no_output_____" ], [ "data_df.head()", "_____no_output_____" ] ], [ [ "<a id='3'></a>\n# Part 4. Preprocess Data\nData preprocessing is a crucial step for training a high quality machine learning model. We need to check for missing data and do feature engineering in order to convert the data into a model-ready state.\n* FinRL uses a class **FeatureEngineer** to preprocess the data\n* Add **technical indicators**. 
In practical trading, various information needs to be taken into account, for example the historical stock prices, current holding shares, technical indicators, etc.\n", "_____no_output_____" ], [ "class FeatureEngineer:\nProvides methods for preprocessing the stock price data\n\n Attributes\n ----------\n df: DataFrame\n data downloaded from Yahoo API\n feature_number : int\n number of features we used\n use_technical_indicator : boolean\n we technical indicator or not\n use_turbulence : boolean\n use turbulence index or not\n\n Methods\n -------\n preprocess_data()\n main method to do the feature engineering", "_____no_output_____" ], [ "<a id='3.1'></a>\n\n## 4.1 Technical Indicators\n* FinRL uses stockstats to calcualte technical indicators such as **Moving Average Convergence Divergence (MACD)**, **Relative Strength Index (RSI)**, **Average Directional Index (ADX)**, **Commodity Channel Index (CCI)** and other various indicators and stats.\n* **stockstats**: supplies a wrapper StockDataFrame based on the **pandas.DataFrame** with inline stock statistics/indicators support.\n\n", "_____no_output_____" ] ], [ [ "## we store the stockstats technical indicator column names in config.py\ntech_indicator_list=config.TECHNICAL_INDICATORS_LIST\nprint(tech_indicator_list)", "['macd', 'rsi_30', 'cci_30', 'dx_30']\n" ], [ "## user can add more technical indicators\n## check https://github.com/jealous/stockstats for different names\ntech_indicator_list=tech_indicator_list+['kdjk','open_2_sma','boll','close_10.0_le_5_c','wr_10','dma','trix']\nprint(tech_indicator_list)", "['macd', 'rsi_30', 'cci_30', 'dx_30', 'kdjk', 'open_2_sma', 'boll', 'close_10.0_le_5_c', 'wr_10', 'dma', 'trix']\n" ] ], [ [ "<a id='3.2'></a>\n## 4.2 Perform Feature Engineering", "_____no_output_____" ] ], [ [ "data_df = FeatureEngineer(data_df.copy(),\n use_technical_indicator=True,\n tech_indicator_list = tech_indicator_list,\n use_turbulence=False,\n user_defined_feature = True).preprocess_data()", "Successfully added technical indicators\nSuccessfully added user defined features\n" ], [ "data_df.head()", "_____no_output_____" ] ], [ [ "<a id='4'></a>\n# Part 5. Build Environment\nConsidering the stochastic and interactive nature of the automated stock trading tasks, a financial task is modeled as a **Markov Decision Process (MDP)** problem. The training process involves observing stock price change, taking an action and reward's calculation to have the agent adjusting its strategy accordingly. By interacting with the environment, the trading agent will derive a trading strategy with the maximized rewards as time proceeds.\n\nOur trading environments, based on OpenAI Gym framework, simulate live stock markets with real market data according to the principle of time-driven simulation.\n\nThe action space describes the allowed actions that the agent interacts with the environment. Normally, action a includes three actions: {-1, 0, 1}, where -1, 0, 1 represent selling, holding, and buying one share. Also, an action can be carried upon multiple shares. We use an action space {-k,…,-1, 0, 1, …, k}, where k denotes the number of shares to buy and -k denotes the number of shares to sell. For example, \"Buy 10 shares of AAPL\" or \"Sell 10 shares of AAPL\" are 10 or -10, respectively. 
The continuous action space needs to be normalized to [-1, 1], since the policy is defined on a Gaussian distribution, which needs to be normalized and symmetric.", "_____no_output_____" ], [ "<a id='4.1'></a>\n## 5.1 Training & Trade data split\n* Training: 2009-01-01 to 2018-12-31\n* Trade: 2019-01-01 to 2020-09-30", "_____no_output_____" ] ], [ [ "train = data_split(data_df, start = config.START_DATE, end = config.START_TRADE_DATE)\ntrade = data_split(data_df, start = config.START_TRADE_DATE, end = config.END_DATE)\n#train = data_split(data_df, start = '2009-01-01', end = '2019-01-01')\n#trade = data_split(data_df, start = '2019-01-01', end = '2020-09-30')\n", "_____no_output_____" ], [ "## data normalization, this part is optional, have little impact\nfeaures_list = list(train.columns)\nfeaures_list.remove('date')\nfeaures_list.remove('tic')\nfeaures_list.remove('close')\nprint(feaures_list)\nfrom sklearn import preprocessing\ndata_normaliser = preprocessing.StandardScaler()\ntrain[feaures_list] = data_normaliser.fit_transform(train[feaures_list])\ntrade[feaures_list] = data_normaliser.fit_transform(trade[feaures_list])", "['open', 'high', 'low', 'volume', 'macd', 'rsi_30', 'cci_30', 'dx_30', 'kdjk', 'open_2_sma', 'boll', 'close_10.0_le_5_c', 'wr_10', 'dma', 'trix', 'daily_return']\n" ] ], [ [ "<a id='4.2'></a>\n## 5.2 User-defined Environment: a simulation environment class ", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nfrom gym.utils import seeding\nimport gym\nfrom gym import spaces\nimport matplotlib\nmatplotlib.use('Agg')\nimport matplotlib.pyplot as plt\n\nclass SingleStockEnv(gym.Env):\n \"\"\"A single stock trading environment for OpenAI gym\n\n Attributes\n ----------\n df: DataFrame\n input data\n stock_dim : int\n number of unique stocks\n hmax : int\n maximum number of shares to trade\n initial_amount : int\n start money\n transaction_cost_pct: float\n transaction cost percentage per trade\n reward_scaling: float\n scaling factor for reward, good for training\n state_space: int\n the dimension of input features\n action_space: int\n equals stock dimension\n tech_indicator_list: list\n a list of technical indicator names\n turbulence_threshold: int\n a threshold to control risk aversion\n day: int\n an increment number to control date\n\n Methods\n -------\n _sell_stock()\n perform sell action based on the sign of the action\n _buy_stock()\n perform buy action based on the sign of the action\n step()\n at each step the agent will return actions, then \n we will calculate the reward, and return the next observation.\n reset()\n reset the environment\n render()\n use render to return other functions\n save_asset_memory()\n return account value at each time step\n save_action_memory()\n return actions/positions at each time step\n \n\n \"\"\"\n metadata = {'render.modes': ['human']}\n\n def __init__(self, \n df,\n stock_dim,\n hmax,\n initial_amount,\n transaction_cost_pct,\n reward_scaling,\n state_space,\n action_space,\n tech_indicator_list,\n turbulence_threshold,\n day = 0):\n #super(StockEnv, self).__init__()\n #money = 10 , scope = 1\n self.day = day\n self.df = df\n self.stock_dim = stock_dim\n self.hmax = hmax\n self.initial_amount = initial_amount\n self.transaction_cost_pct =transaction_cost_pct\n self.reward_scaling = reward_scaling\n self.state_space = state_space\n self.action_space = action_space\n self.tech_indicator_list = tech_indicator_list\n\n # action_space normalization and shape is self.stock_dim\n self.action_space = 
spaces.Box(low = -1, high = 1,shape = (self.action_space,)) \n # Shape = 181: [Current Balance]+[prices 1-30]+[owned shares 1-30] \n # +[macd 1-30]+ [rsi 1-30] + [cci 1-30] + [adx 1-30]\n self.observation_space = spaces.Box(low=0, high=np.inf, shape = (self.state_space,))\n # load data from a pandas dataframe\n self.data = self.df.loc[self.day,:]\n self.terminal = False \n self.turbulence_threshold = turbulence_threshold \n # initalize state: inital amount + close price + shares + technical indicators + other features\n self.state = [self.initial_amount] + \\\n [self.data.close] + \\\n [0]*self.stock_dim + \\\n sum([[self.data[tech]] for tech in self.tech_indicator_list ], [])+ \\\n [self.data.open] + \\\n [self.data.high] + \\\n [self.data.low] +\\\n [self.data.daily_return] \n # initialize reward\n self.reward = 0\n self.cost = 0\n # memorize all the total balance change\n self.asset_memory = [self.initial_amount]\n self.rewards_memory = []\n self.actions_memory=[]\n self.date_memory=[self.data.date]\n self.close_price_memory = [self.data.close]\n self.trades = 0\n self._seed()\n\n\n def _sell_stock(self, index, action):\n # perform sell action based on the sign of the action\n if self.state[index+self.stock_dim+1] > 0:\n #update balance\n self.state[0] += \\\n self.state[index+1]*min(abs(action),self.state[index+self.stock_dim+1]) * \\\n (1- self.transaction_cost_pct)\n\n self.state[index+self.stock_dim+1] -= min(abs(action), self.state[index+self.stock_dim+1])\n self.cost +=self.state[index+1]*min(abs(action),self.state[index+self.stock_dim+1]) * \\\n self.transaction_cost_pct\n self.trades+=1\n else:\n pass\n\n \n def _buy_stock(self, index, action):\n # perform buy action based on the sign of the action\n available_amount = self.state[0] // self.state[index+1]\n # print('available_amount:{}'.format(available_amount))\n\n #update balance\n self.state[0] -= self.state[index+1]*min(available_amount, action)* \\\n (1+ self.transaction_cost_pct)\n\n self.state[index+self.stock_dim+1] += min(available_amount, action)\n\n self.cost+=self.state[index+1]*min(available_amount, action)* \\\n self.transaction_cost_pct\n self.trades+=1\n \n def step(self, actions):\n # print(self.day)\n self.terminal = self.day >= len(self.df.index.unique())-1\n # print(actions)\n\n if self.terminal:\n #plt.plot(self.asset_memory,'r')\n #plt.savefig('results/account_value_train.png')\n #plt.close()\n end_total_asset = self.state[0]+ \\\n sum(np.array(self.state[1:(self.stock_dim+1)])*np.array(self.state[(self.stock_dim+1):(self.stock_dim*2+1)]))\n\n print(\"begin_total_asset:{}\".format(self.asset_memory[0])) \n print(\"end_total_asset:{}\".format(end_total_asset))\n df_total_value = pd.DataFrame(self.asset_memory)\n #df_total_value.to_csv('results/account_value_train.csv')\n print(\"total_reward:{}\".format(self.state[0]+sum(np.array(self.state[1:(self.stock_dim+1)])*np.array(self.state[(self.stock_dim+1):(self.stock_dim*2+1)]))- self.initial_amount ))\n print(\"total_cost: \", self.cost)\n print(\"total_trades: \", self.trades)\n df_total_value.columns = ['account_value']\n df_total_value['daily_return']=df_total_value.pct_change(1)\n if df_total_value['daily_return'].std() !=0:\n sharpe = (252**0.5)*df_total_value['daily_return'].mean()/ \\\n df_total_value['daily_return'].std()\n print(\"Sharpe: \",sharpe)\n print(\"=================================\")\n df_rewards = pd.DataFrame(self.rewards_memory)\n #df_rewards.to_csv('results/account_rewards_train.csv')\n \n \n return self.state, self.reward, 
self.terminal,{}\n\n else:\n #print(actions)\n actions = actions * self.hmax\n self.actions_memory.append(actions)\n #actions = (actions.astype(int))\n \n begin_total_asset = self.state[0]+ \\\n sum(np.array(self.state[1:(self.stock_dim+1)])*np.array(self.state[(self.stock_dim+1):(self.stock_dim*2+1)]))\n #print(\"begin_total_asset:{}\".format(begin_total_asset))\n \n argsort_actions = np.argsort(actions)\n \n sell_index = argsort_actions[:np.where(actions < 0)[0].shape[0]]\n buy_index = argsort_actions[::-1][:np.where(actions > 0)[0].shape[0]]\n\n for index in sell_index:\n # print('take sell action'.format(actions[index]))\n self._sell_stock(index, actions[index])\n\n for index in buy_index:\n # print('take buy action: {}'.format(actions[index]))\n self._buy_stock(index, actions[index])\n\n self.day += 1\n self.data = self.df.loc[self.day,:] \n #load next state\n # print(\"stock_shares:{}\".format(self.state[29:]))\n self.state = [self.state[0]] + \\\n [self.data.close] + \\\n list(self.state[(self.stock_dim+1):(self.stock_dim*2+1)]) + \\\n sum([[self.data[tech]] for tech in self.tech_indicator_list ], [])+ \\\n [self.data.open] + \\\n [self.data.high] + \\\n [self.data.low] +\\\n [self.data.daily_return] \n \n end_total_asset = self.state[0]+ \\\n sum(np.array(self.state[1:(self.stock_dim+1)])*np.array(self.state[(self.stock_dim+1):(self.stock_dim*2+1)]))\n self.asset_memory.append(end_total_asset)\n self.date_memory.append(self.data.date)\n self.close_price_memory.append(self.data.close)\n\n #print(\"end_total_asset:{}\".format(end_total_asset))\n \n self.reward = end_total_asset - begin_total_asset \n # print(\"step_reward:{}\".format(self.reward))\n self.rewards_memory.append(self.reward)\n \n self.reward = self.reward*self.reward_scaling\n\n\n\n return self.state, self.reward, self.terminal, {}\n\n def reset(self):\n self.asset_memory = [self.initial_amount]\n self.day = 0\n self.data = self.df.loc[self.day,:]\n self.cost = 0\n self.trades = 0\n self.terminal = False \n self.rewards_memory = []\n self.actions_memory=[]\n self.date_memory=[self.data.date]\n #initiate state\n self.state = [self.initial_amount] + \\\n [self.data.close] + \\\n [0]*self.stock_dim + \\\n sum([[self.data[tech]] for tech in self.tech_indicator_list ], [])+ \\\n [self.data.open] + \\\n [self.data.high] + \\\n [self.data.low] +\\\n [self.data.daily_return] \n return self.state\n \n def render(self, mode='human'):\n return self.state\n \n def save_asset_memory(self):\n date_list = self.date_memory\n asset_list = self.asset_memory\n #print(len(date_list))\n #print(len(asset_list))\n df_account_value = pd.DataFrame({'date':date_list,'account_value':asset_list})\n return df_account_value\n\n def save_action_memory(self):\n # date and close price length must match actions length\n date_list = self.date_memory[:-1]\n close_price_list = self.close_price_memory[:-1]\n\n action_list = self.actions_memory\n df_actions = pd.DataFrame({'date':date_list,'actions':action_list,'close_price':close_price_list})\n return df_actions\n\n def _seed(self, seed=None):\n self.np_random, seed = seeding.np_random(seed)\n return [seed]", "_____no_output_____" ] ], [ [ "<a id='4.3'></a>\n## 5.3 Initialize Environment\n* **stock dimension**: the number of unique stock tickers we use\n* **hmax**: the maximum amount of shares to buy or sell\n* **initial amount**: the amount of money we use to trade in the begining\n* **transaction cost percentage**: a per share rate for every share trade\n* **tech_indicator_list**: a list of technical 
indicator names (modified from config.py)", "_____no_output_____" ] ], [ [ "## we store the stockstats technical indicator column names in config.py\n## check https://github.com/jealous/stockstats for different names\ntech_indicator_list", "_____no_output_____" ], [ "# the stock dimension is 1, because we only use the price data of AAPL.\nlen(train.tic.unique())", "_____no_output_____" ], [ "# account balance + close price + shares + technical indicators + open-high-low-price + 1 returns\nstock_dimension = len(train.tic.unique())\nstate_space = 1 + 2*stock_dimension + len(tech_indicator_list)*stock_dimension + 4*stock_dimension\nprint(state_space)\n", "18\n" ], [ "env_setup = EnvSetup(stock_dim = stock_dimension,\n state_space = state_space,\n hmax = 200,\n initial_amount = 100000,\n transaction_cost_pct = 0.001,\n tech_indicator_list = tech_indicator_list)", "_____no_output_____" ], [ "env_train = env_setup.create_env_training(data = train,\n env_class = SingleStockEnv)", "_____no_output_____" ], [ "train.head()", "_____no_output_____" ] ], [ [ "<a id='5'></a>\n# Part 6: Implement DRL Algorithms\n* The implementation of the DRL algorithms are based on **OpenAI Baselines** and **Stable Baselines**. Stable Baselines is a fork of OpenAI Baselines, with a major structural refactoring, and code cleanups.\n* FinRL library includes fine-tuned standard DRL algorithms, such as DQN, DDPG,\nMulti-Agent DDPG, PPO, SAC, A2C and TD3. We also allow users to\ndesign their own DRL algorithms by adapting these DRL algorithms.", "_____no_output_____" ] ], [ [ "agent = DRLAgent(env = env_train)", "_____no_output_____" ] ], [ [ "### Model Training: 5 models, A2C DDPG, PPO, TD3, SAC\n\n", "_____no_output_____" ], [ "### Model 1: A2C", "_____no_output_____" ] ], [ [ "## default hyperparameters in config file\nconfig.A2C_PARAMS", "_____no_output_____" ], [ "print(\"==============Model Training===========\")\nnow = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')\na2c_params_tuning = {'n_steps':5, \n\t\t\t 'ent_coef':0.005, \n\t\t\t 'learning_rate':0.0007,\n\t\t\t 'verbose':0,\n\t\t\t 'timesteps':100000}\nmodel_a2c = agent.train_A2C(model_name = \"A2C_{}\".format(now), model_params = a2c_params_tuning)", "==============Model Training===========\nbegin_total_asset:100000\nend_total_asset:176934.7576968735\ntotal_reward:76934.75769687351\ntotal_cost: 5882.835153967686\ntotal_trades: 2484\nSharpe: 0.46981434691347806\n=================================\nbegin_total_asset:100000\nend_total_asset:595867.5745766863\ntotal_reward:495867.57457668625\ntotal_cost: 4290.078180151586\ntotal_trades: 2514\nSharpe: 0.8764031127847676\n=================================\nbegin_total_asset:100000\nend_total_asset:583671.8077524664\ntotal_reward:483671.8077524664\ntotal_cost: 5838.791503323599\ntotal_trades: 2512\nSharpe: 0.8828870827729837\n=================================\nbegin_total_asset:100000\nend_total_asset:637429.0815745457\ntotal_reward:537429.0815745457\ntotal_cost: 3895.962820358061\ntotal_trades: 2514\nSharpe: 0.8993083850920852\n=================================\nbegin_total_asset:100000\nend_total_asset:766699.1715777694\ntotal_reward:666699.1715777694\ntotal_cost: 1336.049787657923\ntotal_trades: 2515\nSharpe: 0.9528759647152936\n=================================\nbegin_total_asset:100000\nend_total_asset:882677.1870489779\ntotal_reward:782677.1870489779\ntotal_cost: 785.3824416674332\ntotal_trades: 2515\nSharpe: 
1.000739173295064\n=================================\nbegin_total_asset:100000\nend_total_asset:927423.8880478856\ntotal_reward:827423.8880478856\ntotal_cost: 254.9934727955073\ntotal_trades: 2515\nSharpe: 1.0182559497960086\n=================================\nbegin_total_asset:100000\nend_total_asset:1003931.2516248588\ntotal_reward:903931.2516248588\ntotal_cost: 103.18390643660977\ntotal_trades: 2515\nSharpe: 1.0458180695295847\n=================================\nbegin_total_asset:100000\nend_total_asset:1034917.0496185564\ntotal_reward:934917.0496185564\ntotal_cost: 115.83752941795201\ntotal_trades: 2515\nSharpe: 1.0560390646001476\n=================================\nbegin_total_asset:100000\nend_total_asset:1028252.0867060601\ntotal_reward:928252.0867060601\ntotal_cost: 504.6352044569054\ntotal_trades: 2515\nSharpe: 1.0539729580189\n=================================\nbegin_total_asset:100000\nend_total_asset:1012919.8832652074\ntotal_reward:912919.8832652074\ntotal_cost: 1087.4567894795407\ntotal_trades: 2515\nSharpe: 1.049357005305756\n=================================\nbegin_total_asset:100000\nend_total_asset:1009170.4684736083\ntotal_reward:909170.4684736083\ntotal_cost: 330.15603324727704\ntotal_trades: 2515\nSharpe: 1.0477097038943333\n=================================\nbegin_total_asset:100000\nend_total_asset:1008728.6930000533\ntotal_reward:908728.6930000533\ntotal_cost: 105.3839081911078\ntotal_trades: 2515\nSharpe: 1.0473237020347772\n=================================\nbegin_total_asset:100000\nend_total_asset:1066405.9457223369\ntotal_reward:966405.9457223369\ntotal_cost: 99.93001295428193\ntotal_trades: 2515\nSharpe: 1.0661702887529563\n=================================\nbegin_total_asset:100000\nend_total_asset:1076095.1269021085\ntotal_reward:976095.1269021085\ntotal_cost: 99.8999331866183\ntotal_trades: 2515\nSharpe: 1.0691763688716032\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1076202.7779122659\ntotal_reward:976202.7779122659\ntotal_cost: 99.89889871461293\ntotal_trades: 2515\nSharpe: 1.0692246929592815\n=================================\nbegin_total_asset:100000\nend_total_asset:1076713.6513732625\ntotal_reward:976713.6513732625\ntotal_cost: 99.89764339307739\ntotal_trades: 2515\nSharpe: 1.0693742840547629\n=================================\nbegin_total_asset:100000\nend_total_asset:1073821.6024768997\ntotal_reward:973821.6024768997\ntotal_cost: 99.89993767998253\ntotal_trades: 2515\nSharpe: 1.0684508094123852\n=================================\nbegin_total_asset:100000\nend_total_asset:1071677.1316402173\ntotal_reward:971677.1316402173\ntotal_cost: 99.89588509830196\ntotal_trades: 2515\nSharpe: 1.0678225871360378\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1073289.289490049\ntotal_reward:973289.289490049\ntotal_cost: 99.89958180492658\ntotal_trades: 2515\nSharpe: 1.0683035175424689\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 
1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1075447.4353602997\ntotal_reward:975447.4353602997\ntotal_cost: 99.89794734264501\ntotal_trades: 2515\nSharpe: 1.0689847753454724\n=================================\nbegin_total_asset:100000\nend_total_asset:1049675.3015436474\ntotal_reward:949675.3015436474\ntotal_cost: 100.54921497903216\ntotal_trades: 2515\nSharpe: 1.060809128824055\n=================================\nbegin_total_asset:100000\nend_total_asset:1047590.4577336748\ntotal_reward:947590.4577336748\ntotal_cost: 101.89706915307313\ntotal_trades: 2514\nSharpe: 1.0602217979823982\n=================================\nbegin_total_asset:100000\nend_total_asset:1047776.4501636802\ntotal_reward:947776.4501636802\ntotal_cost: 102.07005234048698\ntotal_trades: 2515\nSharpe: 1.0602399076152227\n=================================\nbegin_total_asset:100000\nend_total_asset:1012040.9935228077\ntotal_reward:912040.9935228077\ntotal_cost: 106.2468007798095\ntotal_trades: 2515\nSharpe: 1.0486006596980257\n=================================\nbegin_total_asset:100000\nend_total_asset:981578.9094083403\ntotal_reward:881578.9094083403\ntotal_cost: 110.2152610478695\ntotal_trades: 2514\nSharpe: 1.0376627607165896\n=================================\nbegin_total_asset:100000\nend_total_asset:1011185.4051029399\ntotal_reward:911185.4051029399\ntotal_cost: 106.32009020648678\ntotal_trades: 2515\nSharpe: 1.0481405595104392\n=================================\nbegin_total_asset:100000\nend_total_asset:921929.9658751409\ntotal_reward:821929.9658751409\ntotal_cost: 116.26213817031766\ntotal_trades: 2513\nSharpe: 1.0158525285971\n=================================\nbegin_total_asset:100000\nend_total_asset:937270.7163539234\ntotal_reward:837270.7163539234\ntotal_cost: 116.24364382860685\ntotal_trades: 2515\nSharpe: 1.0219615653157366\n=================================\nbegin_total_asset:100000\nend_total_asset:1006007.6539997341\ntotal_reward:906007.6539997341\ntotal_cost: 106.56197241422906\ntotal_trades: 2515\nSharpe: 1.0462902027048853\n=================================\nbegin_total_asset:100000\nend_total_asset:988280.4663097259\ntotal_reward:888280.4663097259\ntotal_cost: 107.06134741607173\ntotal_trades: 2515\nSharpe: 1.0401128283240537\n=================================\nTraining time (A2C): 3.6911093990008035 minutes\n" ] ], [ [ "### Model 2: DDPG", "_____no_output_____" ] ], [ [ "## default hyperparameters in config file\nconfig.DDPG_PARAMS", 
"_____no_output_____" ], [ "print(\"==============Model Training===========\")\nnow = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')\nddpg_params_tuning = {\n 'batch_size': 128,\n\t\t\t 'buffer_size':100000, \n\t\t\t 'verbose':0,\n\t\t\t 'timesteps':50000}\nmodel_ddpg = agent.train_DDPG(model_name = \"DDPG_{}\".format(now), model_params = ddpg_params_tuning)", "==============Model Training===========\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ddpg/policies.py:136: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse keras.layers.Dense instead.\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/tf_util.py:449: The name tf.get_collection is deprecated. Please use tf.compat.v1.get_collection instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/tf_util.py:449: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ddpg/ddpg.py:94: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ddpg/ddpg.py:444: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/tf_util.py:432: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.\n\nbegin_total_asset:100000\nend_total_asset:99995.7058569457\ntotal_reward:-4.294143054299639\ntotal_cost: 0.17149700678428698\ntotal_trades: 10\nSharpe: -0.4393937514363672\n=================================\nbegin_total_asset:100000\nend_total_asset:125223.8481070335\ntotal_reward:25223.848107033496\ntotal_cost: 693.701142432137\ntotal_trades: 1159\nSharpe: 0.22317075597890754\n=================================\nbegin_total_asset:100000\nend_total_asset:78872.9656911464\ntotal_reward:-21127.034308853603\ntotal_cost: 354.44841880019516\ntotal_trades: 270\nSharpe: -0.31473719368550507\n=================================\nbegin_total_asset:100000\nend_total_asset:101105.50020441337\ntotal_reward:1105.5002044133726\ntotal_cost: 158.41803119077085\ntotal_trades: 523\nSharpe: 0.05295084331155536\n=================================\nbegin_total_asset:100000\nend_total_asset:92841.32190924165\ntotal_reward:-7158.678090758345\ntotal_cost: 285.19241356424914\ntotal_trades: 441\nSharpe: -0.04400567053352091\n=================================\nbegin_total_asset:100000\nend_total_asset:100098.01839813044\ntotal_reward:98.01839813044353\ntotal_cost: 317.5804545814511\ntotal_trades: 529\nSharpe: 0.08613603512851835\n=================================\nbegin_total_asset:100000\nend_total_asset:92739.4599723879\ntotal_reward:-7260.540027612107\ntotal_cost: 191.08645151072778\ntotal_trades: 309\nSharpe: -0.05614602909156426\n=================================\nbegin_total_asset:100000\nend_total_asset:184718.08681328793\ntotal_reward:84718.08681328793\ntotal_cost: 413.2914850454691\ntotal_trades: 1474\nSharpe: 0.468037661266039\n=================================\nbegin_total_asset:100000\nend_total_asset:348737.8082337941\ntotal_reward:248737.80823379412\ntotal_cost: 1077.897208581877\ntotal_trades: 2325\nSharpe: 
0.8488781572304104\n=================================\nbegin_total_asset:100000\nend_total_asset:1066685.5776203808\ntotal_reward:966685.5776203808\ntotal_cost: 104.86199927912372\ntotal_trades: 2515\nSharpe: 1.0662577405642613\n=================================\nbegin_total_asset:100000\nend_total_asset:546140.3665567372\ntotal_reward:446140.3665567372\ntotal_cost: 1984.94501164637\ntotal_trades: 2039\nSharpe: 1.0962526319196348\n=================================\nbegin_total_asset:100000\nend_total_asset:725392.1516885572\ntotal_reward:625392.1516885572\ntotal_cost: 1929.21672055331\ntotal_trades: 2367\nSharpe: 1.0850685606375767\n=================================\nbegin_total_asset:100000\nend_total_asset:1197963.7491200864\ntotal_reward:1097963.7491200864\ntotal_cost: 775.1689520095539\ntotal_trades: 2515\nSharpe: 1.1368189820751509\n=================================\nbegin_total_asset:100000\nend_total_asset:742963.9653327786\ntotal_reward:642963.9653327786\ntotal_cost: 4533.239665881099\ntotal_trades: 2515\nSharpe: 1.1079544079376524\n=================================\nbegin_total_asset:100000\nend_total_asset:1144761.2711953667\ntotal_reward:1044761.2711953667\ntotal_cost: 3276.7738260039214\ntotal_trades: 2515\nSharpe: 1.1819869523465631\n=================================\nbegin_total_asset:100000\nend_total_asset:1037986.4133432165\ntotal_reward:937986.4133432165\ntotal_cost: 1362.696404204281\ntotal_trades: 2515\nSharpe: 1.074102780542535\n=================================\nbegin_total_asset:100000\nend_total_asset:379800.5713001307\ntotal_reward:279800.5713001307\ntotal_cost: 561.097958557322\ntotal_trades: 1108\nSharpe: 1.017939795759656\n=================================\nbegin_total_asset:100000\nend_total_asset:1057570.3289890413\ntotal_reward:957570.3289890413\ntotal_cost: 230.8395712170385\ntotal_trades: 2515\nSharpe: 1.0649367921722361\n=================================\nbegin_total_asset:100000\nend_total_asset:1262476.1474488103\ntotal_reward:1162476.1474488103\ntotal_cost: 3144.3996860926018\ntotal_trades: 2515\nSharpe: 1.192124716528578\n=================================\nbegin_total_asset:100000\nend_total_asset:1082336.6108373601\ntotal_reward:982336.6108373601\ntotal_cost: 4391.917908269401\ntotal_trades: 2515\nSharpe: 1.2316074665722823\n=================================\nbegin_total_asset:100000\nend_total_asset:1073915.1368067395\ntotal_reward:973915.1368067395\ntotal_cost: 3961.0404912539616\ntotal_trades: 2515\nSharpe: 1.182975331285516\n=================================\nbegin_total_asset:100000\nend_total_asset:995355.5827891508\ntotal_reward:895355.5827891508\ntotal_cost: 2987.380600884907\ntotal_trades: 2275\nSharpe: 1.2184826630430359\n=================================\nbegin_total_asset:100000\nend_total_asset:1265155.931393874\ntotal_reward:1165155.931393874\ntotal_cost: 3421.904957608416\ntotal_trades: 2515\nSharpe: 1.176230159082932\n=================================\nbegin_total_asset:100000\nend_total_asset:354591.09329952736\ntotal_reward:254591.09329952736\ntotal_cost: 2402.9491733768123\ntotal_trades: 1728\nSharpe: 1.1527467961622158\n=================================\nbegin_total_asset:100000\nend_total_asset:803623.4664680425\ntotal_reward:703623.4664680425\ntotal_cost: 4972.748300329915\ntotal_trades: 2023\nSharpe: 1.2292805568673504\n=================================\nbegin_total_asset:100000\nend_total_asset:955506.5108108347\ntotal_reward:855506.5108108347\ntotal_cost: 3991.8851063311604\ntotal_trades: 2515\nSharpe: 
1.0580866091966117\n=================================\nbegin_total_asset:100000\nend_total_asset:1155244.3521296869\ntotal_reward:1055244.3521296869\ntotal_cost: 948.3196589027705\ntotal_trades: 2515\nSharpe: 1.115323871625628\n=================================\nbegin_total_asset:100000\nend_total_asset:558497.3118469787\ntotal_reward:458497.31184697873\ntotal_cost: 764.4295498483248\ntotal_trades: 2160\nSharpe: 0.9180767731547095\n=================================\nbegin_total_asset:100000\nend_total_asset:1066247.2705421762\ntotal_reward:966247.2705421762\ntotal_cost: 1693.194371330125\ntotal_trades: 2515\nSharpe: 1.0701861305505607\n=================================\nbegin_total_asset:100000\nend_total_asset:1182423.7881506896\ntotal_reward:1082423.7881506896\ntotal_cost: 4612.575868804704\ntotal_trades: 2515\nSharpe: 1.1897017571714126\n=================================\nbegin_total_asset:100000\nend_total_asset:352639.7791066152\ntotal_reward:252639.77910661523\ntotal_cost: 2203.071873404569\ntotal_trades: 1706\nSharpe: 0.9297194480752586\n=================================\nbegin_total_asset:100000\nend_total_asset:512017.8187501993\ntotal_reward:412017.8187501993\ntotal_cost: 3237.2744638466706\ntotal_trades: 2074\nSharpe: 1.2296052920172373\n=================================\nbegin_total_asset:100000\nend_total_asset:1026617.409790139\ntotal_reward:926617.409790139\ntotal_cost: 2235.833171563652\ntotal_trades: 2515\nSharpe: 1.0634461951643783\n=================================\nbegin_total_asset:100000\nend_total_asset:432922.27221052325\ntotal_reward:332922.27221052325\ntotal_cost: 1965.1113230232177\ntotal_trades: 1676\nSharpe: 0.9558190650202323\n=================================\nbegin_total_asset:100000\nend_total_asset:1136563.8991799501\ntotal_reward:1036563.8991799501\ntotal_cost: 4048.353596072037\ntotal_trades: 2515\nSharpe: 1.1567139637696162\n=================================\nbegin_total_asset:100000\nend_total_asset:457739.8968391317\ntotal_reward:357739.8968391317\ntotal_cost: 1451.009792129765\ntotal_trades: 1722\nSharpe: 0.9887615430292522\n=================================\nbegin_total_asset:100000\nend_total_asset:832672.3654919548\ntotal_reward:732672.3654919548\ntotal_cost: 2254.518771357834\ntotal_trades: 2117\nSharpe: 1.0499743963093453\n=================================\nbegin_total_asset:100000\nend_total_asset:903730.0291357596\ntotal_reward:803730.0291357596\ntotal_cost: 4160.4464784263955\ntotal_trades: 2515\nSharpe: 1.0537325331716016\n=================================\nbegin_total_asset:100000\nend_total_asset:868039.507615209\ntotal_reward:768039.507615209\ntotal_cost: 1324.6054822848214\ntotal_trades: 2515\nSharpe: 1.0055657486465792\n=================================\nTraining time (DDPG): 7.679340577125549 minutes\n" ] ], [ [ "### Model 3: PPO", "_____no_output_____" ] ], [ [ "config.PPO_PARAMS", "_____no_output_____" ], [ "print(\"==============Model Training===========\")\nnow = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')\nppo_params_tuning = {'n_steps':128, \n 'nminibatches': 4,\n\t\t\t 'ent_coef':0.005, \n\t\t\t 'learning_rate':0.00025,\n\t\t\t 'verbose':0,\n\t\t\t 'timesteps':50000}\nmodel_ppo = agent.train_PPO(model_name = \"PPO_{}\".format(now), model_params = ppo_params_tuning)", "==============Model Training===========\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/tf_util.py:191: The name tf.ConfigProto is deprecated. 
Please use tf.compat.v1.ConfigProto instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/tf_util.py:200: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/policies.py:116: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/input.py:25: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/policies.py:561: flatten (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse keras.layers.flatten instead.\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/layers/core.py:332: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `layer.__call__` method instead.\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/tf_layers.py:123: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/distributions.py:418: The name tf.random_normal is deprecated. Please use tf.random.normal instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ppo2/ppo2.py:190: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ppo2/ppo2.py:198: The name tf.trainable_variables is deprecated. Please use tf.compat.v1.trainable_variables instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/math_grad.py:1424: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ppo2/ppo2.py:206: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ppo2/ppo2.py:240: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/ppo2/ppo2.py:242: The name tf.summary.merge_all is deprecated. Please use tf.compat.v1.summary.merge_all instead.\n\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/base_class.py:1169: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.\n\nbegin_total_asset:100000\nend_total_asset:467641.36933949846\ntotal_reward:367641.36933949846\ntotal_cost: 6334.431322515711\ntotal_trades: 2512\nSharpe: 0.8257905133964245\n=================================\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/common/tf_util.py:502: The name tf.Summary is deprecated. 
Please use tf.compat.v1.Summary instead.\n\nbegin_total_asset:100000\nend_total_asset:598301.9358692836\ntotal_reward:498301.9358692836\ntotal_cost: 6714.914704657209\ntotal_trades: 2514\nSharpe: 0.9104137553610742\n=================================\nbegin_total_asset:100000\nend_total_asset:487324.45261743915\ntotal_reward:387324.45261743915\ntotal_cost: 6694.683756348197\ntotal_trades: 2513\nSharpe: 0.8778200252832747\n=================================\nbegin_total_asset:100000\nend_total_asset:376587.1472550176\ntotal_reward:276587.1472550176\ntotal_cost: 6498.996226416659\ntotal_trades: 2500\nSharpe: 0.9265883206757147\n=================================\nbegin_total_asset:100000\nend_total_asset:411775.78502221894\ntotal_reward:311775.78502221894\ntotal_cost: 6672.303574431684\ntotal_trades: 2509\nSharpe: 0.8663978354433025\n=================================\nbegin_total_asset:100000\nend_total_asset:443250.46347580303\ntotal_reward:343250.46347580303\ntotal_cost: 6792.764421390525\ntotal_trades: 2515\nSharpe: 0.8567059823183628\n=================================\nbegin_total_asset:100000\nend_total_asset:547712.6511708717\ntotal_reward:447712.6511708717\ntotal_cost: 6901.285057676673\ntotal_trades: 2511\nSharpe: 0.9463287997507608\n=================================\nbegin_total_asset:100000\nend_total_asset:534293.6779391705\ntotal_reward:434293.6779391705\ntotal_cost: 7026.2048333167895\ntotal_trades: 2515\nSharpe: 0.9103397038651807\n=================================\nbegin_total_asset:100000\nend_total_asset:767260.8108358055\ntotal_reward:667260.8108358055\ntotal_cost: 6963.422003443312\ntotal_trades: 2515\nSharpe: 0.9969063532868196\n=================================\nbegin_total_asset:100000\nend_total_asset:862184.7490450073\ntotal_reward:762184.7490450073\ntotal_cost: 6934.620506435971\ntotal_trades: 2515\nSharpe: 1.0262666662712374\n=================================\nbegin_total_asset:100000\nend_total_asset:877375.7041656245\ntotal_reward:777375.7041656245\ntotal_cost: 6802.841792068231\ntotal_trades: 2515\nSharpe: 1.0294698704729517\n=================================\nTraining time (PPO): 0.8607302109400431 minutes\n" ] ], [ [ "### Model 4: TD3", "_____no_output_____" ] ], [ [ "## default hyperparameters in config file\nconfig.TD3_PARAMS", "_____no_output_____" ], [ "print(\"==============Model Training===========\")\nnow = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')\ntd3_params_tuning = {\n 'batch_size': 128,\n\t\t\t 'buffer_size':200000, \n 'learning_rate': 0.0002,\n\t\t\t 'verbose':0,\n\t\t\t 'timesteps':50000}\nmodel_td3 = agent.train_TD3(model_name = \"TD3_{}\".format(now), model_params = td3_params_tuning)", "==============Model Training===========\nbegin_total_asset:100000\nend_total_asset:766882.06486716\ntotal_reward:666882.06486716\ntotal_cost: 122.06275547719093\ntotal_trades: 2502\nSharpe: 0.9471484567377753\n=================================\nbegin_total_asset:100000\nend_total_asset:1064261.8436314124\ntotal_reward:964261.8436314124\ntotal_cost: 99.89867026527524\ntotal_trades: 2515\nSharpe: 1.065522879677433\n=================================\nbegin_total_asset:100000\nend_total_asset:1065410.0297433857\ntotal_reward:965410.0297433857\ntotal_cost: 99.89616402166861\ntotal_trades: 2515\nSharpe: 1.0658930407952774\n=================================\nbegin_total_asset:100000\nend_total_asset:1062109.9810929787\ntotal_reward:962109.9810929787\ntotal_cost: 99.8953728848598\ntotal_trades: 2515\nSharpe: 
1.0648226545239186\n=================================\nbegin_total_asset:100000\nend_total_asset:1066748.4141685755\ntotal_reward:966748.4141685755\ntotal_cost: 99.89584806687385\ntotal_trades: 2515\nSharpe: 1.06632128735182\n=================================\nbegin_total_asset:100000\nend_total_asset:1064717.8522568534\ntotal_reward:964717.8522568534\ntotal_cost: 99.89954986181223\ntotal_trades: 2515\nSharpe: 1.0656773439462421\n=================================\nbegin_total_asset:100000\nend_total_asset:1063618.256994051\ntotal_reward:963618.2569940509\ntotal_cost: 99.89578810058728\ntotal_trades: 2515\nSharpe: 1.065318652036845\n=================================\nbegin_total_asset:100000\nend_total_asset:1065101.978900172\ntotal_reward:965101.978900172\ntotal_cost: 99.90007986039683\ntotal_trades: 2515\nSharpe: 1.0657876842044585\n=================================\nbegin_total_asset:100000\nend_total_asset:1065345.1607699129\ntotal_reward:965345.1607699129\ntotal_cost: 99.89532260010348\ntotal_trades: 2515\nSharpe: 1.0658841213300805\n=================================\nbegin_total_asset:100000\nend_total_asset:1066239.1006302314\ntotal_reward:966239.1006302314\ntotal_cost: 99.89946311191612\ntotal_trades: 2515\nSharpe: 1.0661338428981897\n=================================\nbegin_total_asset:100000\nend_total_asset:1064642.5474156558\ntotal_reward:964642.5474156558\ntotal_cost: 99.89530934433792\ntotal_trades: 2515\nSharpe: 1.0656451438551164\n=================================\nbegin_total_asset:100000\nend_total_asset:1066120.7395977282\ntotal_reward:966120.7395977282\ntotal_cost: 99.89889606461536\ntotal_trades: 2515\nSharpe: 1.0661139044989152\n=================================\nbegin_total_asset:100000\nend_total_asset:1065188.3816049164\ntotal_reward:965188.3816049164\ntotal_cost: 99.89524269603959\ntotal_trades: 2515\nSharpe: 1.0658240771799103\n=================================\nbegin_total_asset:100000\nend_total_asset:1062915.9535119308\ntotal_reward:962915.9535119308\ntotal_cost: 99.89634893255415\ntotal_trades: 2515\nSharpe: 1.065112303207818\n=================================\nbegin_total_asset:100000\nend_total_asset:1066825.939915284\ntotal_reward:966825.939915284\ntotal_cost: 99.89954193874149\ntotal_trades: 2515\nSharpe: 1.0663221666084541\n=================================\nbegin_total_asset:100000\nend_total_asset:1064761.0628751868\ntotal_reward:964761.0628751868\ntotal_cost: 99.89933540212928\ntotal_trades: 2515\nSharpe: 1.0656814880277525\n=================================\nbegin_total_asset:100000\nend_total_asset:1068713.0753036987\ntotal_reward:968713.0753036987\ntotal_cost: 99.89722384904954\ntotal_trades: 2515\nSharpe: 1.0669419606856339\n=================================\nbegin_total_asset:100000\nend_total_asset:1066851.9421668774\ntotal_reward:966851.9421668774\ntotal_cost: 99.8963950145522\ntotal_trades: 2515\nSharpe: 1.066357578203035\n=================================\nbegin_total_asset:100000\nend_total_asset:1066403.710948116\ntotal_reward:966403.7109481159\ntotal_cost: 99.89754595417725\ntotal_trades: 2515\nSharpe: 1.066210452143935\n=================================\nTraining time (DDPG): 4.460090506076813 minutes\n" ] ], [ [ "### Model 5: SAC", "_____no_output_____" ] ], [ [ "## default hyperparameters in config file\nconfig.SAC_PARAMS", "_____no_output_____" ], [ "print(\"==============Model Training===========\")\nnow = datetime.datetime.now().strftime('%Y%m%d-%Hh%M')\nsac_params_tuning={\n 'batch_size': 64,\n 'buffer_size': 100000,\n 
'ent_coef':'auto_0.1',\n 'learning_rate': 0.0001,\n 'learning_starts':200,\n 'timesteps': 50000,\n 'verbose': 0}\nmodel_sac = agent.train_SAC(model_name = \"SAC_{}\".format(now), model_params = sac_params_tuning)", "==============Model Training===========\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/stable_baselines/sac/policies.py:63: The name tf.log is deprecated. Please use tf.math.log instead.\n\nbegin_total_asset:100000\nend_total_asset:628197.7965312647\ntotal_reward:528197.7965312647\ntotal_cost: 161.17551531590826\ntotal_trades: 2493\nSharpe: 0.8786969593304516\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 
1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nbegin_total_asset:100000\nend_total_asset:1077672.4272528668\ntotal_reward:977672.4272528668\ntotal_cost: 99.89875688362122\ntotal_trades: 2515\nSharpe: 1.069666896575114\n=================================\nTraining time (SAC): 5.726297716299693 minutes\n" ] ], [ [ "### Trading\n* we use the environment class we initialized at 5.3 to create a stock trading environment\n* Assume that we have $100,000 initial capital at 2019-01-01. \n* We use the trained model of PPO to trade AAPL.", "_____no_output_____" ] ], [ [ "trade.head()", "_____no_output_____" ], [ "# create trading env\nenv_trade, obs_trade = env_setup.create_env_trading(data = trade,\n env_class = SingleStockEnv) ", "_____no_output_____" ], [ "## make a prediction and get the account value change\ndf_account_value, df_actions = DRLAgent.DRL_prediction(model=model_td3,\n test_data = trade,\n test_env = env_trade,\n test_obs = obs_trade)", "begin_total_asset:100000\nend_total_asset:308768.3018266945\ntotal_reward:208768.30182669451\ntotal_cost: 99.89708306503296\ntotal_trades: 439\nSharpe: 1.9188345294206783\n=================================\n" ] ], [ [ "<a id='6'></a>\n# Part 7: Backtesting Performance\nBacktesting plays a key role in evaluating the performance of a trading strategy. Automated backtesting tool is preferred because it reduces the human error. We usually use the Quantopian pyfolio package to backtest our trading strategies. 
It is easy to use and consists of various individual plots that provide a comprehensive image of the performance of a trading strategy.", "_____no_output_____" ], [ "<a id='6.1'></a>\n## 7.1 BackTestStats\npass in df_account_value, this information is stored in env class\n", "_____no_output_____" ] ], [ [ "print(\"==============Get Backtest Results===========\")\nperf_stats_all = BackTestStats(account_value=df_account_value)\nperf_stats_all = pd.DataFrame(perf_stats_all)\nperf_stats_all.to_csv(\"./\"+config.RESULTS_DIR+\"/perf_stats_all_\"+now+'.csv')", "==============Get Backtest Results===========\nannual return: 104.80443553947256\nsharpe ratio: 1.9188345294206783\nAnnual return 0.907331\nCumulative returns 2.087683\nAnnual volatility 0.374136\nSharpe ratio 1.918835\nCalmar ratio 2.887121\nStability 0.909127\nMax drawdown -0.314268\nOmega ratio 1.442243\nSortino ratio 2.903654\nSkew NaN\nKurtosis NaN\nTail ratio 1.049744\nDaily value at risk -0.044288\nAlpha 0.000000\nBeta 1.000000\ndtype: float64\n" ] ], [ [ "<a id='6.2'></a>\n## 7.2 BackTestPlot", "_____no_output_____" ] ], [ [ "print(\"==============Compare to AAPL itself buy-and-hold===========\")\n%matplotlib inline\nBackTestPlot(account_value=df_account_value, baseline_ticker = 'AAPL')", "==============Compare to AAPL itself buy-and-hold===========\nannual return: 104.80443553947256\nsharpe ratio: 1.9188345294206783\n[*********************100%***********************] 1 of 1 completed\nShape of DataFrame: (440, 7)\n" ] ], [ [ "<a id='6.3'></a>\n## 7.3 Baseline Stats", "_____no_output_____" ] ], [ [ "print(\"==============Get Baseline Stats===========\")\nbaesline_perf_stats=BaselineStats('AAPL')", "==============Get Baseline Stats===========\n[*********************100%***********************] 1 of 1 completed\nShape of DataFrame: (440, 7)\nAnnual return 0.868103\nCumulative returns 1.977654\nAnnual volatility 0.384009\nSharpe ratio 1.825350\nCalmar ratio 2.762260\nStability 0.909223\nMax drawdown -0.314273\nOmega ratio 1.416301\nSortino ratio 2.709220\nSkew NaN\nKurtosis NaN\nTail ratio 1.067808\nDaily value at risk -0.045599\nAlpha 0.000000\nBeta 1.000000\ndtype: float64\n" ], [ "print(\"==============Get Baseline Stats===========\")\nbaesline_perf_stats=BaselineStats('^GSPC')", "==============Get Baseline Stats===========\n[*********************100%***********************] 1 of 1 completed\nShape of DataFrame: (440, 7)\nAnnual return 0.176845\nCumulative returns 0.328857\nAnnual volatility 0.270644\nSharpe ratio 0.739474\nCalmar ratio 0.521283\nStability 0.339596\nMax drawdown -0.339250\nOmega ratio 1.174869\nSortino ratio 1.015508\nSkew NaN\nKurtosis NaN\nTail ratio 0.659621\nDaily value at risk -0.033304\nAlpha 0.000000\nBeta 1.000000\ndtype: float64\n" ] ], [ [ "<a id='6.4'></a>\n## 7.4 Compare to Stock Market Index", "_____no_output_____" ] ], [ [ "print(\"==============Compare to S&P 500===========\")\n%matplotlib inline\n# S&P 500: ^GSPC\n# Dow Jones Index: ^DJI\n# NASDAQ 100: ^NDX\nBackTestPlot(df_account_value, baseline_ticker = '^GSPC')", "==============Compare to S&P 500===========\nannual return: 104.80443553947256\nsharpe ratio: 1.9188345294206783\n[*********************100%***********************] 1 of 1 completed\nShape of DataFrame: (440, 7)\n" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
e7ffa5781abdc6720a6bdb4061c11cb051ae1351
13,198
ipynb
Jupyter Notebook
lostanlen_waspaa2019_fig3.ipynb
BirdVox/lostanlen_waspaa2019
4b3daa5b5945683e28511e157dfab1db1644f80a
[ "MIT" ]
2
2019-10-25T18:22:04.000Z
2020-07-05T15:11:21.000Z
lostanlen_waspaa2019_fig3.ipynb
BirdVox/lostanlen2019dcase
4b3daa5b5945683e28511e157dfab1db1644f80a
[ "MIT" ]
null
null
null
lostanlen_waspaa2019_fig3.ipynb
BirdVox/lostanlen2019dcase
4b3daa5b5945683e28511e157dfab1db1644f80a
[ "MIT" ]
null
null
null
35.766938
127
0.522276
[ [ [ "import h5py\n\nfrom librosa.display import specshow\nimport numpy as np\nimport os\nimport pandas as pd\nimport tqdm\n\ndata_dir = \"/beegfs/vl1019/waspaa2019_data\"\nccb_dir = os.path.join(data_dir, \"ccb18\")\ncsv_name = 'CCB_2009Feb18to22Apr17.csv'\ncsv_path = os.path.join(ccb_dir, csv_name)\n\nhop_length = 2**7\nn_fft = 2 * hop_length\nsr = 2000\nhalf_clip_n_cols = 16\nlow_freq_bin = 5\n\npcen_version_id = 2\npcen_version_str = str(pcen_version_id)\nhdf5_dir = os.path.join(\n ccb_dir, \"ccb18_h5_v-\" + pcen_version_str)\ndistances = []\ndates = []\nchannels = []\n\nstft_max_list = []\nstft_avg_list = []\nlog_pcen_max_list = []\nlog_pcen_avg_list = []\n\n\ndf = pd.read_csv(csv_path)\ndf = df.dropna()\ndf = df.sort_values(by=\"DistanceKm\")\n\n\nimport h5py\n\nfrom librosa.display import specshow\nimport numpy as np\nimport os\nimport pandas as pd\nimport tqdm\n\ndata_dir = \"/beegfs/vl1019/waspaa2019_data\"\nccb_dir = os.path.join(data_dir, \"ccb18\")\ncsv_name = 'CCB_2009Feb18to22Apr17.csv'\ncsv_path = os.path.join(ccb_dir, csv_name)\n\nhop_length = 2**7\nn_fft = 2 * hop_length\nsr = 2000\nhalf_clip_n_cols = 16\nlow_freq_bin = 5\n\npcen_version_id = 2\npcen_version_str = str(pcen_version_id)\nhdf5_dir = os.path.join(\n ccb_dir, \"ccb18_h5_v-\" + pcen_version_str)\ndistances = []\ndates = []\nchannels = []\n\nstft_max_list = []\nstft_avg_list = []\nlog_pcen_max_list = []\nlog_pcen_avg_list = []\n\n\ndf = pd.read_csv(csv_path)\ndf = df.dropna()\ndf = df.sort_values(by=\"DistanceKm\")\n\nfor i, row_id in tqdm.tqdm(enumerate(range(len(df)))):\n row = df.iloc[row_id]\n \n # Read date.\n date_int = int(row[\"Date\"])\n date_str = \"20090\" + str(date_int)\n prefix_str = \"CCB18_\" + date_str\n\n # Read channel.\n channel_id = int(row[\"Channel\"])\n channel_str = str(channel_id)\n\n # Read onset time\n onset_time_int = int(row[\"Begin.Time..s.\"])\n hour_str = str(onset_time_int // 3600).zfill(2)\n minute_str = str(((onset_time_int % 3600) // (60*15)) * 15).zfill(2)\n second_str = \"00\"\n time_str = \"\".join([hour_str, minute_str, second_str])\n \n # Locate HDF5 \n hdf5_name = \"_\".join([\n \"CCB18\",\n date_str,\n time_str,\n pcen_version_str\n ])\n hdf5_path = os.path.join(hdf5_dir, hdf5_name + \".h5\")\n \n if not os.path.exists(hdf5_path):\n continue\n \n distances.append(row[\"DistanceKm\"])\n dates.append(row[\"Date\"])\n channels.append(row[\"Channel\"])\n \n # Open HDF5\n with h5py.File(hdf5_path, 'r') as f:\n mid_time = (0.5 * (row[\"Begin.Time..s.\"] + row[\"End.Time..s.\"])) % (15*60)\n mid_col = int(round(mid_time * (sr/hop_length)))\n start_col = max(0, mid_col - half_clip_n_cols)\n stop_col = min(start_col + 2*half_clip_n_cols, f[\"stft\"].shape[1])\n start_col = stop_col - 2*half_clip_n_cols\n X_stft = f[\"stft\"][low_freq_bin:, start_col:stop_col, channel_id-1]\n X_stft_sf = np.maximum(0, np.diff(np.log1p(X_stft), axis=1))\n stft_max_list.append(np.max(X_stft_sf))\n stft_avg_list.append(np.max(np.mean(X_stft_sf, axis=0)))\n X_pcen = f[\"pcen\"][low_freq_bin:, start_col:stop_col, channel_id-1]\n log_pcen_max_list.append(np.max(np.log1p(X_pcen)))\n log_pcen_avg_list.append(np.max(np.mean(np.log1p(X_pcen), axis=0)))\n\nfeature_dict = {\n \"stft_avg\": stft_avg_list,\n \"stft_max\": stft_max_list,\n \"log_pcen_avg\": log_pcen_avg_list,\n \"log_pcen_max\": log_pcen_max_list,\n \"distance\": distances,\n \"date\": dates,\n \"channel\": channels\n}\nfeature_df = pd.DataFrame(feature_dict)\n\nfeature_df.to_csv(\n 
\"/beegfs/vl1019/waspaa2019_data/ccb18/ccb18_features_v-2bis\" +\\\n pcen_version_str + \".csv\")", "_____no_output_____" ], [ "import glob\nimport librosa\nimport soundfile as sf\n\nships_dir = '/beegfs/vl1019/waspaa2019_data/shipsEar_AUDIOS'\nsettings = {\n \"T\": 1.0,\n \"alpha\": 1.0,\n \"delta\": 0.0,\n \"r\": 1.0,\n \"eps\": 1e-6,\n \"n_fft\": 2**8,\n \"win_length\": 2**8,\n \"hop_length\": 2**7,\n \"sr\": 2000\n}\n\nstft_avg_neg = []\nstft_max_neg = []\nlog_pcen_avg_neg = []\nlog_pcen_max_neg = []\n\n\nfor ship_path in tqdm.tqdm(glob.glob(os.path.join(ships_dir, \"*.wav\"))):\n\n ship_waveform, orig_sr = sf.read(ship_path)\n\n ship_waveform = librosa.resample(ship_waveform, orig_sr, settings[\"sr\"])\n\n stft = librosa.stft(\n ship_waveform * (2**31),\n n_fft=settings[\"n_fft\"],\n win_length=settings[\"win_length\"],\n hop_length=settings[\"hop_length\"],\n window=\"hann\")[low_freq_bin:, :]\n\n logspec = np.log1p(np.abs(stft))\n\n logspec_flux = np.maximum(0, np.diff(logspec, axis=1))\n stft_avg_neg.append(np.mean(logspec_flux, axis=0))\n stft_max_neg.append(np.max(logspec_flux, axis=0))\n\n pcen = librosa.pcen(np.abs(stft),\n sr=settings[\"sr\"],\n hop_length=settings[\"hop_length\"],\n gain=settings[\"alpha\"],\n bias=settings[\"delta\"],\n power=settings[\"r\"],\n time_constant=settings[\"T\"],\n eps=settings[\"eps\"])[low_freq_bin:, :]\n\n log_pcen_avg_neg.append(np.mean(np.log1p(pcen), axis=0))\n log_pcen_max_neg.append(np.max(np.log1p(pcen), axis=0))\n \n \nneg_dict = {\n \"log_pcen_avg\": np.concatenate([x[10:-10] for x in log_pcen_avg_neg]),\n \"log_pcen_max\": np.concatenate([x[10:-10] for x in log_pcen_max_neg]),\n \"stft_avg\": np.concatenate([x[10:-10] for x in stft_avg_neg]),\n \"stft_max\": np.concatenate([x[10:-10] for x in stft_max_neg])\n}", "_____no_output_____" ], [ "with h5py.File('/beegfs/vl1019/waspaa2019_data/ccb18/ccb18_negatives_v-' + pcen_version_str + \"bis.hdf5\", 'w') as f:\n for k in neg_dict.keys():\n f[k] = neg_dict[k]", "_____no_output_____" ], [ "from matplotlib import pyplot as plt\n%matplotlib inline\n\nneg_dict = {}\n\nwith h5py.File('/beegfs/vl1019/waspaa2019_data/ccb18/ccb18_negatives_v-' + pcen_version_str + \"bis.hdf5\", 'r') as f:\n for k in f.keys():\n neg_dict[k] = f[k][()]\n \n \nfeature_df = pd.DataFrame.from_csv(\n \"/beegfs/vl1019/waspaa2019_data/ccb18/ccb18_features_v-2bis\" +\\\n pcen_version_str + \".csv\")\n\nmax_distance = 21\ncommand_tpr = 0.5\ndates = [219, 220, 221, 222, 417]\nfeature_mtbfs = []\n\nfeature_strs = [\"log_pcen_max\", \"stft_avg\", \"stft_max\", \"log_pcen_avg\"]\n\nfor feature_str in feature_strs:\n\n distance_mtbfs = []\n\n for min_distance in range(30):\n dist_rows =\\\n (feature_df[\"distance\"] >= min_distance) &\\\n (feature_df[\"distance\"] < (min_distance+1)) &\\\n (feature_df[\"channel\"] != 3)\n\n mtbfs = []\n for date in dates:\n date_dist_rows = dist_rows & (feature_df[\"date\"] == date)\n sorted_values = np.sort(feature_df[date_dist_rows][feature_str])[::-1]\n n_positives = len(sorted_values)\n command_threshold = sorted_values[int(command_tpr*n_positives)]\n command_fpr = np.mean(neg_dict[feature_str] > command_threshold)\n mtbf = (hop_length/sr) / command_fpr\n mtbfs.append(mtbf)\n\n distance_mtbfs.append(np.stack(mtbfs))\n \n feature_mtbfs.append(np.stack(distance_mtbfs))\n \nfeature_mtbfs = np.array(feature_mtbfs)\n\nplt.figure(figsize=(6, 4))\nplt.title('Bioacoustic detection of North Atlantic Right Whale', size=12)\n\nplt.plot(\n np.log10(np.median(feature_mtbfs[0, :max_distance, :], 
axis=1)),\n '-o', markersize=10.0, linewidth=2.0, color=\"#009900\",\n label=\"Max-pooled PCEN\")\nplt.fill_between(\n range(max_distance),\n np.log10(np.ravel(np.quantile(feature_mtbfs[0, :max_distance, :], [0.25], axis=1))),\n np.log10(np.ravel(np.quantile(feature_mtbfs[0, :max_distance, :], [0.75], axis=1))),\n alpha = 0.5,\n color=\"#009900\",\n)\n\n\nplt.plot(\n np.log10(np.median(feature_mtbfs[2, :max_distance, :], axis=1)),\n '-s', color=\"#0000B2\",\n markersize=10.0, linewidth=2.0,\n label=\"Max-pooled spectral flux\")\nplt.fill_between(\n range(max_distance),\n np.log10(np.ravel(np.quantile(feature_mtbfs[2, :max_distance, :], [0.25], axis=1))),\n np.log10(np.ravel(np.quantile(feature_mtbfs[2, :max_distance, :], [0.75], axis=1))),\n alpha = 0.5,\n color=\"#0000B2\"\n)\n\n\nplt.plot(\n np.log10(np.median(feature_mtbfs[1, :max_distance, :], axis=1)), '-v', markersize=10.0, linewidth=2.0,\n color=\"#E67300\", label=\"Averaged spectral flux\")\nplt.fill_between(\n range(max_distance),\n np.log10(np.ravel(np.quantile(feature_mtbfs[1, :max_distance, :], [0.25], axis=1))),\n np.log10(np.ravel(np.quantile(feature_mtbfs[1, :max_distance, :], [0.75], axis=1))),\n alpha = 0.33, color=\"#E67300\"\n)\n\nyticks = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0, 200.0])\n\n\nplt.rcParams[\"font.family\"] = \"serif\"\nplt.legend(prop={'size': 11})\nplt.xlabel(\"CCB18 dataset: distance between sensor and source (km)\", size=12)\nplt.ylabel(\"ShipsEar dataset: mean time\\nbetween false alarms at half recall (s)\", size=12)\nplt.gca().set_xticks(np.arange(0, 21, 2), minor=True)\nplt.gca().set_xticks(np.arange(0, 21, 4), minor=False)\nplt.gca().set_xticklabels(np.arange(0, 21, 4), size=11)\nplt.yticks(np.log10(yticks))\nplt.gca().set_yticklabels(yticks, size=11)\nplt.xlim(0, 20)\nplt.grid(linestyle='--', alpha=1.0, which=\"minor\")\nplt.grid(linestyle='--', alpha=1.0, which=\"major\")\nax = plt.gca()\nax.spines['top'].set_visible(False)\nax.spines['right'].set_visible(False)\n\nplt.savefig('lostanlen_waspaa2019_ccb_mtbfa-50_semilogy.eps', bbox_inches='tight')\nplt.savefig('lostanlen_waspaa2019_ccb_mtbfa-50_semilogy.png', bbox_inches='tight', dpi=500)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
e7ffa669fa84c3ddb87339155f56ce5ef5a2f19c
168,803
ipynb
Jupyter Notebook
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
6be44d048381c90dce1c309f24eb9024ed31d362
[ "MIT" ]
null
null
null
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
6be44d048381c90dce1c309f24eb9024ed31d362
[ "MIT" ]
null
null
null
convolutional-neural-networks/mnist-mlp/mnist_mlp_solution.ipynb
staskh/deep-learning-v2-pytorch
6be44d048381c90dce1c309f24eb9024ed31d362
[ "MIT" ]
null
null
null
306.357532
99,196
0.916956
[ [ [ "# Multi-Layer Perceptron, MNIST\n---\nIn this notebook, we will train an MLP to classify images from the [MNIST database](http://yann.lecun.com/exdb/mnist/) hand-written digit database.\n\nThe process will be broken down into the following steps:\n>1. Load and visualize the data\n2. Define a neural network\n3. Train the model\n4. Evaluate the performance of our trained model on a test dataset!\n\nBefore we begin, we have to import the necessary libraries for working with data and PyTorch.", "_____no_output_____" ] ], [ [ "# import libraries\nimport torch\nimport numpy as np", "_____no_output_____" ] ], [ [ "---\n## Load and Visualize the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)\n\nDownloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the `batch_size` if you want to load more data at a time.\n\nThis cell will create DataLoaders for each of our datasets.", "_____no_output_____" ] ], [ [ "# The MNIST datasets are hosted on yann.lecun.com that has moved under CloudFlare protection\n# Run this script to enable the datasets download\n# Reference: https://github.com/pytorch/vision/issues/1938\n\nfrom six.moves import urllib\nopener = urllib.request.build_opener()\nopener.addheaders = [('User-agent', 'Mozilla/5.0')]\nurllib.request.install_opener(opener)", "_____no_output_____" ], [ "from torchvision import datasets\nimport torchvision.transforms as transforms\n\n# number of subprocesses to use for data loading\nnum_workers = 0\n# how many samples per batch to load\nbatch_size = 20\n\n# convert data to torch.FloatTensor\ntransform = transforms.ToTensor()\n\n# choose the training and test datasets\ntrain_data = datasets.MNIST(root='data', train=True,\n download=True, transform=transform)\ntest_data = datasets.MNIST(root='data', train=False,\n download=True, transform=transform)\n\n# prepare data loaders\ntrain_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,\n num_workers=num_workers)\ntest_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, \n num_workers=num_workers)", "Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz\nDownloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz\nDownloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz\nDownloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz\nProcessing...\nDone!\n" ] ], [ [ "### Visualize a Batch of Training Data\n\nThe first step in a classification task is to take a look at the data, make sure it is loaded in correctly, then make any initial observations about patterns in that data.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n%matplotlib inline\n \n# obtain one batch of training images\ndataiter = iter(train_loader)\nimages, labels = dataiter.next()\nimages = images.numpy()\n\n# plot the images in the batch, along with the corresponding labels\nfig = plt.figure(figsize=(25, 4))\nfor idx in np.arange(20):\n ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])\n ax.imshow(np.squeeze(images[idx]), cmap='gray')\n # print out the correct label for each image\n # .item() gets the value contained in a Tensor\n ax.set_title(str(labels[idx].item()))", "_____no_output_____" ] ], [ [ "### View an Image in More Detail", "_____no_output_____" ] ], [ [ "img = np.squeeze(images[1])\n\nfig = plt.figure(figsize = (12,12)) \nax = fig.add_subplot(111)\nax.imshow(img, cmap='gray')\nwidth, height = img.shape\nthresh = 
img.max()/2.5\nfor x in range(width):\n for y in range(height):\n val = round(img[x][y],2) if img[x][y] !=0 else 0\n ax.annotate(str(val), xy=(y,x),\n horizontalalignment='center',\n verticalalignment='center',\n color='white' if img[x][y]<thresh else 'black')", "_____no_output_____" ] ], [ [ "---\n## Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)\n\nThe architecture will be responsible for seeing as input a 784-dim Tensor of pixel values for each image, and producing a Tensor of length 10 (our number of classes) that indicates the class scores for an input image. This particular example uses two hidden layers and dropout to avoid overfitting.", "_____no_output_____" ] ], [ [ "import torch.nn as nn\nimport torch.nn.functional as F\n\n# define the NN architecture\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n # number of hidden nodes in each layer (512)\n hidden_1 = 512\n hidden_2 = 512\n # linear layer (784 -> hidden_1)\n self.fc1 = nn.Linear(28 * 28, hidden_1)\n # linear layer (n_hidden -> hidden_2)\n self.fc2 = nn.Linear(hidden_1, hidden_2)\n # linear layer (n_hidden -> 10)\n self.fc3 = nn.Linear(hidden_2, 10)\n # dropout layer (p=0.2)\n # dropout prevents overfitting of data\n self.dropout = nn.Dropout(0.2)\n\n def forward(self, x):\n # flatten image input\n x = x.view(-1, 28 * 28)\n # add hidden layer, with relu activation function\n x = F.relu(self.fc1(x))\n # add dropout layer\n x = self.dropout(x)\n # add hidden layer, with relu activation function\n x = F.relu(self.fc2(x))\n # add dropout layer\n x = self.dropout(x)\n # add output layer\n x = self.fc3(x)\n return x\n\n# initialize the NN\nmodel = Net()\nprint(model)", "Net(\n (fc1): Linear(in_features=784, out_features=512, bias=True)\n (fc2): Linear(in_features=512, out_features=512, bias=True)\n (fc3): Linear(in_features=512, out_features=10, bias=True)\n (dropout): Dropout(p=0.2)\n)\n" ] ], [ [ "### Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)\n\nIt's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross entropy function applies a softmax funtion to the output layer *and* then calculates the log loss.", "_____no_output_____" ] ], [ [ "# specify loss function (categorical cross-entropy)\ncriterion = nn.CrossEntropyLoss()\n\n# specify optimizer (stochastic gradient descent) and learning rate = 0.01\noptimizer = torch.optim.SGD(model.parameters(), lr=0.01)", "_____no_output_____" ] ], [ [ "---\n## Train the Network\n\nThe steps for training/learning from a batch of data are described in the comments below:\n1. Clear the gradients of all optimized variables\n2. Forward pass: compute predicted outputs by passing inputs to the model\n3. Calculate the loss\n4. Backward pass: compute gradient of the loss with respect to model parameters\n5. Perform a single optimization step (parameter update)\n6. Update average training loss\n\nThe following loop trains for 50 epochs; take a look at how the values for the training loss decrease over time. 
We want it to decrease while also avoiding overfitting the training data.", "_____no_output_____" ] ], [ [ "# number of epochs to train the model\nn_epochs = 50\n\nmodel.train() # prep model for training\n\nfor epoch in range(n_epochs):\n # monitor training loss\n train_loss = 0.0\n \n ###################\n # train the model #\n ###################\n for data, target in train_loader:\n # clear the gradients of all optimized variables\n optimizer.zero_grad()\n # forward pass: compute predicted outputs by passing inputs to the model\n output = model(data)\n # calculate the loss\n loss = criterion(output, target)\n # backward pass: compute gradient of the loss with respect to model parameters\n loss.backward()\n # perform a single optimization step (parameter update)\n optimizer.step()\n # update running training loss\n train_loss += loss.item()*data.size(0)\n \n # print training statistics \n # calculate average loss over an epoch\n train_loss = train_loss/len(train_loader.dataset)\n\n print('Epoch: {} \\tTraining Loss: {:.6f}'.format(\n epoch+1, \n train_loss\n ))", "Epoch: 1 \tTraining Loss: 0.833544\nEpoch: 2 \tTraining Loss: 0.321996\nEpoch: 3 \tTraining Loss: 0.247905\nEpoch: 4 \tTraining Loss: 0.201408\nEpoch: 5 \tTraining Loss: 0.169627\nEpoch: 6 \tTraining Loss: 0.147488\nEpoch: 7 \tTraining Loss: 0.129424\nEpoch: 8 \tTraining Loss: 0.116433\nEpoch: 9 \tTraining Loss: 0.104333\nEpoch: 10 \tTraining Loss: 0.094504\nEpoch: 11 \tTraining Loss: 0.085769\nEpoch: 12 \tTraining Loss: 0.080728\nEpoch: 13 \tTraining Loss: 0.073689\nEpoch: 14 \tTraining Loss: 0.067905\nEpoch: 15 \tTraining Loss: 0.063251\nEpoch: 16 \tTraining Loss: 0.058666\nEpoch: 17 \tTraining Loss: 0.055106\nEpoch: 18 \tTraining Loss: 0.050979\nEpoch: 19 \tTraining Loss: 0.048491\nEpoch: 20 \tTraining Loss: 0.046173\nEpoch: 21 \tTraining Loss: 0.044311\nEpoch: 22 \tTraining Loss: 0.041405\nEpoch: 23 \tTraining Loss: 0.038702\nEpoch: 24 \tTraining Loss: 0.036634\nEpoch: 25 \tTraining Loss: 0.035159\nEpoch: 26 \tTraining Loss: 0.033605\nEpoch: 27 \tTraining Loss: 0.030255\nEpoch: 28 \tTraining Loss: 0.029026\nEpoch: 29 \tTraining Loss: 0.028722\nEpoch: 30 \tTraining Loss: 0.027026\nEpoch: 31 \tTraining Loss: 0.026134\nEpoch: 32 \tTraining Loss: 0.022992\nEpoch: 33 \tTraining Loss: 0.023809\nEpoch: 34 \tTraining Loss: 0.022347\nEpoch: 35 \tTraining Loss: 0.021212\nEpoch: 36 \tTraining Loss: 0.020292\nEpoch: 37 \tTraining Loss: 0.019413\nEpoch: 38 \tTraining Loss: 0.019758\nEpoch: 39 \tTraining Loss: 0.017851\nEpoch: 40 \tTraining Loss: 0.017023\nEpoch: 41 \tTraining Loss: 0.016846\nEpoch: 42 \tTraining Loss: 0.016187\nEpoch: 43 \tTraining Loss: 0.015530\nEpoch: 44 \tTraining Loss: 0.014553\nEpoch: 45 \tTraining Loss: 0.014781\nEpoch: 46 \tTraining Loss: 0.013546\nEpoch: 47 \tTraining Loss: 0.013328\nEpoch: 48 \tTraining Loss: 0.012698\nEpoch: 49 \tTraining Loss: 0.012012\nEpoch: 50 \tTraining Loss: 0.012588\n" ] ], [ [ "---\n## Test the Trained Network\n\nFinally, we test our best model on previously unseen **test data** and evaluate it's performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and take a look at how this model performs on each class as well as looking at its overall loss and accuracy.", "_____no_output_____" ] ], [ [ "# initialize lists to monitor test loss and accuracy\ntest_loss = 0.0\nclass_correct = list(0. for i in range(10))\nclass_total = list(0. 
for i in range(10))\n\nmodel.eval() # prep model for training\n\nfor data, target in test_loader:\n # forward pass: compute predicted outputs by passing inputs to the model\n output = model(data)\n # calculate the loss\n loss = criterion(output, target)\n # update test loss \n test_loss += loss.item()*data.size(0)\n # convert output probabilities to predicted class\n _, pred = torch.max(output, 1)\n # compare predictions to true label\n correct = np.squeeze(pred.eq(target.data.view_as(pred)))\n # calculate test accuracy for each object class\n for i in range(batch_size):\n label = target.data[i]\n class_correct[label] += correct[i].item()\n class_total[label] += 1\n\n# calculate and print avg test loss\ntest_loss = test_loss/len(test_loader.dataset)\nprint('Test Loss: {:.6f}\\n'.format(test_loss))\n\nfor i in range(10):\n if class_total[i] > 0:\n print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (\n str(i), 100 * class_correct[i] / class_total[i],\n np.sum(class_correct[i]), np.sum(class_total[i])))\n else:\n print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))\n\nprint('\\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (\n 100. * np.sum(class_correct) / np.sum(class_total),\n np.sum(class_correct), np.sum(class_total)))", "Test Loss: 0.052876\n\nTest Accuracy of 0: 99% (972/980)\nTest Accuracy of 1: 99% (1127/1135)\nTest Accuracy of 2: 98% (1012/1032)\nTest Accuracy of 3: 98% (992/1010)\nTest Accuracy of 4: 98% (968/982)\nTest Accuracy of 5: 98% (875/892)\nTest Accuracy of 6: 98% (946/958)\nTest Accuracy of 7: 98% (1010/1028)\nTest Accuracy of 8: 97% (949/974)\nTest Accuracy of 9: 98% (990/1009)\n\nTest Accuracy (Overall): 98% (9841/10000)\n" ] ], [ [ "### Visualize Sample Test Results\n\nThis cell displays test images and their labels in this format: `predicted (ground-truth)`. The text will be green for accurately classified examples and red for incorrect predictions.", "_____no_output_____" ] ], [ [ "# obtain one batch of test images\ndataiter = iter(test_loader)\nimages, labels = dataiter.next()\n\n# get sample outputs\noutput = model(images)\n# convert output probabilities to predicted class\n_, preds = torch.max(output, 1)\n# prep images for display\nimages = images.numpy()\n\n# plot the images in the batch, along with predicted and true labels\nfig = plt.figure(figsize=(25, 4))\nfor idx in np.arange(20):\n ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])\n ax.imshow(np.squeeze(images[idx]), cmap='gray')\n ax.set_title(\"{} ({})\".format(str(preds[idx].item()), str(labels[idx].item())),\n color=(\"green\" if preds[idx]==labels[idx] else \"red\"))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7ffad3d083fc3cd3a2ca0571f96859cb290baff
13,704
ipynb
Jupyter Notebook
session3/session3.ipynb
casey-bowman/pb-exercises
00b814f2d1b8827db0ca79180db4b88a2cdfa087
[ "BSD-2-Clause" ]
null
null
null
session3/session3.ipynb
casey-bowman/pb-exercises
00b814f2d1b8827db0ca79180db4b88a2cdfa087
[ "BSD-2-Clause" ]
null
null
null
session3/session3.ipynb
casey-bowman/pb-exercises
00b814f2d1b8827db0ca79180db4b88a2cdfa087
[ "BSD-2-Clause" ]
null
null
null
34.606061
1,361
0.663383
[ [ [ "# import everything and define a test runner function\nfrom importlib import reload\nfrom helper import run_test\n\nimport ecc\nimport helper\nimport tx", "_____no_output_____" ], [ "# Signing Example\nfrom random import randint\n\nfrom ecc import G, N\nfrom helper import double_sha256\n\nsecret = 1800555555518005555555\nz = int.from_bytes(double_sha256(b'ECDSA is awesome!'), 'big')\nk = randint(0, 2**256)\nr = (k*G).x.num\ns = (z+r*secret) * pow(k, N-2, N) % N\nprint(hex(z), hex(r), hex(s))\nprint(secret*G)", "_____no_output_____" ], [ "# Verification Example\nfrom ecc import G, N, S256Point\n\nz = 0xbc62d4b80d9e36da29c16c5d4d9f11731f36052c72401a76c23c0fb5a9b74423\nr = 0x37206a0610995c58074999cb9767b87af4c4978db68c06e8e6e81d282047a7c6\ns = 0x8ca63759c1157ebeaec0d03cecca119fc9a75bf8e6d0fa65c841c8e2738cdaec\npoint = S256Point(0x04519fac3d910ca7e7138f7013706f619fa8f033e6ec6e09370ea38cee6a7574,\n 0x82b51eab8c27c66e26c858a079bcdf4f1ada34cec420cafc7eac1a42216fb6c4)\nu = z * pow(s, N-2, N) % N\nv = r * pow(s, N-2, N) % N\nprint((u*G + v*point).x.num == r)", "_____no_output_____" ] ], [ [ "### Exercise 1\n\n#### 1.1. Which sigs are valid?\n\n```\nP = (887387e452b8eacc4acfde10d9aaf7f6d9a0f975aabb10d006e4da568744d06c, \n 61de6d95231cd89026e286df3b6ae4a894a3378e393e93a0f45b666329a0ae34)\nz, r, s = ec208baa0fc1c19f708a9ca96fdeff3ac3f230bb4a7ba4aede4942ad003c0f60,\n ac8d1c87e51d0d441be8b3dd5b05c8795b48875dffe00b7ffcfac23010d3a395,\n 68342ceff8935ededd102dd876ffd6ba72d6a427a3edb13d26eb0781cb423c4\nz, r, s = 7c076ff316692a3d7eb3c3bb0f8b1488cf72e1afcd929e29307032997a838a3d,\n eff69ef2b1bd93a66ed5219add4fb51e11a840f404876325a1e8ffe0529a2c,\n c7207fee197d27c618aea621406f6bf5ef6fca38681d82b2f06fddbdce6feab6\n```\n\n#### 1.2. Make [these tests](/edit/session3/ecc.py) pass\n```\necc.py:S256Test:test_verify\necc.py:PrivateKeyTest:test_sign\n```", "_____no_output_____" ] ], [ [ "# Exercise 1.1\n\nfrom ecc import S256Point, G, N\n\npx = 0x887387e452b8eacc4acfde10d9aaf7f6d9a0f975aabb10d006e4da568744d06c\npy = 0x61de6d95231cd89026e286df3b6ae4a894a3378e393e93a0f45b666329a0ae34\n\nsignatures = (\n # (z, r, s)\n (0xec208baa0fc1c19f708a9ca96fdeff3ac3f230bb4a7ba4aede4942ad003c0f60,\n 0xac8d1c87e51d0d441be8b3dd5b05c8795b48875dffe00b7ffcfac23010d3a395,\n 0x68342ceff8935ededd102dd876ffd6ba72d6a427a3edb13d26eb0781cb423c4),\n (0x7c076ff316692a3d7eb3c3bb0f8b1488cf72e1afcd929e29307032997a838a3d,\n 0xeff69ef2b1bd93a66ed5219add4fb51e11a840f404876325a1e8ffe0529a2c,\n 0xc7207fee197d27c618aea621406f6bf5ef6fca38681d82b2f06fddbdce6feab6),\n)\n\n# initialize the public point\n# use: S256Point(x-coordinate, y-coordinate)\n\n# iterate over signatures\n # u = z / s, v = r / s\n # finally, uG+vP should have the x-coordinate equal to r", "_____no_output_____" ], [ "# Exercise 1.2\n\nreload(ecc)\nrun_test(ecc.S256Test('test_verify'))\nrun_test(ecc.PrivateKeyTest('test_sign'))", "_____no_output_____" ] ], [ [ "### Exercise 2\n\n#### 2.1. 
Verify the DER signature for the hash of \"ECDSA is awesome!\" for the given SEC pubkey\n\n`z = int.from_bytes(double_sha256('ECDSA is awesome!'), 'big')`\n\nPublic Key in SEC Format: \n0204519fac3d910ca7e7138f7013706f619fa8f033e6ec6e09370ea38cee6a7574\n\nSignature in DER Format: 304402201f62993ee03fca342fcb45929993fa6ee885e00ddad8de154f268d98f083991402201e1ca12ad140c04e0e022c38f7ce31da426b8009d02832f0b44f39a6b178b7a1\n", "_____no_output_____" ] ], [ [ "# Exercise 2.1\n\nfrom ecc import S256Point, Signature\nfrom helper import double_sha256\n\nder = bytes.fromhex('304402201f62993ee03fca342fcb45929993fa6ee885e00ddad8de154f268d98f083991402201e1ca12ad140c04e0e022c38f7ce31da426b8009d02832f0b44f39a6b178b7a1')\nsec = bytes.fromhex('0204519fac3d910ca7e7138f7013706f619fa8f033e6ec6e09370ea38cee6a7574')\n\n# message is the double_sha256 of the message \"ECDSA is awesome!\"\nz = int.from_bytes(double_sha256(b'ECDSA is awesome!'), 'big')\n\n# parse the der format to get the signature\n# parse the sec format to get the public key\n# use the verify method on S256Point to validate the signature", "_____no_output_____" ], [ "# WIF Example\nfrom helper import encode_base58_checksum\n\nsecret = 2**256 - 2**200\ns = secret.to_bytes(32, 'big')\nprint(encode_base58_checksum(b'\\x80'+s))\nprint(encode_base58_checksum(b'\\x80'+s+b'\\x01'))\nprint(encode_base58_checksum(b'\\xef'+s))\nprint(encode_base58_checksum(b'\\xef'+s+b'\\x01'))", "_____no_output_____" ] ], [ [ "### Exercise 3\n\nWIF is the serialization of a Private Key.\n\n#### 3.1. Find the WIF Format of the following:\n\n* \\\\(2^{256}-2^{199}\\\\), mainnet, compressed\n* \\\\(2^{256}-2^{201}\\\\), testnet, uncompressed\n* 0dba685b4511dbd3d368e5c4358a1277de9486447af7b3604a69b8d9d8b7889d, mainnet, uncompressed\n* 1cca23de92fd1862fb5b76e5f4f50eb082165e5191e116c18ed1a6b24be6a53f, testnet, compressed\n\n#### 3.2. Make [this test](/edit/session3/ecc.py) pass\n```\necc.py:PrivateKeyTest:test_wif\n```", "_____no_output_____" ] ], [ [ "# Exercise 3.1\nfrom helper import encode_base58_checksum\n\ncomponents = (\n # (secret, testnet, compressed)\n (2**256-2**199, False, True),\n (2**256-2**201, True, False),\n (0x0dba685b4511dbd3d368e5c4358a1277de9486447af7b3604a69b8d9d8b7889d, False, False),\n (0x1cca23de92fd1862fb5b76e5f4f50eb082165e5191e116c18ed1a6b24be6a53f, True, True),\n)\n\n# iterate through components\n # get the private key in 32-byte big-endian: num.to_bytes(32, 'big')\n # prepend b'\\x80' for mainnet, b'\\xef' for testnet\n # append b'\\x01' for compressed\n # base58 the whole thing with checksum\n # print the wif", "_____no_output_____" ], [ "# Exercise 3.2\n\nreload(ecc)\nrun_test(ecc.PrivateKeyTest('test_wif'))", "_____no_output_____" ] ], [ [ "### Exercise 4\n\n#### 4.1. Make [this test](/edit/session3/tx.py) pass\n```\ntx.py:TxTest:test_parse_version\n```", "_____no_output_____" ] ], [ [ "# Exercise 4.1\n\nreload(tx)\nrun_test(tx.TxTest('test_parse_version'))", "_____no_output_____" ] ], [ [ "### Exercise 5\n\n#### 5.1. Make [this test](/edit/session3/tx.py) pass\n```\ntx.py:TxTest:test_parse_inputs\n```", "_____no_output_____" ] ], [ [ "# Exercise 5.1\n\nreload(tx)\nrun_test(tx.TxTest('test_parse_inputs'))", "_____no_output_____" ] ], [ [ "### Exercise 6\n\n#### 6.1. Make [this test](/edit/session3/tx.py) pass\n```\ntx.py:TxTest:test_parse_outputs\n```", "_____no_output_____" ] ], [ [ "# Exercise 6.1\n\nreload(tx)\nrun_test(tx.TxTest('test_parse_outputs'))", "_____no_output_____" ] ], [ [ "### Exercise 7\n\n#### 7.1. 
Make [this test](/edit/session3/tx.py) pass\n```\ntx.py:TxTest:test_parse_locktime\n```", "_____no_output_____" ] ], [ [ "# Exercise 7.1\n\nreload(tx)\nrun_test(tx.TxTest('test_parse_locktime'))", "_____no_output_____" ] ], [ [ "### Exercise 8\n\n#### 8.1. What is the scriptSig from the second input in this tx?\n#### 8.2. What is the scriptPubKey and amount of the first output in this tx?\n#### 8.3. What is the amount for the second output?\n\n```\n010000000456919960ac691763688d3d3bcea9ad6ecaf875df5339e148a1fc61c6ed7a069e010000006a47304402204585bcdef85e6b1c6af5c2669d4830ff86e42dd205c0e089bc2a821657e951c002201024a10366077f87d6bce1f7100ad8cfa8a064b39d4e8fe4ea13a7b71aa8180f012102f0da57e85eec2934a82a585ea337ce2f4998b50ae699dd79f5880e253dafafb7feffffffeb8f51f4038dc17e6313cf831d4f02281c2a468bde0fafd37f1bf882729e7fd3000000006a47304402207899531a52d59a6de200179928ca900254a36b8dff8bb75f5f5d71b1cdc26125022008b422690b8461cb52c3cc30330b23d574351872b7c361e9aae3649071c1a7160121035d5c93d9ac96881f19ba1f686f15f009ded7c62efe85a872e6a19b43c15a2937feffffff567bf40595119d1bb8a3037c356efd56170b64cbcc160fb028fa10704b45d775000000006a47304402204c7c7818424c7f7911da6cddc59655a70af1cb5eaf17c69dadbfc74ffa0b662f02207599e08bc8023693ad4e9527dc42c34210f7a7d1d1ddfc8492b654a11e7620a0012102158b46fbdff65d0172b7989aec8850aa0dae49abfb84c81ae6e5b251a58ace5cfeffffffd63a5e6c16e620f86f375925b21cabaf736c779f88fd04dcad51d26690f7f345010000006a47304402200633ea0d3314bea0d95b3cd8dadb2ef79ea8331ffe1e61f762c0f6daea0fabde022029f23b3e9c30f080446150b23852028751635dcee2be669c2a1686a4b5edf304012103ffd6f4a67e94aba353a00882e563ff2722eb4cff0ad6006e86ee20dfe7520d55feffffff0251430f00000000001976a914ab0c0b2e98b1ab6dbf67d4750b0a56244948a87988ac005a6202000000001976a9143c82d7df364eb6c75be8c80df2b3eda8db57397088ac46430600\n```", "_____no_output_____" ] ], [ [ "# Exercise 8.1/8.2/8.3\n\nfrom io import BytesIO\nfrom tx import Tx\n\nhex_transaction = '010000000456919960ac691763688d3d3bcea9ad6ecaf875df5339e148a1fc61c6ed7a069e010000006a47304402204585bcdef85e6b1c6af5c2669d4830ff86e42dd205c0e089bc2a821657e951c002201024a10366077f87d6bce1f7100ad8cfa8a064b39d4e8fe4ea13a7b71aa8180f012102f0da57e85eec2934a82a585ea337ce2f4998b50ae699dd79f5880e253dafafb7feffffffeb8f51f4038dc17e6313cf831d4f02281c2a468bde0fafd37f1bf882729e7fd3000000006a47304402207899531a52d59a6de200179928ca900254a36b8dff8bb75f5f5d71b1cdc26125022008b422690b8461cb52c3cc30330b23d574351872b7c361e9aae3649071c1a7160121035d5c93d9ac96881f19ba1f686f15f009ded7c62efe85a872e6a19b43c15a2937feffffff567bf40595119d1bb8a3037c356efd56170b64cbcc160fb028fa10704b45d775000000006a47304402204c7c7818424c7f7911da6cddc59655a70af1cb5eaf17c69dadbfc74ffa0b662f02207599e08bc8023693ad4e9527dc42c34210f7a7d1d1ddfc8492b654a11e7620a0012102158b46fbdff65d0172b7989aec8850aa0dae49abfb84c81ae6e5b251a58ace5cfeffffffd63a5e6c16e620f86f375925b21cabaf736c779f88fd04dcad51d26690f7f345010000006a47304402200633ea0d3314bea0d95b3cd8dadb2ef79ea8331ffe1e61f762c0f6daea0fabde022029f23b3e9c30f080446150b23852028751635dcee2be669c2a1686a4b5edf304012103ffd6f4a67e94aba353a00882e563ff2722eb4cff0ad6006e86ee20dfe7520d55feffffff0251430f00000000001976a914ab0c0b2e98b1ab6dbf67d4750b0a56244948a87988ac005a6202000000001976a9143c82d7df364eb6c75be8c80df2b3eda8db57397088ac46430600'\n\n# bytes.fromhex to get the binary representation\n# create a stream using BytesIO()\n# Tx.parse() the stream\n# print tx's second input's scriptSig\n# print tx's first output's scriptPubKey\n# print tx's second output's amount", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7ffb3f56a80b5b79bddb0d710569cdb408f106e
94,829
ipynb
Jupyter Notebook
eunice012716/Week2/ch4/4.10/example_ch4_10.ipynb
eunice012716/Intern-Training
c3bbf42448a0b41e96d88569b6cfd57d78338716
[ "MIT" ]
1
2021-08-24T12:14:46.000Z
2021-08-24T12:14:46.000Z
eunice012716/Week2/ch4/4.10/example_ch4_10.ipynb
eunice012716/Intern-Training
c3bbf42448a0b41e96d88569b6cfd57d78338716
[ "MIT" ]
14
2021-07-09T07:48:35.000Z
2021-08-19T03:06:31.000Z
eunice012716/Week2/ch4/4.10/example_ch4_10.ipynb
eunice012716/Intern-Training
c3bbf42448a0b41e96d88569b6cfd57d78338716
[ "MIT" ]
11
2021-07-09T07:35:24.000Z
2021-08-15T07:19:43.000Z
41.537013
200
0.490989
[ [ [ "# 4.10. Predicting House Prices on Kaggle\n# 4.10.1. Downloading and Caching Datasets\nimport hashlib\nimport os\nimport tarfile\nimport zipfile\nimport requests\nimport os\nos.environ[\"KMP_DUPLICATE_LIB_OK\"] = \"TRUE\"\n\n#@save\nDATA_HUB = dict()\nDATA_URL = 'http://d2l-data.s3-accelerate.amazonaws.com/'", "_____no_output_____" ], [ "def download(name, cache_dir=os.path.join('..', 'data')): #@save\n \"\"\"Download a file inserted into DATA_HUB, return the local filename.\"\"\"\n assert name in DATA_HUB, f\"{name} does not exist in {DATA_HUB}.\"\n url, sha1_hash = DATA_HUB[name]\n os.makedirs(cache_dir, exist_ok=True)\n fname = os.path.join(cache_dir, url.split('/')[-1])\n if os.path.exists(fname):\n sha1 = hashlib.sha1()\n with open(fname, 'rb') as f:\n while True:\n data = f.read(1048576)\n if not data:\n break\n sha1.update(data)\n if sha1.hexdigest() == sha1_hash:\n return fname # Hit cache\n print(f'Downloading {fname} from {url}...')\n r = requests.get(url, stream=True, verify=True)\n with open(fname, 'wb') as f:\n f.write(r.content)\n return fname", "_____no_output_____" ], [ "def download_extract(name, folder=None): #@save\n \"\"\"Download and extract a zip/tar file.\"\"\"\n fname = download(name)\n base_dir = os.path.dirname(fname)\n data_dir, ext = os.path.splitext(fname)\n if ext == '.zip':\n fp = zipfile.ZipFile(fname, 'r')\n elif ext in ('.tar', '.gz'):\n fp = tarfile.open(fname, 'r')\n else:\n assert False, 'Only zip/tar files can be extracted.'\n fp.extractall(base_dir)\n return os.path.join(base_dir, folder) if folder else data_dir\n\ndef download_all(): #@save\n \"\"\"Download all files in the DATA_HUB.\"\"\"\n for name in DATA_HUB:\n download(name)", "_____no_output_____" ], [ "# 4.10.2. Kaggle\n# If pandas is not installed, please uncomment the following line:\n# !pip install pandas\n\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport torch\nfrom torch import nn\nfrom d2l import torch as d2l", "_____no_output_____" ], [ "DATA_HUB['kaggle_house_train'] = ( #@save\n DATA_URL + 'kaggle_house_pred_train.csv',\n '585e9cc93e70b39160e7921475f9bcd7d31219ce')\n\nDATA_HUB['kaggle_house_test'] = ( #@save\n DATA_URL + 'kaggle_house_pred_test.csv',\n 'fa19780a7b011d9b009e8bff8e99922a8ee2eb90')", "_____no_output_____" ], [ "train_data = pd.read_csv(download('kaggle_house_train'))\ntest_data = pd.read_csv(download('kaggle_house_test'))", "_____no_output_____" ], [ "print(train_data.shape)\nprint(test_data.shape)", "(1460, 81)\n(1459, 80)\n" ], [ "print(train_data.iloc[0:4, [0, 1, 2, 3, -3, -2, -1]])", " Id MSSubClass MSZoning LotFrontage SaleType SaleCondition SalePrice\n0 1 60 RL 65.0 WD Normal 208500\n1 2 20 RL 80.0 WD Normal 181500\n2 3 60 RL 68.0 WD Normal 223500\n3 4 70 RL 60.0 WD Abnorml 140000\n" ], [ "all_features = pd.concat((train_data.iloc[:, 1:-1], test_data.iloc[:, 1:]))", "_____no_output_____" ], [ "# 4.10.4. 
Data Preprocessing\n# If test data were inaccessible, mean and standard deviation could be\n# calculated from training data\nnumeric_features = all_features.dtypes[all_features.dtypes != 'object'].index\nall_features[numeric_features] = all_features[numeric_features].apply(\n lambda x: (x - x.mean()) / (x.std()))\n# After standardizing the data all means vanish, hence we can set missing\n# values to 0\nall_features[numeric_features] = all_features[numeric_features].fillna(0)", "_____no_output_____" ], [ "# `Dummy_na=True` considers \"na\" (missing value) as a valid feature value, and\n# creates an indicator feature for it\nall_features = pd.get_dummies(all_features, dummy_na=True)\nall_features.shape", "_____no_output_____" ], [ "n_train = train_data.shape[0]\ntrain_features = torch.tensor(all_features[:n_train].values,\n dtype=torch.float32)\ntest_features = torch.tensor(all_features[n_train:].values,\n dtype=torch.float32)\ntrain_labels = torch.tensor(train_data.SalePrice.values.reshape(-1, 1),\n dtype=torch.float32)", "_____no_output_____" ], [ "# 4.10.5. Training\nloss = nn.MSELoss()\nin_features = train_features.shape[1]\n\ndef get_net():\n net = nn.Sequential(nn.Linear(in_features, 1))\n return net", "_____no_output_____" ], [ "def log_rmse(net, features, labels):\n # To further stabilize the value when the logarithm is taken, set the\n # value less than 1 as 1\n clipped_preds = torch.clamp(net(features), 1, float('inf'))\n rmse = torch.sqrt(loss(torch.log(clipped_preds), torch.log(labels)))\n return rmse.item()", "_____no_output_____" ], [ "def train(net, train_features, train_labels, test_features, test_labels,\n num_epochs, learning_rate, weight_decay, batch_size):\n train_ls, test_ls = [], []\n train_iter = d2l.load_array((train_features, train_labels), batch_size)\n # The Adam optimization algorithm is used here\n optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate,\n weight_decay=weight_decay)\n for epoch in range(num_epochs):\n for X, y in train_iter:\n optimizer.zero_grad()\n l = loss(net(X), y)\n l.backward()\n optimizer.step()\n train_ls.append(log_rmse(net, train_features, train_labels))\n if test_labels is not None:\n test_ls.append(log_rmse(net, test_features, test_labels))\n return train_ls, test_ls", "_____no_output_____" ], [ "# 4.10.6. 
K-Fold Cross-Validation\ndef get_k_fold_data(k, i, X, y):\n assert k > 1\n fold_size = X.shape[0] // k\n X_train, y_train = None, None\n for j in range(k):\n idx = slice(j * fold_size, (j + 1) * fold_size)\n X_part, y_part = X[idx, :], y[idx]\n if j == i:\n X_valid, y_valid = X_part, y_part\n elif X_train is None:\n X_train, y_train = X_part, y_part\n else:\n X_train = torch.cat([X_train, X_part], 0)\n y_train = torch.cat([y_train, y_part], 0)\n return X_train, y_train, X_valid, y_valid", "_____no_output_____" ], [ "def k_fold(k, X_train, y_train, num_epochs, learning_rate, weight_decay,\n batch_size):\n train_l_sum, valid_l_sum = 0, 0\n for i in range(k):\n data = get_k_fold_data(k, i, X_train, y_train)\n net = get_net()\n train_ls, valid_ls = train(net, *data, num_epochs, learning_rate,\n weight_decay, batch_size)\n train_l_sum += train_ls[-1]\n valid_l_sum += valid_ls[-1]\n if i == 0:\n d2l.plot(list(range(1, num_epochs + 1)), [train_ls, valid_ls],\n xlabel='epoch', ylabel='rmse', xlim=[1, num_epochs],\n legend=['train', 'valid'], yscale='log')\n print(f'fold {i + 1}, train log rmse {float(train_ls[-1]):f}, '\n f'valid log rmse {float(valid_ls[-1]):f}')\n return train_l_sum / k, valid_l_sum / k", "_____no_output_____" ], [ "# 4.10.7. Model Selection\nk, num_epochs, lr, weight_decay, batch_size = 5, 100, 5, 0, 64\ntrain_l, valid_l = k_fold(k, train_features, train_labels, num_epochs, lr,\n weight_decay, batch_size)\nprint(f'{k}-fold validation: avg train log rmse: {float(train_l):f}, '\n f'avg valid log rmse: {float(valid_l):f}')", "fold 1, train log rmse 0.169511, valid log rmse 0.156951\nfold 2, train log rmse 0.162732, valid log rmse 0.193682\nfold 3, train log rmse 0.163785, valid log rmse 0.168519\nfold 4, train log rmse 0.168099, valid log rmse 0.154362\nfold 5, train log rmse 0.162367, valid log rmse 0.182431\n5-fold validation: avg train log rmse: 0.165299, avg valid log rmse: 0.171189\n" ], [ "# 4.10.8. Submitting Predictions on Kaggle\ndef train_and_pred(train_features, test_feature, train_labels, test_data,\n num_epochs, lr, weight_decay, batch_size):\n net = get_net()\n train_ls, _ = train(net, train_features, train_labels, None, None,\n num_epochs, lr, weight_decay, batch_size)\n d2l.plot(np.arange(1, num_epochs + 1), [train_ls], xlabel='epoch',\n ylabel='log rmse', xlim=[1, num_epochs], yscale='log')\n print(f'train log rmse {float(train_ls[-1]):f}')\n # Apply the network to the test set\n preds = net(test_features).detach().numpy()\n # Reformat it to export to Kaggle\n test_data['SalePrice'] = pd.Series(preds.reshape(1, -1)[0])\n submission = pd.concat([test_data['Id'], test_data['SalePrice']], axis=1)\n submission.to_csv('submission.csv', index=False)", "_____no_output_____" ], [ "train_and_pred(train_features, test_features, train_labels, test_data,\n num_epochs, lr, weight_decay, batch_size)", "train log rmse 0.162229\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7ffbec753a80c8957e09d5abf3c0956f4ed69aa
453,936
ipynb
Jupyter Notebook
doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb
jmconroy/vectorizers
8f1fd2725e660a7ee81f9d6dd59ea3dfa50410ea
[ "BSD-3-Clause" ]
54
2020-03-06T08:14:06.000Z
2022-03-04T08:40:03.000Z
doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb
jmconroy/vectorizers
8f1fd2725e660a7ee81f9d6dd59ea3dfa50410ea
[ "BSD-3-Clause" ]
54
2020-03-19T00:14:02.000Z
2022-03-16T13:53:45.000Z
doc/token_cooccurrence_vectorizer_multi_labelled_cyber_example.ipynb
jmconroy/vectorizers
8f1fd2725e660a7ee81f9d6dd59ea3dfa50410ea
[ "BSD-3-Clause" ]
15
2020-03-04T21:17:59.000Z
2022-03-15T20:37:50.000Z
434.804598
175,294
0.807748
[ [ [ "## Sequences of Multi-labelled data\nEarlier we have examined the notion that documents can the thought of a sequence of tokens along with a mapping from a set of labels to these tokens. Ideas like stemming and lemmatization are linguistic methods for applying different label mappings to these token sequences. An example of this might be the sequence of tokens assocated with the sentence:\n<pre>\"the cats sat in the cat box\"</pre>\nThis can be decomposed into the sequence of tokens:\n<pre>[the, cats, sat, in, the, cat, box]</pre>\nThe notion of token and label are often confounded but it can be useful to keep them separated mentally. Here we a sequence of six tokens _token_1_ through _token_6_ along with a single label assocated with each token. In this case _'the' $\\mapsto$ token_1_ and _'the' $\\mapsto$ token_5_. They are different tokens with the same label.\n\nWe take advantage of these shared labels in our TokenCooccurrenceVectorizer where we embed token labels into a vector space based on the locality of their associated tokens within our sequence. That is two labels are similar if their surrounding tokens share many common labels. This is covered in more explicite detail in:\n\n[TODO: Add Link to TokenCooccurrenceVectorizer readme](https://noteboook_link_here)\n\nThat said there is nothing which necessitates that we associate a single label with each token. In fact, as we mentioned previously, stemming can be thought of as a way of contracting the label space by contracting a set of labels such as _(tabby, feline, cat, cats)_ to a single token label _(cat)_. Traditionally in NLP, we would replace the label mapping for all the tokens previously associated with various feline tokens with our new canonical token _cat_. This has the twin advantages of simplifying our label space, making it easier to analyse, and seeing more examples of _cat_ in our text, potentially improving the embedding of this label by providing more contexts in which it was used. If we didn't care about simplifying our token space we could, in fact, get the second benefit without the first. Instead of replacing our label mapping to each of these tokens we could instead map from a set of labels to each token instead of from a single labels. Thus, in our previous example of *\"the cats sat in the cat box\"* _{cat, cats} $\\mapsto$ token_2_. \n\nOne oddity this introduces is that the contex of _sat_ now has both the labels _cat_ and _cats_ occuring at a distance of 1 from it. This can become problematic if taken to extreme levels in that that very wide context windows when combined with large label sets could cause a combinatorial explosion. When used in moderation this can be a powerful and quite useful technique in that it allows us a great deal of freedom when moving beyond the text domain to other sequence domains. The other oddity that this can introduce is that the labels _cat_ and _cats_ now cooccur within a context window at distance zero from each other. This has the nice property of encoding the label similarity that has been specified by our mapping int our embedding. \n\nA great example of the usefulness of this multi-label framework occurs when looking at the cyber defense domain. The [Operationally Transparent Cyber (OpTC)](https://github.com/FiveDirections/OpTC-data) data is a DARPA released data set that examines the sequence of computer events on a small network of computers and asks researchers to search for known malicious activity within these events. 
More details on this dataset, its content, and its utility for cyber defense analysis can be found in the recent paper [Analyzing the Usefulness of the DARPA OpTC Dataset in Cyber Threat Detection Research](https://arxiv.org/pdf/2103.03080.pdf). \n\nThe interesting thing for us to note is that this data was described as a **\"sequence of events\"**. What differentiates cyber **events** from those **tokens** we spoke about in NLP? The short answer is nothing, other than the fact that they can't be easily represented by a single label. Now we have a framework for representing such multi-labelled tokens or events. Let's see what that might look like in practice.\n\nCyber events come in a variety of flavours, the most common being FLOW events. Flow events represent data being transmitted between two computers. They are commonly summarized via a set of descriptions such as:\n\n<pre>[process, source_IP, source_port, destination_IP, destination_port, protocol, time_stamp]</pre>\n\nHere a process instantiated a connection from a source IP address and port to a destination IP address and port and sent data over that connection at a particular time.\n\nPreviously it might have been difficult to think about how we might apply something like a \nTokenCooccurrenceVectorizer to this sort of data, but with our new notion of multi-labelled tokens we quickly realize that flow events are really just tokens with interesting labels associated with them and a sequence induced via our time_stamps. This should allow us to embed these labels into a useful vector space, with similar tokens being placed near other tokens that appear within the same event and have similar preceding and following events.\n\nLet's walk through a few increasingly complex examples.", "_____no_output_____" ], [ "# Import Some libraries\n\nWe'll need TokenCooccurrenceVectorizer from our vectorizers library along with a few helper functions for dealing with our data and plotting it.", "_____no_output_____" ] ], [ [ "from vectorizers import TokenCooccurrenceVectorizer\nfrom vectorizers.utils import flatten\n\nimport pandas as pd\nimport numpy as np\nimport umap\nimport umap.plot", "_____no_output_____" ] ], [ [ "We'll add some bokeh imports for easy interactive plots", "_____no_output_____" ] ], [ [ "import bokeh.io\nbokeh.io.output_notebook()", "_____no_output_____" ] ], [ [ "# Let's fetch some data\n\nThe OpTC data is a bit difficult to pull and parse into easy-to-process formats. I will leave that as an exercise to the reader. A colleague of ours has pulled this data and restructured it into parquet files distributed across a nice file structure, but that is outside of the scope of this notebook. \n\nFor the purposes of this notebook we will load simple pandas data frames that were derived from this events data. Each event is a row in one of these data frames and we have a wide set of columns representing all of the summary labels that one might use to represent our various types of cyber events.\n\nIn order for this example to be easily reproducible on a reasonably small machine, we limit our initial analysis to one day's worth of FLOW MESSAGE data on a single host. 
This is for demonstration purposes only, and a useful analysis of this data should be broadened to incorporate more of this data.", "_____no_output_____" ] ], [ [ "flows_onehost_oneday = pd.read_csv(\"optc_flows_onehost_oneday.csv\")\nflows_onehost_oneday.shape", "_____no_output_____" ], [ "flows_onehost_oneday.columns", "_____no_output_____" ] ], [ [ "You'll notice that we have just over a million events that are being described by a wide variety of descriptive columns. Since we've limited our data to network flow data, many of these columns aren't populated for this particular data set. For a more detailed description of this data and these fields I point the reader to the paper we mentioned earlier, [Analyzing the Usefulness of the DARPA OpTC Dataset in Cyber Threat Detection Research](https://arxiv.org/pdf/2103.03080.pdf). \n\nFor the purposes of this notebook we are most interested in:\n\n<pre>[process, source_IP, source_port, destination_IP, destination_port, protocol, time_stamp]</pre>\n\nIn this data these correspond to the fields:\n\n<pre>['image_path', 'src_ip', 'src_port','dest_ip', 'dest_port', 'l4protocol', 'timestamp']</pre>", "_____no_output_____" ] ], [ [ "flow_variables = ['image_path', 'src_ip', 'src_port','dest_ip', 'dest_port', 'l4protocol']\ncategorical_variables = flow_variables \nsort_by = ['timestamp']", "_____no_output_____" ] ], [ [ "# Restructure our data\n\nNow we need to restructure this data into a format for easy consumption via our TokenCooccurrenceVectorizer.\n\nWe will convert each row of our data_frame into a multi-labelled event. To do that we'll need to convert from a list of categorical column values into a list of labels. An easy way to define a label associated with a categorical column is in the form of the string <code>f'{column_name}:{value}'</code>.", "_____no_output_____" ], [ "We'll first ensure that our events are properly ordered by time", "_____no_output_____" ] ], [ [ "flows_sorted = flows_onehost_oneday.sort_values(by = 'timestamp')", "_____no_output_____" ] ], [ [ "Now we limit ourselves to the columns of interest for these particular events.", "_____no_output_____" ] ], [ [ "flows_df = flows_sorted[flows_sorted.columns.intersection(categorical_variables)]\nflows_df.shape", "_____no_output_____" ], [ "flows_df.head(3)", "_____no_output_____" ] ], [ [ "Now we'll quickly iterate through this dataframe and convert it into our list of lists format.", "_____no_output_____" ] ], [ [ "def categorical_columns_to_list(data_frame, column_names):\n    \"\"\"\n    Takes a data frame and a set of columns and represents each row as a list of the appropriate non-empty columns\n    of the form column_name:value.\n    \"\"\"\n    label_list = pd.Series([[f'{k}:{v}' for k, v in zip(column_names, t) if v is not None]\n                            for t in zip(*map(data_frame.get, column_names))])\n    return label_list", "_____no_output_____" ], [ "flow_labels = categorical_columns_to_list(flows_df, categorical_variables)\nlen(flow_labels)", "_____no_output_____" ], [ "flow_labels[0]", "_____no_output_____" ] ], [ [ "# TokenCooccurrenceVectorizer\n\nWe initially only embed labels that occur at least 20 times within our day's events. This prevents us from attempting to embed labels that we have very limited data for. \n\nWe will initially select a <code>window_radii=2</code> in order to include some very limited sequence information. 
The presumption here is that the flow messages that occurred near each other in the sequence of flow events are related to each other.\n\nLastly we set <code>multi_labelled_tokens=True</code> to convey that we are dealing with a sequence of multi-labelled events.", "_____no_output_____" ] ], [ [ "word_vectorizer = TokenCooccurrenceVectorizer(\n    min_occurrences= 20,\n    window_radii=2,\n    multi_labelled_tokens=True).fit(flow_labels)\nword_vectors = word_vectorizer.reduce_dimension()\n\nprint(f\"This constructs an embedding of {word_vectorizer.cooccurrences_.shape[0]} labels represented by their\",\n      f\"cooccurrence with {word_vectorizer.cooccurrences_.shape[1]} labels occurring before and after them.\\n\",\n      f\"We have then reduced this space to a {word_vectors.shape[1]} dimensional representation.\")", "This constructs an embedding of 15014 labels represented by their cooccurrence with 30028 labels occurring before and after them.\n We have then reduced this space to a 150 dimensional representation.\n" ] ], [ [ "For the purposes of visualization we will use our UMAP algorithm to embed this data into two dimensional space.", "_____no_output_____" ] ], [ [ "model = umap.UMAP(n_neighbors=30, metric='cosine', unique=True, random_state=42).fit(word_vectors)", "_____no_output_____" ], [ "hover_df = pd.DataFrame({'label':word_vectorizer.token_label_dictionary_.keys()})\nevent_type = hover_df.label.str.split(':', n=1, expand=True)\nevent_type.columns = ['type','value']\numap.plot.points(model, theme='fire', labels=event_type['type']);", "_____no_output_____" ] ], [ [ "A little exploration of this space quickly reveals that our label space is overwhelmed by source ports (and some destination ports) with values in the range 40,000 to 60,000. A little consultation with subject matter experts quickly reveals that these are so-called ephemeral ports: a pre-established range of ports that are used to establish temporary connections and are then thrown away to be re-used by other processes later. The disposable nature of these ports explains why there is such a plethora of them within our label space. In fact what we see here are clusters of these ports that are all used by the same process and IP pairs over the course of our day. \n\nThough it's encouraging that we can easily detect this structure, it is essentially meaningless structure since it tells us nothing about the flows or processes and is completely unstable over any period of time. 
As such, we will want to remove these tokens from our space.\n\nFortunately TokenCooccurrenceVectorizer() has an <code>excluded_token_regex</code> parameter which will allow us to remove these tokens with very little work.", "_____no_output_____" ] ], [ [ "word_vectorizer = TokenCooccurrenceVectorizer(\n    min_occurrences= 20,\n    window_radii=2,\n    excluded_token_regex='(src\\_port|dest\\_port):[4-6][0-9]{4}',\n    multi_labelled_tokens=True).fit(flow_labels)\nword_vectors = word_vectorizer.reduce_dimension()\n\nprint(f\"This constructs an embedding of {word_vectorizer.cooccurrences_.shape[0]} labels represented by their\",\n      f\"cooccurrence with {word_vectorizer.cooccurrences_.shape[1]} labels occurring before and after them.\\n\",\n      f\"We have then reduced this space to a {word_vectors.shape[1]} dimensional representation.\")", "This constructs an embedding of 3245 labels represented by their cooccurrence with 6490 labels occurring before and after them.\n We have then reduced this space to a 150 dimensional representation.\n" ] ], [ [ "As before we'll reduce this 150 dimensional representation to a two dimensional representation for visualization and exploration. Since we are already using subject matter knowledge to enrich our analysis, we will continue in this vein and label our IP addresses with whether they are internal or external addresses. Internal IP addresses are of the form <code>10.\\*.\\*.\\*</code>.", "_____no_output_____" ] ], [ [ "model = umap.UMAP(n_neighbors=30, metric='cosine', unique=True, random_state=42).fit(word_vectors)", "_____no_output_____" ], [ "hover_df = pd.DataFrame({'label':word_vectorizer.token_label_dictionary_.keys()})\ninternal_bool = hover_df.label.str.contains(\"ip:10\\.\")\nevent_type = hover_df.label.str.split(':', n=1, expand=True)\nevent_type.columns = ['type','value']\nevent_type['type'][internal_bool] = event_type['type'][internal_bool] + '_internal'\numap.plot.points(model, theme='fire', labels=event_type['type']);", "_____no_output_____" ] ], [ [ "This provides a nice structure over our token label space. We can see interesting mixtures of internal and external source IP spaces, with connections making use of specific source and destination ports separating off nicely into their own clusters.\n\nThe next step would be to look at your data by building an interactive plot and starting to explore these clusters in earnest.", "_____no_output_____" ] ], [ [ "p = umap.plot.interactive(model, theme='fire', labels=event_type['type'], hover_data=hover_df, point_size=3, width=600, height=600)\numap.plot.show(p)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
e7ffc35225efb6340c6103d150d4f8b3220e13bb
17,129
ipynb
Jupyter Notebook
test_notebooks/Untitled.ipynb
clarka34/jupyter_scisheets_widget
b411b2b0a9692cd1252b8d8b9d6cebcb9c53c8e8
[ "BSD-3-Clause" ]
3
2017-09-26T10:00:14.000Z
2018-04-23T02:02:18.000Z
test_notebooks/Untitled.ipynb
clarka34/jupyter_scisheets_widget
b411b2b0a9692cd1252b8d8b9d6cebcb9c53c8e8
[ "BSD-3-Clause" ]
1
2018-04-23T03:38:39.000Z
2018-04-26T18:47:25.000Z
test_notebooks/Untitled.ipynb
clarka34/jupyter_scisheets_widget
b411b2b0a9692cd1252b8d8b9d6cebcb9c53c8e8
[ "BSD-3-Clause" ]
1
2018-04-26T18:25:24.000Z
2018-04-26T18:25:24.000Z
32.751434
140
0.479538
[ [ [ "%%javascript\n\n// load css if it's not already there: http://stackoverflow.com/a/4724676/7782\nfunction loadcss(url) {\n if (!$(\"link[href='\" + url + \"']\").length)\n $('<link href=\"' + url + '\" rel=\"stylesheet\">').appendTo(\"head\");\n}\n\nloadcss(\"http://handsontable.com/dist/jquery.handsontable.full.css\");\nloadcss(\"http://handsontable.com/demo/css/samples.css?20140331\");", "_____no_output_____" ], [ "from ipywidgets import widget", "_____no_output_____" ], [ "%%javascript\nvar widgets = require('@jupyter-widgets/base');\nvar _ = require('underscore');\n// var handsontable_css = require('http://handsontable.com/dist/jquery.handsontable.full.css');\n\n// var handsontable = require(['http://handsontable.com/dist/jquery.handsontable.full.js']);\n\n\nimport Handsontable from 'http://handsontable.com/dist/jquery.handsontable.full.js';\n\n// var SciSheetTableModel = widgets.DOMWidgetModel.extend({\n// defaults: _.extend(_.result(this, 'widgets.DOMWidgetModel.prototype.defaults'), {\n// _model_name : 'SciSheetTableModel',\n// _view_name : 'SciSheetTableView',\n// _model_module : 'jupyter_scisheets_widget',\n// _view_module : 'jupyter_scisheets_widget',\n// _model_module_version : '0.1.0',\n// _view_module_version : '0.1.0'\n// })\n// });", "_____no_output_____" ], [ "%%javascript\nvar table_id = 0;\n\nvar widgets = require('@jupyter-widgets/base');\nvar _ = require('underscore');\n\nvar SciSheetTableModel = widgets.DOMWidgetModel.extend({\n defaults: _.extend(_.result(this, 'widgets.DOMWidgetModel.prototype.defaults'), {\n _model_name : 'SciSheetTableModel',\n _view_name : 'SciSheetTableView',\n _model_module : 'jupyter_scisheets_widget',\n _view_module : 'jupyter_scisheets_widget',\n _model_module_version : '0.1.0',\n _view_module_version : '0.1.0'\n })\n});\n\n\n// Custom View. 
Renders the widget model.\nvar SciSheetTableView = widgets.DOMWidgetView.extend({\n render: function(){\n // CREATION OF THE WIDGET IN THE NOTEBOOK.\n\n // Add a <div> in the widget area.\n this.$table = $('<div />')\n .attr('id', 'table_' + (table_id++))\n .appendTo(this.$el);\n\n this.$table.handsontable({\n });\n },\n\n update: function() {\n // PYTHON --> JS UPDATE.\n\n // Get the model's value (JSON)\n var json_model = this.model.get('_model_data');\n var json_header = this.model.get('_model_header');\n var json_row_header = this.model.get('_model_row_header');\n\n console.log(json_row_header);\n\n // Get the model's JSON string, and parse it.\n var datamod = JSON.parse(json_model);\n var headermod = JSON.parse(json_header);\n var rowheadermod = JSON.parse(json_row_header);\n\n console.log(headermod);\n console.log(rowheadmod);\n\n // Give it to the Handsontable widget.\n this.$table.handsontable({\n data: datamod,\n colHeaders: headermod,\n rowHeaders: rowheadermod\n });\n\n // Don't touch this...\n return SciSheetTableView.__super__.update.apply(this);\n }, \n\n // Tell Backbone to listen to the change event of input controls.\n\n events: {\"change\": \"handle_table_change\"},\n\n handle_table_change: function(event) {\n // JS --> PYTHON UPDATE.\n\n // Get table instance\n var ht = this.$table.handsontable('getInstance');\n\n // Get the data and serialize it\n var json_vals = JSON.stringify(ht.getData());\n var col_vals = JSON.stringify(ht.getColHeader());\n var row_vals = JSON.stringify(ht.getRowHeader());\n\n // Update the model with the JSON string.\n this.model.set('_model_data', json_vals);\n this.model.set('_model_header', col_vals);\n this.model.set('_model_row_header', row_vals);\n\n // Don't touch this...\n this.touch();\n }, \n\n});\n\nmodule.exports = {\n SciSheetTableModel: SciSheetTableModel,\n SciSheetTableView: SciSheetTableView\n};\n\n", "_____no_output_____" ], [ "define('hello', [\"@jupyter-widgets/base\"], function(widgets) {\n\n var HelloView = widgets.DOMWidgetView.extend({\n\n // Render the view.\n render: function() {\n this.value_changed();\n this.model.on('change:value', this.value_changed, this);\n }, \n \n value_changed: function() {\n this.el.textContent = this.model.get('value');\n },\n });\n\n return {\n HelloView: HelloView\n };\n});", "_____no_output_____" ], [ "%%html\n\n<p style=\"font-size: 20px\"><strong>Handsontable</strong> is a minimalistic Excel-like <span class=\"nobreak\">data grid</span>\n editor\n for HTML, JavaScript &amp; jQuery</p>\n\n<div id=\"hs_example\" class=\"handsontable\"></div>", "_____no_output_____" ], [ "%%javascript\nrequire.config({\n paths: {\n handsontable: \"http://handsontable.com/dist/jquery.handsontable.full.js\"\n }\n});\n\n\nrequire(['handsontable'], function (handsontable){\n console.log(\"in require->handsontable\");\n \n var data = [\n [\"\", \"Maserati\", \"Mazda\", \"Mercedes\", \"Mini\", \"Mitsubishi\"],\n [\"2009\", 0, 2941, 4303, 354, 5814],\n [\"2010\", 5, 2905, 2867, 412, 5284],\n [\"2011\", 4, 2517, 4822, 552, 6127],\n [\"2012\", 2, 2422, 5399, 776, 4151]\n ];\n \n $('#hs_example').handsontable({\n data: data,\n minSpareRows: 1,\n colHeaders: true,\n contextMenu: true\n });\n \n \n// function bindDumpButton() {\n// $('body').on('click', 'button[name=dump]', function () {\n// var dump = $(this).data('dump');\n// var $container = $(dump);\n// console.log('data of ' + dump, $container.handsontable('getData'));\n// });\n// }\n// bindDumpButton(); \n \n});\n\nhandsontable()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
e7ffc7812bfc38f2a9926d51cfdb44149fd9168e
30,565
ipynb
Jupyter Notebook
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
2c4e6a2ef94bc9e8980a116a6ff11318ae91a24d
[ "MIT" ]
137
2018-07-06T15:46:08.000Z
2021-11-18T10:50:37.000Z
introductory/Intro_Numpy.ipynb
lmartak/PracticalSessions
2c4e6a2ef94bc9e8980a116a6ff11318ae91a24d
[ "MIT" ]
1
2018-08-24T02:08:28.000Z
2018-09-03T13:13:52.000Z
introductory/Intro_Numpy.ipynb
tmlss2018/PracticalSessions
7eb4336e96498b486b8ebf5120ade65bffaaba0b
[ "Unlicense" ]
44
2018-07-06T14:29:37.000Z
2021-08-07T04:09:28.000Z
25.684874
242
0.417994
[ [ [ "# Numpy\n\n\" NumPy is the fundamental package for scientific computing with Python. It contains among other things:\n\n* a powerful N-dimensional array object\n* sophisticated (broadcasting) functions\n* useful linear algebra, Fourier transform, and random number capabilities \"\n\n\n-- From the [NumPy](http://www.numpy.org/) landing page.\n\n", "_____no_output_____" ], [ "Before learning about numpy, we introduce..\n\n### The NXOR Function\n\nMany of the exercises involve working with the $\\mathrm{NXOR} \\colon \\; [-1, 1]^2 \\rightarrow \\{-1, +1\\}$ function defined as \n\n$$ (x_1, x_2) \\longmapsto \\mathrm{sgn}(x_1 \\cdot x_2) .$$\n\nwhere for $x_1 \\cdot x_2 = 0$ we let $\\mathrm{NXOR}(x_1, x_2) = -1$.\n\nWe can visualize this function as\n\n![A set of points in \\[-1, +1\\]^2 with green and red markers denoting the value assigned to them by the NXOR function](https://github.com/tmlss2018/PracticalSessions/blob/master/assets/nxor_labels.png?raw=true)\n\nwhere each point in $ [-1, 1]^2$ is marked by green (+1) or red (-1) according to the value assigned to it by the NXOR function.\n\n\n", "_____no_output_____" ], [ "\nOver the course of the intro lab exercises we will\n\n1. Generate such data with numpy.\n2. Create the plot above with matplotlib.\n3. Train a model to learn this function.\n", "_____no_output_____" ], [ "### Setup and imports. Run the following cell.", "_____no_output_____" ] ], [ [ "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\n", "_____no_output_____" ] ], [ [ "### Random numbers in numpy", "_____no_output_____" ] ], [ [ "np.random.random((3, 2)) # Array of shape (3, 2), entries uniform in [0, 1).", "_____no_output_____" ] ], [ [ "Note that (as usual in computing) numpy produces pseudo-random numbers based on a seed, or more precisely a random state. In order to make random sequences and calculations based on reproducible, use\n\n* the [`np.random.seed()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.seed.html) function to set the default global seed, or\n* the [`np.random.RandomState`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.RandomState.html) class which is a container for a pseudo-random number generator and exposes methods for generating random numbers.\n", "_____no_output_____" ] ], [ [ "np.random.seed(0)\nprint(np.random.random(2))\n# Reset the global random state to the same state.\nnp.random.seed(0)\nprint(np.random.random(2))", "[0.5488135 0.71518937]\n[0.5488135 0.71518937]\n" ] ], [ [ "### Numpy Array Operations 1\n\nThere are a large number of operations you can run on any numpy array. Here we showcase some common ones.", "_____no_output_____" ] ], [ [ "# Create one from hard-coded data:\nar = np.array([\n [0.0, 0.2],\n [0.9, 0.5],\n [0.3, 0.7],\n], dtype=np.float64) # float64 is the default.\n\nprint('The array:\\n', ar)\nprint()\n\nprint('data type', ar.dtype)\nprint('transpose\\n', ar.T)\nprint('shape', ar.shape)\nprint('reshaping an array', ar.reshape((6)))\n\n", "The array:\n [[ 0. 0.2]\n [ 0.9 0.5]\n [ 0.3 0.7]]\n\ndata type float64\ntranspose\n [[ 0. 0.9 0.3]\n [ 0.2 0.5 0.7]]\nshape (3, 2)\nreshaping an array [ 0. 0.2 0.9 0.5 0.3 0.7]\n" ] ], [ [ "Many numpy operations are available both as np module functions as well as array methods. For example, we can also reshape as", "_____no_output_____" ] ], [ [ "print('reshape v2', np.reshape(ar, (6, 1)))", "reshape v2 [[ 0. 
]\n [ 0.2]\n [ 0.9]\n [ 0.5]\n [ 0.3]\n [ 0.7]]\n" ] ], [ [ "### Numpy Indexing and selectors\n\nHere are some basic indexing examples from numpy.", "_____no_output_____" ] ], [ [ "ar", "_____no_output_____" ], [ "ar[0, 1] # row, column", "_____no_output_____" ], [ "ar[:, 1] # slices: select all elements across the first (0th) axis.", "_____no_output_____" ], [ "ar[1:2, 1] # slices with syntax from:to, selecting [from, to).", "_____no_output_____" ], [ "ar[1:, 1] # Omit `to` to go all the way to the end", "_____no_output_____" ], [ "ar[:2, 1] # Omit `from` to start from the beginning", "_____no_output_____" ], [ "ar[0:-1, 1] # Use negative indexing to count elements from the back.", "_____no_output_____" ] ], [ [ "We can also pass boolean arrays as indices. These will exactly define which elements to select.", "_____no_output_____" ] ], [ [ "ar[np.array([\n [True, False],\n [False, True],\n [True, False],\n])]", "_____no_output_____" ] ], [ [ "Boolean arrays can be created with logical operations, then used as selectors. Logical operators apply elementwise.", "_____no_output_____" ] ], [ [ "ar_2 = np.array([ # Nearly the same as ar\n [0.0, 0.1],\n [0.9, 0.5],\n [0.0, 0.7],\n])\n\n# Where ar_2 is smaller than ar, let ar_2 be -inf.\nar_2[ar_2 < ar] = -np.inf\nar_2", "_____no_output_____" ] ], [ [ "### Numpy Operations 2", "_____no_output_____" ] ], [ [ "print('array:\\n', ar)\nprint()\n\nprint('sum across axis 0 (rows):', ar.sum(axis=0))\nprint('mean', ar.mean())\nprint('min', ar.min())\nprint('row-wise min', ar.min(axis=1))\n", "array:\n [[ 0. 0.2]\n [ 0.9 0.5]\n [ 0.3 0.7]]\n\nsum across axis 0 (rows): [ 1.2 1.4]\nmean 0.433333333333\nmin 0.0\nrow-wise min [ 0. 0.5 0.3]\n" ] ], [ [ "We can also take element-wise minimums between two arrays.\n\nWe may want to do this when \"clipping\" values in a matrix, that is, setting any values larger than, say, 0.6, to 0.6. We would do this in numpy with..\n\n### Broadcasting (and selectors)", "_____no_output_____" ] ], [ [ "np.minimum(ar, 0.6)", "_____no_output_____" ] ], [ [ "Numpy automatically turns the scalar 0.6 into an array the same size as `ar` in order to take element-wise minimum.\n\n", "_____no_output_____" ], [ "Broadcasting can save us a lot of typing, but in complicated cases it may require a good understanding of the exact rules followed.\n\nSome references:\n\n* [Numpy page that explains broadcasting](https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html)\n* [Similar content with some visualizations](http://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc)\n\nHere we follow with a selection of other useful broadcasting examples.\n", "_____no_output_____" ] ], [ [ "# Centering our array.\nprint('centered array:\\n', ar - np.mean(ar)) ", "centered array:\n [[-0.43333333 -0.23333333]\n [ 0.46666667 0.06666667]\n [-0.13333333 0.26666667]]\n" ] ], [ [ "Note that `np.mean()` was a scalar, but it is automatically subtracted from every element.\n", "_____no_output_____" ], [ "We can write the minimum function ourselves, as well.", "_____no_output_____" ] ], [ [ "clipped_ar = ar.copy() # So that ar is not modified.\nclipped_ar[clipped_ar > 0.6] = 0.6\nclipped_ar", "_____no_output_____" ] ], [ [ "A few things happened here:\n\n1. 0.6 was broadcast in for the greater than (>) operation\n2. The greater than operation defined a selector, selecting a subset of the elements of the array\n3. 
0.6 was broadcast to the right number of elements for assignment.", "_____no_output_____" ], [ "Vectors may also be broadcast into matrices.", "_____no_output_____" ] ], [ [ "vec = np.array([1, 2])\nar + vec", "_____no_output_____" ] ], [ [ "Here the shapes of the involved arrays are:\n```\nar (2d array): 2 x 2\nvec (1d array): 2\nResult (2d array): 2 x 2\n```\n\nWhen either of the dimensions compared is one (even implicitly, like in the case of `vec`), the other is used. In other words, dimensions with size 1 are stretched or “copied” to match the other.\n\nHere, this meant that the `[1, 2]` row was repeated to match the number of rows in `ar`, then added together.\n", "_____no_output_____" ], [ "If there is a shape mismatch, you will be informed. To try, uncomment the line below and run it.", "_____no_output_____" ] ], [ [ "#ar + np.array([[1, 2, 3]])", "_____no_output_____" ] ], [ [ "#### Exercise\n\nBroadcast and add the vector `[10, 20, 30]` across the columns of `ar`. \n\nYou should get \n```\narray([[10. , 10.2],\n [20.9, 20.5],\n [30.3, 30.7]])\n ```\n", "_____no_output_____" ] ], [ [ "#@title Code\n\n# Recall that you can use vec.shape to verify that your array has the\n# shape you expect.\n\n### Your code here ###", "_____no_output_____" ], [ "#@title Solution\n\nvec = np.array([[10], [20], [30]])\nar + vec", "_____no_output_____" ] ], [ [ "### `np.newaxis`\n\nWe can use another numpy feature, `np.newaxis` to simply form the column vector that was required for the example above. It adds a singleton dimension to arrays at the desired location:", "_____no_output_____" ] ], [ [ "vec = np.array([1, 2])\nvec.shape", "_____no_output_____" ], [ "vec[np.newaxis, :].shape", "_____no_output_____" ], [ "vec[:, np.newaxis].shape", "_____no_output_____" ] ], [ [ "Now you know more than enough to generate some example data for our `NXOR` function.\n\n\n### Exercise: Generate Data for NXOR\n\nWrite a function `get_data(num_examples)` that returns two numpy arrays\n\n* `inputs` of shape `num_examples x 2` with points selected uniformly from the $[-1, 1]^2$ domain.\n* `labels` of shape `num_examples` with the associated output of `NXOR`.", "_____no_output_____" ] ], [ [ "#@title Code\n\ndef get_data(num_examples):\n # Replace with your code.\n return np.zeros((num_examples, 2)), np.zeros((num_examples))\n", "_____no_output_____" ], [ "#@title Solution\n\n# Solution 1.\ndef get_data(num_examples):\n inputs = 2*np.random.random((num_examples, 2)) - 1\n labels = np.prod(inputs, axis=1)\n labels[labels <= 0] = -1 \n labels[labels > 0] = 1 \n return inputs, labels\n\n# Solution 1.\n# def get_data(num_examples):\n# inputs = 2*np.random.random((num_examples, 2)) - 1\n# labels = np.sign(np.prod(inputs, axis=1))\n# labels[labels == 0] = -1 \n# return inputs, labels\n", "_____no_output_____" ], [ "get_data(4)", "_____no_output_____" ] ], [ [ "## That's all, folks!\n\nFor now.", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
e7ffcdfdf6df1888625eee70acc4be2a36a1f3e5
61,444
ipynb
Jupyter Notebook
microdrop/static/notebooks/dump feedback results to csv.ipynb
mkdryden/microdrop
674dbae9ec7bd7430dc6e48c4d91b25379133592
[ "BSD-3-Clause" ]
17
2018-03-19T18:57:16.000Z
2021-10-06T02:55:42.000Z
microdrop/static/notebooks/dump feedback results to csv.ipynb
mkdryden/microdrop
674dbae9ec7bd7430dc6e48c4d91b25379133592
[ "BSD-3-Clause" ]
243
2016-08-09T13:52:30.000Z
2017-11-24T05:13:54.000Z
microdrop/static/notebooks/dump feedback results to csv.ipynb
mkdryden/microdrop
674dbae9ec7bd7430dc6e48c4d91b25379133592
[ "BSD-3-Clause" ]
5
2019-01-09T19:47:11.000Z
2022-03-19T09:03:48.000Z
351.108571
35,284
0.907249
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
e7ffce0f38e1cad0f99a62c8fae4c97e40d6e80a
19,511
ipynb
Jupyter Notebook
BCNcode/0_vibratioon_signal/1250/BCN/1250-016-512-z.ipynb
Decaili98/BCN-code-2022
ab0ce085cb29fbf12b6d773861953cb2cef23e20
[ "MulanPSL-1.0" ]
null
null
null
BCNcode/0_vibratioon_signal/1250/BCN/1250-016-512-z.ipynb
Decaili98/BCN-code-2022
ab0ce085cb29fbf12b6d773861953cb2cef23e20
[ "MulanPSL-1.0" ]
null
null
null
BCNcode/0_vibratioon_signal/1250/BCN/1250-016-512-z.ipynb
Decaili98/BCN-code-2022
ab0ce085cb29fbf12b6d773861953cb2cef23e20
[ "MulanPSL-1.0" ]
null
null
null
39.818367
248
0.542207
[ [ [ "import tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nfrom keras import initializers\nimport keras.backend as K\nimport numpy as np\nimport pandas as pd\nfrom tensorflow.keras.layers import *\nfrom keras.regularizers import l2#正则化", "Using TensorFlow backend.\n" ], [ "# 12-0.2\n# 13-2.4\n# 18-12.14\nimport pandas as pd\nimport numpy as np\nnormal = np.loadtxt(r'E:\\水泵代码调试\\试验数据(包括压力脉动和振动)\\2013.9.12-未发生缠绕前\\2013-9.12振动\\2013-9-12振动-1250rmin-mat\\1250rnormalvibz.txt', delimiter=',')\nchanrao = np.loadtxt(r'E:\\水泵代码调试\\试验数据(包括压力脉动和振动)\\2013.9.17-发生缠绕后\\振动\\9-18上午振动1250rmin-mat\\1250r_chanraovibz.txt', delimiter=',')\nprint(normal.shape,chanrao.shape,\"***************************************************\")\ndata_normal=normal[8:10] #提取前两行\ndata_chanrao=chanrao[8:10] #提取前两行\nprint(data_normal.shape,data_chanrao.shape)\nprint(data_normal,\"\\r\\n\",data_chanrao,\"***************************************************\")\ndata_normal=data_normal.reshape(1,-1)\ndata_chanrao=data_chanrao.reshape(1,-1)\nprint(data_normal.shape,data_chanrao.shape)\nprint(data_normal,\"\\r\\n\",data_chanrao,\"***************************************************\")", "(22, 32768) (20, 32768) ***************************************************\n(2, 32768) (2, 32768)\n[[ 0.006563 0.69693 0.36774 ... 0.023889 -0.55568 -0.94358 ]\n [-0.13989 -1.5401 0.29726 ... 0.15154 -0.081996 -0.33388 ]] \r\n [[ 0.55769 -0.41621 -0.59624 ... 2.0395 1.6655 -0.086483]\n [ 0.90511 0.88114 0.20347 ... -2.3256 1.3639 -0.13755 ]] ***************************************************\n(1, 65536) (1, 65536)\n[[ 0.006563 0.69693 0.36774 ... 0.15154 -0.081996 -0.33388 ]] \r\n [[ 0.55769 -0.41621 -0.59624 ... -2.3256 1.3639 -0.13755]] ***************************************************\n" ], [ "#水泵的两种故障类型信号normal正常,chanrao故障\ndata_normal=data_normal.reshape(-1, 512)#(65536,1)-(128, 515)\ndata_chanrao=data_chanrao.reshape(-1,512)\nprint(data_normal.shape,data_chanrao.shape)\n", "(128, 512) (128, 512)\n" ], [ "import numpy as np\ndef yuchuli(data,label):#(4:1)(51:13)\n #打乱数据顺序\n np.random.shuffle(data)\n train = data[0:102,:]\n test = data[102:128,:]\n label_train = np.array([label for i in range(0,102)])\n label_test =np.array([label for i in range(0,26)])\n return train,test ,label_train ,label_test\ndef stackkk(a,b,c,d,e,f,g,h):\n aa = np.vstack((a, e))\n bb = np.vstack((b, f))\n cc = np.hstack((c, g))\n dd = np.hstack((d, h))\n return aa,bb,cc,dd\nx_tra0,x_tes0,y_tra0,y_tes0 = yuchuli(data_normal,0)\nx_tra1,x_tes1,y_tra1,y_tes1 = yuchuli(data_chanrao,1)\ntr1,te1,yr1,ye1=stackkk(x_tra0,x_tes0,y_tra0,y_tes0 ,x_tra1,x_tes1,y_tra1,y_tes1)\n\nx_train=tr1\nx_test=te1\ny_train = yr1\ny_test = ye1\n\n#打乱数据\nstate = np.random.get_state()\nnp.random.shuffle(x_train)\nnp.random.set_state(state)\nnp.random.shuffle(y_train)\n\nstate = np.random.get_state()\nnp.random.shuffle(x_test)\nnp.random.set_state(state)\nnp.random.shuffle(y_test)\n\n\n#对训练集和测试集标准化\ndef ZscoreNormalization(x):\n \"\"\"Z-score normaliaztion\"\"\"\n x = (x - np.mean(x)) / np.std(x)\n return x\nx_train=ZscoreNormalization(x_train)\nx_test=ZscoreNormalization(x_test)\n# print(x_test[0])\n\n\n#转化为一维序列\nx_train = x_train.reshape(-1,512,1)\nx_test = x_test.reshape(-1,512,1)\nprint(x_train.shape,x_test.shape)\n\ndef to_one_hot(labels,dimension=2):\n results = np.zeros((len(labels),dimension))\n for i,label in enumerate(labels):\n results[i,label] = 1\n return results\none_hot_train_labels = to_one_hot(y_train)\none_hot_test_labels = 
to_one_hot(y_test)\n", "(204, 512, 1) (52, 512, 1)\n" ], [ "#定义挤压函数\ndef squash(vectors, axis=-1):\n \"\"\"\n 对向量的非线性激活函数\n ## vectors: some vectors to be squashed, N-dim tensor\n ## axis: the axis to squash\n :return: a Tensor with same shape as input vectors\n \"\"\"\n s_squared_norm = K.sum(K.square(vectors), axis, keepdims=True)\n scale = s_squared_norm / (1 + s_squared_norm) / K.sqrt(s_squared_norm + K.epsilon())\n return scale * vectors\n\nclass Length(layers.Layer):\n \"\"\"\n 计算向量的长度。它用于计算与margin_loss中的y_true具有相同形状的张量\n Compute the length of vectors. This is used to compute a Tensor that has the same shape with y_true in margin_loss\n inputs: shape=[dim_1, ..., dim_{n-1}, dim_n]\n output: shape=[dim_1, ..., dim_{n-1}]\n \"\"\"\n def call(self, inputs, **kwargs):\n return K.sqrt(K.sum(K.square(inputs), -1))\n\n def compute_output_shape(self, input_shape):\n return input_shape[:-1]\n \n def get_config(self):\n config = super(Length, self).get_config()\n return config\n#定义预胶囊层\ndef PrimaryCap(inputs, dim_capsule, n_channels, kernel_size, strides, padding):\n \"\"\"\n 进行普通二维卷积 `n_channels` 次, 然后将所有的胶囊重叠起来\n :param inputs: 4D tensor, shape=[None, width, height, channels]\n :param dim_capsule: the dim of the output vector of capsule\n :param n_channels: the number of types of capsules\n :return: output tensor, shape=[None, num_capsule, dim_capsule]\n \"\"\"\n output = layers.Conv2D(filters=dim_capsule*n_channels, kernel_size=kernel_size, strides=strides,\n padding=padding,name='primarycap_conv2d')(inputs)\n outputs = layers.Reshape(target_shape=[-1, dim_capsule], name='primarycap_reshape')(output)\n return layers.Lambda(squash, name='primarycap_squash')(outputs)\n\nclass DenseCapsule(layers.Layer):\n \"\"\"\n 胶囊层. 输入输出都为向量. \n ## num_capsule: 本层包含的胶囊数量\n ## dim_capsule: 输出的每一个胶囊向量的维度\n ## routings: routing 算法的迭代次数\n \"\"\"\n def __init__(self, num_capsule, dim_capsule, routings=3, kernel_initializer='glorot_uniform',**kwargs):\n super(DenseCapsule, self).__init__(**kwargs)\n self.num_capsule = num_capsule\n self.dim_capsule = dim_capsule\n self.routings = routings\n self.kernel_initializer = kernel_initializer\n\n def build(self, input_shape):\n assert len(input_shape) >= 3, '输入的 Tensor 的形状[None, input_num_capsule, input_dim_capsule]'#(None,1152,8)\n self.input_num_capsule = input_shape[1]\n self.input_dim_capsule = input_shape[2]\n\n #转换矩阵\n self.W = self.add_weight(shape=[self.num_capsule, self.input_num_capsule,\n self.dim_capsule, self.input_dim_capsule],\n initializer=self.kernel_initializer,name='W')\n self.built = True\n\n def call(self, inputs, training=None):\n # inputs.shape=[None, input_num_capsuie, input_dim_capsule]\n # inputs_expand.shape=[None, 1, input_num_capsule, input_dim_capsule]\n inputs_expand = K.expand_dims(inputs, 1)\n # 运算优化:将inputs_expand重复num_capsule 次,用于快速和W相乘\n # inputs_tiled.shape=[None, num_capsule, input_num_capsule, input_dim_capsule]\n inputs_tiled = K.tile(inputs_expand, [1, self.num_capsule, 1, 1])\n\n # 将inputs_tiled的batch中的每一条数据,计算inputs+W\n # x.shape = [num_capsule, input_num_capsule, input_dim_capsule]\n # W.shape = [num_capsule, input_num_capsule, dim_capsule, input_dim_capsule]\n # 将x和W的前两个维度看作'batch'维度,向量和矩阵相乘:\n # [input_dim_capsule] x [dim_capsule, input_dim_capsule]^T -> [dim_capsule].\n # inputs_hat.shape = [None, num_capsule, input_num_capsule, dim_capsutel\n inputs_hat = K.map_fn(lambda x: K.batch_dot(x, self.W, [2, 3]),elems=inputs_tiled)\n\n # Begin: Routing算法\n # 将系数b初始化为0.\n # b.shape = [None, self.num_capsule, self, 
input_num_capsule].\n b = tf.zeros(shape=[K.shape(inputs_hat)[0], self.num_capsule, self.input_num_capsule])\n \n assert self.routings > 0, 'The routings should be > 0.'\n for i in range(self.routings):\n # c.shape=[None, num_capsule, input_num_capsule]\n C = tf.nn.softmax(b ,axis=1)\n # c.shape = [None, num_capsule, input_num_capsule]\n # inputs_hat.shape = [None, num_capsule, input_num_capsule, dim_capsule]\n # 将c与inputs_hat的前两个维度看作'batch'维度,向量和矩阵相乘:\n # [input_num_capsule] x [input_num_capsule, dim_capsule] -> [dim_capsule],\n # outputs.shape= [None, num_capsule, dim_capsule]\n outputs = squash(K. batch_dot(C, inputs_hat, [2, 2])) # [None, 10, 16]\n \n if i < self.routings - 1:\n # outputs.shape = [None, num_capsule, dim_capsule]\n # inputs_hat.shape = [None, num_capsule, input_num_capsule, dim_capsule]\n # 将outputs和inρuts_hat的前两个维度看作‘batch’ 维度,向量和矩阵相乘:\n # [dim_capsule] x [imput_num_capsule, dim_capsule]^T -> [input_num_capsule]\n # b.shape = [batch_size. num_capsule, input_nom_capsule]\n# b += K.batch_dot(outputs, inputs_hat, [2, 3]) to this b += tf.matmul(self.W, x)\n b += K.batch_dot(outputs, inputs_hat, [2, 3])\n\n # End: Routing 算法\n return outputs\n\n def compute_output_shape(self, input_shape):\n return tuple([None, self.num_capsule, self.dim_capsule])\n\n def get_config(self):\n config = {\n 'num_capsule': self.num_capsule,\n 'dim_capsule': self.dim_capsule,\n 'routings': self.routings\n }\n base_config = super(DenseCapsule, self).get_config()\n return dict(list(base_config.items()) + list(config.items()))", "_____no_output_____" ], [ "from tensorflow import keras\nfrom keras.regularizers import l2#正则化\nx = layers.Input(shape=[512,1, 1])\n#普通卷积层\nconv1 = layers.Conv2D(filters=16, kernel_size=(2, 1),activation='relu',padding='valid',name='conv1')(x)\n#池化层\nPOOL1 = MaxPooling2D((2,1))(conv1)\n#普通卷积层\nconv2 = layers.Conv2D(filters=32, kernel_size=(2, 1),activation='relu',padding='valid',name='conv2')(POOL1)\n#池化层\n# POOL2 = MaxPooling2D((2,1))(conv2)\n#Dropout层\nDropout=layers.Dropout(0.1)(conv2)\n\n# Layer 3: 使用“squash”激活的Conv2D层, 然后重塑 [None, num_capsule, dim_vector]\nprimarycaps = PrimaryCap(Dropout, dim_capsule=8, n_channels=12, kernel_size=(4, 1), strides=2, padding='valid')\n# Layer 4: 数字胶囊层,动态路由算法在这里工作。\ndigitcaps = DenseCapsule(num_capsule=2, dim_capsule=16, routings=3, name='digit_caps')(primarycaps)\n# Layer 5:这是一个辅助层,用它的长度代替每个胶囊。只是为了符合标签的形状。\nout_caps = Length(name='out_caps')(digitcaps)\n\nmodel = keras.Model(x, out_caps) \nmodel.summary() ", "WARNING:tensorflow:From E:\\anaconda0\\envs\\tf2.4\\lib\\site-packages\\tensorflow\\python\\util\\deprecation.py:605: calling map_fn_v2 (from tensorflow.python.ops.map_fn) with dtype is deprecated and will be removed in a future version.\nInstructions for updating:\nUse fn_output_signature instead\nModel: \"model\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) [(None, 512, 1, 1)] 0 \n_________________________________________________________________\nconv1 (Conv2D) (None, 511, 1, 16) 48 \n_________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 255, 1, 16) 0 \n_________________________________________________________________\nconv2 (Conv2D) (None, 254, 1, 32) 1056 \n_________________________________________________________________\ndropout (Dropout) (None, 254, 1, 32) 0 
\n_________________________________________________________________\nprimarycap_conv2d (Conv2D) (None, 126, 1, 96) 12384 \n_________________________________________________________________\nprimarycap_reshape (Reshape) (None, 1512, 8) 0 \n_________________________________________________________________\nprimarycap_squash (Lambda) (None, 1512, 8) 0 \n_________________________________________________________________\ndigit_caps (DenseCapsule) (None, 2, 16) 387072 \n_________________________________________________________________\nout_caps (Length) (None, 2) 0 \n=================================================================\nTotal params: 400,560\nTrainable params: 400,560\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "\n#定义优化\nmodel.compile(loss='categorical_crossentropy',\n optimizer='adam',metrics=['accuracy']) ", "_____no_output_____" ], [ "import time\ntime_begin = time.time()\nhistory = model.fit(x_train,one_hot_train_labels,\n validation_split=0.1,\n epochs=50,batch_size=10,\n shuffle=True)\ntime_end = time.time()\ntime = time_end - time_begin\nprint('time:', time)", "Epoch 1/50\n19/19 [==============================] - 6s 158ms/step - loss: 0.6234 - accuracy: 0.5298 - val_loss: 0.5731 - val_accuracy: 0.4286\nEpoch 2/50\n19/19 [==============================] - 1s 45ms/step - loss: 0.4899 - accuracy: 0.5696 - val_loss: 0.5103 - val_accuracy: 0.4286\nEpoch 3/50\n19/19 [==============================] - 1s 45ms/step - loss: 0.4458 - accuracy: 0.5031 - val_loss: 0.4448 - val_accuracy: 0.4286\nEpoch 4/50\n19/19 [==============================] - 1s 45ms/step - loss: 0.3927 - accuracy: 0.5040 - val_loss: 0.4289 - val_accuracy: 0.4286\nEpoch 5/50\n 1/19 [>.............................] - ETA: 0s - loss: 0.4429 - accuracy: 0.4000" ], [ "import time\ntime_begin = time.time()\nscore = model.evaluate(x_test,one_hot_test_labels, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])\n \ntime_end = time.time()\ntime = time_end - time_begin\nprint('time:', time)", "_____no_output_____" ], [ "#绘制acc-loss曲线\nimport matplotlib.pyplot as plt\n\nplt.plot(history.history['loss'],color='r')\nplt.plot(history.history['val_loss'],color='g')\nplt.plot(history.history['accuracy'],color='b')\nplt.plot(history.history['val_accuracy'],color='k')\nplt.title('model loss and acc')\nplt.ylabel('Accuracy')\nplt.xlabel('epoch')\nplt.legend(['train_loss', 'test_loss','train_acc', 'test_acc'], loc='center right')\n# plt.legend(['train_loss','train_acc'], loc='upper left')\n#plt.savefig('1.png')\nplt.show()", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\n\nplt.plot(history.history['loss'],color='r')\nplt.plot(history.history['accuracy'],color='b')\nplt.title('model loss and sccuracy ')\nplt.ylabel('loss/sccuracy')\nplt.xlabel('epoch')\nplt.legend(['train_loss', 'train_sccuracy'], loc='center right')\nplt.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7ffd47328c0b9032127a5863c30349c4a392c7c
523,106
ipynb
Jupyter Notebook
.ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb
Serenitea/CRISP_DM-StackOverflow-Survey
0ac3f6186d1b5fbb60fca398723057500e4014c1
[ "CNRI-Python" ]
null
null
null
.ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb
Serenitea/CRISP_DM-StackOverflow-Survey
0ac3f6186d1b5fbb60fca398723057500e4014c1
[ "CNRI-Python" ]
null
null
null
.ipynb_checkpoints/SO_survey CRISP-DM-10-checkpoint.ipynb
Serenitea/CRISP_DM-StackOverflow-Survey
0ac3f6186d1b5fbb60fca398723057500e4014c1
[ "CNRI-Python" ]
null
null
null
40.22036
479
0.329232
[ [ [ "<center><h1>Stack Overflow Developer Surveys, 2015-2019", "_____no_output_____" ] ], [ [ "# global printing options\npd.options.display.max_columns = 100\npd.options.display.max_rows = 30", "_____no_output_____" ] ], [ [ "Questions explored:\n1. Which are the current most commonly used programming languages?\n2. How has the prevalance of different programming languages changed throughout the past **five????** years?\n3. Which programming languages are the currently the most popular for specific types of developers?\n\nPoss questions:\n\n- mode of education + diff lang/frameworks/plats?\n- years of experience + diff lang/frameworks/plats?", "_____no_output_____" ], [ "Challenges:\n\nAs is often the case with the practicalities of real-life data, the Stack Overflow developer survey varies each year, presenting unique challenges to making cross-year comparisons. \n\n1. The same languages are classified differently from year-to-year. For instance, HTML and CSS are combined under one category in the 2019 survey, categorized separately in the 2018 survey, and nonexistent in 2017 and prior surveys.\n2. The question in 2017 covers \"technologies that you work with\", including languages, databases, platforms, and frameworks. The 2018 and 2019 surveys thankfully separated these different variables, but that still means more cleaning for the 2017 dataset!\n3. The addition of an \"Others\" category in 2019 that replaces the most obscure entries from earlier years. For consistency across years, I opted to combine the obscure languages from before 2019 into a single category \"Other(s)\". \n\n\nProblem variables:\n\n- HTML/CSS for 2019, 2018 has HTML and CSS separately.\n- Bash/Shell/PowerShell for 2019, 2018 has Bash/Shell\n- 2019 has an \"Other\" category", "_____no_output_____" ], [ "End goal - create a line graph of prevalence of languages across different years\n2015\n- [x] clean names of 2015 data\n- [ ] merge all visual basic under 'Visual Basic / VBA'\n- [ ] all years have \"Other(s)\" as a category\n- [ ] delete HTML/CSS from 2018+19\n- [ ] delete non-language categories from 2017 and prior\n\n\n- [ ] uniform Shell/Powershell category\n- [ ] chart with languages and years\n\n", "_____no_output_____" ], [ "\n2019: LanguageWorkedWith\n\n2018: LanguageWorkedWith\n\n2017: HaveWorkedLanguage\n\n2016: tech_do", "_____no_output_____" ], [ "---\n# <center>Loading data and functions\n---", "_____no_output_____" ] ], [ [ "# import libraries here; add more as necessary\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport timeit\n\n%matplotlib inline", "_____no_output_____" ], [ "df2019 = pd.read_csv('./2019survey_results_public.csv', header = 0, skipinitialspace= True, low_memory=False)\ndf2018 = pd.read_csv('./2018survey_results_public.csv', header = 0, skipinitialspace= True, low_memory=False)\ndf2017 = pd.read_csv('./2017survey_results_public.csv', header = 0, skipinitialspace= True, low_memory=False)\ndf2016 = pd.read_csv('./2016survey_results.csv', header = 0, skipinitialspace= True, low_memory=False)\ndf2015 = pd.read_csv('./2015survey_results.csv', header = 1, skipinitialspace= True, low_memory=False)", "_____no_output_____" ], [ "display(df2019.head(), df2018.head(), df2017.head(), df2016.head(), df2015.head())", "_____no_output_____" ], [ "uniformize_dict = {'VB.NET':'Visual Basic / VBA',\n 'VBA':'Visual Basic / VBA',\n 'Visual Basic':'Visual Basic / VBA',\n 'Visual Basic 6':'Visual Basic / VBA'\n}", "_____no_output_____" ], [ "#lists 
retrieved from previous notebooks\n\nnot_lang_list = ['Android',\n 'AngularJS',\n 'Arduino',\n 'Arduino / Raspberry Pi',\n 'Cassandra',\n 'Cloud',\n 'Cloud (AWS, GAE, Azure, etc.)',\n 'Cordova',\n 'Hadoop',\n 'LAMP',\n 'MongoDB',\n 'Node.js',\n 'ReactJS',\n 'Redis',\n 'SQL Server',\n 'Salesforce',\n 'SharePoint',\n 'Spark',\n 'Windows Phone',\n 'Wordpress',\n'WordPress', 'Write-In',\n 'iOS']\n\nlang_comm_list = ['Bash/Shell/PowerShell', 'C',\n 'C#',\n 'C++',\n 'Clojure',\n 'F#',\n 'Go','HTML/CSS',\n 'Java',\n 'JavaScript',\n 'Objective-C',\n 'PHP',\n 'Python',\n 'R',\n 'Ruby',\n 'Rust',\n 'SQL',\n 'Scala',\n 'Swift',\n 'Visual Basic / VBA']\n\nmixed_all_list = ['Android',\n 'AngularJS',\n 'Arduino',\n 'Arduino / Raspberry Pi',\n 'Assembly',\n 'Bash/Shell',\n 'Bash/Shell/PowerShell',\n 'C',\n 'C#',\n 'C++',\n 'C++11',\n 'CSS',\n 'Cassandra',\n 'Clojure',\n 'Cloud',\n 'Cloud (AWS, GAE, Azure, etc.)',\n 'Cobol',\n 'CoffeeScript',\n 'Common Lisp',\n 'Cordova',\n 'Dart',\n 'Delphi/Object Pascal',\n 'Elixir',\n 'Erlang',\n 'F#',\n 'Go',\n 'Groovy',\n 'HTML',\n 'HTML/CSS',\n 'Hack',\n 'Hadoop',\n 'Haskell',\n 'Java',\n 'JavaScript',\n 'Julia',\n 'Kotlin',\n 'LAMP',\n 'Lua',\n 'Matlab',\n 'MongoDB',\n 'Node.js',\n 'Objective-C',\n 'Ocaml',\n 'Other(s):',\n 'PHP',\n 'Perl',\n 'Python',\n 'R',\n 'ReactJS',\n 'Redis',\n 'Ruby',\n 'Rust',\n 'SQL',\n 'SQL Server',\n 'Salesforce',\n 'Scala',\n 'SharePoint',\n 'Sharepoint',\n 'Smalltalk',\n 'Spark',\n 'Swift',\n 'TypeScript',\n 'VB.NET',\n 'VBA',\n 'Visual Basic',\n 'Visual Basic 6',\n 'WebAssembly',\n 'Windows Phone',\n 'WordPress',\n 'Wordpress',\n 'Write-In',\n 'iOS']\n\nlang_uncomm_list = ['Assembly',\n 'Cobol',\n 'CoffeeScript',\n 'Common Lisp',\n 'Dart',\n 'Delphi/Object Pascal',\n 'Elixir',\n 'Erlang',\n 'Groovy',\n 'Hack',\n 'Haskell',\n 'Julia',\n 'Kotlin',\n 'Lua',\n 'Matlab',\n 'Ocaml',\n 'Perl',\n 'Sharepoint',\n 'Smalltalk',\n 'TypeScript',\n 'WebAssembly',]\n\nlang_all_list = ['Assembly',\n 'Bash/Shell',\n 'Bash/Shell/PowerShell',\n 'C',\n 'C#',\n 'C++',\n 'C++11',\n 'CSS',\n 'Clojure',\n 'Cobol',\n 'CoffeeScript',\n 'Common Lisp',\n 'Dart',\n 'Delphi/Object Pascal',\n 'Elixir',\n 'Erlang',\n 'F#',\n 'Go',\n 'Groovy',\n 'HTML',\n 'HTML/CSS',\n 'Hack',\n 'Haskell',\n 'Java',\n 'JavaScript',\n 'Julia',\n 'Kotlin',\n 'Lua',\n 'Matlab',\n 'Objective-C',\n 'Ocaml',\n 'Other(s):',\n 'PHP',\n 'Perl',\n 'Python',\n 'R',\n 'Ruby',\n 'Rust',\n 'SQL',\n 'Scala',\n 'Sharepoint',\n 'Smalltalk',\n 'Swift',\n 'TypeScript',\n 'VB.NET',\n 'VBA',\n 'Visual Basic',\n 'Visual Basic 6',\n 'WebAssembly']", "_____no_output_____" ], [ "def clean_df_cat(df, to_remove):\n '''\n Removes columns that match any of the values in the to_remove list\n\n '''\n for item in to_remove:\n for col in df.columns:\n if item.casefold() == col.casefold():\n df = df.drop(col, axis = 1)\n return df", "_____no_output_____" ], [ "'''\n#for 2015 data - multiple columns for one data category\n#converts a dataframe into a 1-d series to a 2 column df,\n#returned df has Columns and column counts\n#series sorted alphabetically\n'''\ndef make_counts2015(ini_df, series_name, index_name):\n series = ini_df.count()\n series = series.rename_axis(index_name)\n series = series.rename(series_name).sort_index()\n df = series.reset_index()\n# df.sort_values(by=[sort_by], inplace = True)\n return series, df", "_____no_output_____" ], [ "def make_counts(ini_df, series_name, index_name):\n series = ini_df.sum()\n series = series.rename_axis(index_name)\n series = series.rename(series_name).sort_index()\n 
df = series.reset_index()\n# df.sort_values(by=[sort_by], inplace = True)\n return series, df", "_____no_output_____" ], [ "# sorts a series and converts it to df\n'''\n2016?\nseries - name of the series to be modified\nseries_name - the desired name for the values (e.g.counts)\nindex_name - the desired name for the index (e.g.Languages)\nresetindex_Y_N - are we resetting the index?\n'''\ndef ser_to_df(series, series_name, index_name, resetindex_Y_N):\n series = series.rename_axis(index_name)\n sorted_series = series.rename(series_name).sort_index()\n if resetindex_Y_N == 'Y':\n df = sorted_series.reset_index()\n else:\n df = pd.DataFrame(sorted_series)\n return sorted_series, df", "_____no_output_____" ], [ "def eval_complex_col(df, col):\n '''\n IN:\n df[col] - every str consists of one or more values (e.g. 'a, b, d')\n \n OUT:\n col_vals - All unique elements found in the column, listed alphabetically\n \n '''\n col_num = df[df[col].isnull() == 0].shape[0]\n col_df = df[col].value_counts().reset_index()\n col_df.rename(columns={'index': col, col:'count'}, inplace = True)\n col_series = pd.Series(col_df[col].unique()).dropna()\n clean_list = col_series.str.split(pat = ';').tolist()\n \n\n flat_list = []\n for sublist in clean_list:\n for item in sublist:\n flat_list.append(item)\n clean_series = pd.DataFrame(flat_list)\n clean_series[0] = clean_series[0].str.strip()\n\n col_vals = clean_series[0].unique()\n col_vals = pd.Series(sorted(col_vals))\n cat_count = clean_series[0].value_counts()\n \n \n# print('Unique Categories: ', col_vals)\n return cat_count, col_vals\n", "_____no_output_____" ], [ "'''\nfor years 2016-2019.\nprocesses a a specified column from the raw imported dataframe\n'''\ndef process_col(df, col):\n s = df[col]\n s = s.dropna()\n s_len = s.shape[0]\n cat_count, col_vals = eval_complex_col(df, col)\n s_split = s.str.split(pat = '; ')\n return s,s_len, s_split, cat_count, col_vals", "_____no_output_____" ], [ "'''\n2017 to 2019?\nconverts a series of lists into a df with each list as a row\nalso returns a transposed version.\n'''\n\ndef s_of_lists_to_df(s):\n df = pd.DataFrame(item for item in s)\n df_transposed = df.transpose()\n return df, df_transposed", "_____no_output_____" ], [ "\ndef make_df_bool(df, df_transposed, vals_list):\n '''\n creates a df of bool values based on whether each survey response has the value in vals_list.\n df: dataframe of survey responses, \n vals_list: list of values for conditions of the new columns, with 1 col per val\n '''\n for item in vals_list:\n df[item] = df_transposed.iloc[:,:].isin([item]).sum()\n df_bool = df.loc[:,vals_list]\n return df_bool", "_____no_output_____" ], [ "'''\nCondensed function processing from initial imported df to boolean df\nUsed for 2017-2019\n'''\ndef process_data(df, col):\n df_droppedna, df_droppedna_len, df_droppedna_split, df_count, df_vals = process_col(df, col)\n df2, df2_transposed = s_of_lists_to_df(df_droppedna_split)\n df_bool = make_df_bool(df2, df2_transposed, df_vals)\n \n return df_bool, df_vals, df_droppedna_len\n\n", "_____no_output_____" ], [ "#must edit the df to match the lang_comm_list first!\n\ndef find_other_lang(df):\n other_lang = set(df.columns).difference(set(lang_comm_list))\n other_lang_list = list(other_lang)\n return other_lang_list\n", "_____no_output_____" ] ], [ [ "---\n# <center># 2015 Dataset\n---", "_____no_output_____" ] ], [ [ "#slicing the desired columns about Current Lang & Tech from the rest of the 2015 df\n#modify new df column names to match list\ndf2015_mix = 
df2015.loc[:,'Current Lang & Tech: Android':'Current Lang & Tech: Write-In']\ndf2015_mix.columns = df2015_mix.columns.str.replace('Current Lang & Tech: ', '')\n#df2015_mix.columns = df2015_mix.columns.str.casefold()", "_____no_output_____" ], [ "display(df2015_mix.info(),\n df2015_mix.head())", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 26086 entries, 0 to 26085\nData columns (total 43 columns):\nAndroid 4110 non-null object\nArduino 1626 non-null object\nAngularJS 2913 non-null object\nC 3612 non-null object\nC++ 4529 non-null object\nC++11 1851 non-null object\nC# 6949 non-null object\nCassandra 202 non-null object\nCoffeeScript 783 non-null object\nCordova 628 non-null object\nClojure 176 non-null object\nCloud 1410 non-null object\nDart 109 non-null object\nF# 174 non-null object\nGo 462 non-null object\nHadoop 342 non-null object\nHaskell 357 non-null object\niOS 1956 non-null object\nJava 8219 non-null object\nJavaScript 11962 non-null object\nLAMP 1926 non-null object\nMatlab 860 non-null object\nMongoDB 1745 non-null object\nNode.js 2919 non-null object\nObjective-C 1719 non-null object\nPerl 738 non-null object\nPHP 6529 non-null object\nPython 5238 non-null object\nR 755 non-null object\nRedis 873 non-null object\nRuby 1765 non-null object\nRust 103 non-null object\nSalesforce 153 non-null object\nScala 538 non-null object\nSharepoint 349 non-null object\nSpark 104 non-null object\nSQL 9439 non-null object\nSQL Server 4129 non-null object\nSwift 759 non-null object\nVisual Basic 1701 non-null object\nWindows Phone 570 non-null object\nWordpress 2007 non-null object\nWrite-In 2148 non-null object\ndtypes: object(43)\nmemory usage: 8.6+ MB\n" ], [ "dflang2015 = clean_df_cat(df2015_mix, not_lang_list)\nprint(df2015_mix.shape, dflang2015.shape)", "(26086, 43) (26086, 24)\n" ], [ "dflang2015 = dflang2015.rename(columns = {\"C++\": \"C++_ini\", \n \"Visual Basic\": \"Visual Basic / VBA\"})\ndflang2015.head()", "_____no_output_____" ], [ "#make the new column 'C++' with booleans\ndflang2015['C++'] = ((dflang2015['C++_ini'].isnull() == 0) | \n (dflang2015['C++11'].isnull() == 0)).astype(dtype = 'int')\ndflang2015['C++'] = dflang2015['C++'].replace(0, np.nan)", "_____no_output_____" ], [ "dflang2015.head(n = 10)", "_____no_output_____" ], [ "#double-checking that the new boolean column is correct\nprint(dflang2015['C++_ini'].count(), \ndflang2015['C++11'].count(),\ndflang2015['C++'].count())\ndflang2015.loc[:,('C++_ini', 'C++11', 'C++')].reindex().head(n=50)", "4529 1851 4840\n" ], [ "#take out the now defunct initial C++ column and C++11 column, \n#since they will affect the next fxn\ndflang2015 = dflang2015.drop(['C++_ini', 'C++11'], axis = 1)\ndflang2015.head()", "_____no_output_____" ], [ "#obtain list of columns that need to be aggregated into an Others column\nother_lang_2015 = list(set(dflang2015.columns).difference(set(lang_comm_list)))\nother_lang_2015", "_____no_output_____" ], [ "#Combine the lowest popularity languages into column \"Other(s)\"\ndflang2015['Other(s)'] = ((dflang2015['Dart'].isnull() == 0) | \n (dflang2015['Haskell'].isnull() == 0) |\n (dflang2015['CoffeeScript'].isnull() == 0) |\n (dflang2015['Perl'].isnull() == 0) |\n (dflang2015['Matlab'].isnull() == 0))\n\ndflang2015['Other(s)'] = dflang2015['Other(s)'].replace(False, np.nan)\n\n", "_____no_output_____" ], [ "#drop the columns that were just used to create the 'Others' column\ndflang2015 = dflang2015.drop(other_lang_2015, axis = 1)", "_____no_output_____" ], [ 
"display(dflang2015['Other(s)'].sum(),\n dflang2015.head(),\n dflang2015.info())", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 26086 entries, 0 to 26085\nData columns (total 19 columns):\nC 3612 non-null object\nC# 6949 non-null object\nClojure 176 non-null object\nF# 174 non-null object\nGo 462 non-null object\nJava 8219 non-null object\nJavaScript 11962 non-null object\nObjective-C 1719 non-null object\nPHP 6529 non-null object\nPython 5238 non-null object\nR 755 non-null object\nRuby 1765 non-null object\nRust 103 non-null object\nScala 538 non-null object\nSQL 9439 non-null object\nSwift 759 non-null object\nVisual Basic / VBA 1701 non-null object\nC++ 4840 non-null float64\nOther(s) 2591 non-null float64\ndtypes: float64(2), object(17)\nmemory usage: 3.8+ MB\n" ], [ "#make a new df of counts from the boolean df\nslang2015_counts, dflang2015_counts = make_counts2015(dflang2015, 'Count 2015', 'Languages')\ndisplay(slang2015_counts, dflang2015_counts)", "_____no_output_____" ] ], [ [ "---\n# <center>2016 Dataset\n\n---", "_____no_output_____" ] ], [ [ "def process_2016_pt1(df, col):\n df_droppedna, df_droppedna_len, df_droppedna_split, df_count, df_vals = process_col(df, col)\n df_new, df_new_transposed = s_of_lists_to_df(df_droppedna_split)\n \n return df_new, df_new_transposed, df_vals, df_droppedna_len", "_____no_output_____" ], [ "dftech2016_new, dftech2016_new_tp, tech2016_vals, tech2016_len = process_2016_pt1(df2016, 'tech_do')", "_____no_output_____" ], [ "display(dftech2016_new, tech2016_vals)", "_____no_output_____" ], [ "lang2016_list = sorted(list(set(tech2016_vals).difference(set(not_lang_list))))\nlang2016_list", "_____no_output_____" ], [ "dflang2016_bool = make_df_bool(dftech2016_new, dftech2016_new_tp, lang2016_list)", "_____no_output_____" ], [ "display(dflang2016_bool)", "_____no_output_____" ], [ "dflang2016_bool = dflang2016_bool.rename(columns = {\"Visual Basic\": \"Visual Basic / VBA\"})\ndflang2016_bool.head()", "_____no_output_____" ], [ "other_lang2016_list = find_other_lang(dflang2016_bool)", "_____no_output_____" ], [ "other_lang2016_list", "_____no_output_____" ], [ "dflang2016_bool['Other(s)'] = (dflang2016_bool['Dart'] | \n dflang2016_bool['CoffeeScript'] | \n dflang2016_bool['Haskell'] | \n dflang2016_bool['Perl'] | \n dflang2016_bool['Matlab'])", "_____no_output_____" ], [ "dflang2016_bool = dflang2016_bool.drop(other_lang2016_list, axis = 1)", "_____no_output_____" ], [ "dflang2016_bool", "_____no_output_____" ], [ "slang2016_counts, dflang2016_counts = make_counts(dflang2016_bool,'Counts 2016', 'Languages')\ndflang2016_counts", "_____no_output_____" ] ], [ [ "---\n# <center>2017 Dataset\n---", "_____no_output_____" ] ], [ [ "df_droppedna, df_droppedna_split, df_count, df_vals = process_col(df, col)", "_____no_output_____" ], [ "def process_data_extended(df, col):\n '''\n for years 2017-2019.\n processes a specified column from the raw imported dataframe\n '''\n s = df[col]\n s = s.dropna()\n df_len = s.shape[0]\n df_count, df_vals = eval_complex_col(df, col)\n s_split = s.str.split(pat = '; ')\n\n df_new = pd.DataFrame(item for item in s_split)\n df_new_transposed = df_new.transpose()\n\n for item in df_vals:\n df_new[item] = df_new_transposed.iloc[:,:].isin([item]).sum()\n df_bool = df_new.loc[:,df_vals]\n\n return df_bool, df_vals, df_len", "_____no_output_____" ], [ "dflang2017_bool, lang2017_vals, lang2017_len = process_data_extended(df2017, 'HaveWorkedLanguage')", "_____no_output_____" ], [ "display(dflang2017_bool, lang2017_vals, 
lang2017_len)", "_____no_output_____" ], [ "dflang2017_bool['Visual Basic / VBA'] = (dflang2017_bool['VB.NET'] | \n dflang2017_bool['VBA'] | \n dflang2017_bool['Visual Basic 6'])", "_____no_output_____" ], [ "dflang2017_bool = dflang2017_bool.drop(['VB.NET', 'VBA', 'Visual Basic 6'], axis = 1)", "_____no_output_____" ], [ "display(dflang2017_bool)", "_____no_output_____" ], [ "other_lang2017_list = find_other_lang(dflang2017_bool)\nother_lang2017_list", "_____no_output_____" ], [ "for elem in other_lang2017_list:\n print(\"dflang2017_bool['\" + elem + \"'] |\")", "dflang2017_bool['Assembly'] |\ndflang2017_bool['Hack'] |\ndflang2017_bool['Erlang'] |\ndflang2017_bool['Smalltalk'] |\ndflang2017_bool['Perl'] |\ndflang2017_bool['Common Lisp'] |\ndflang2017_bool['Julia'] |\ndflang2017_bool['TypeScript'] |\ndflang2017_bool['Lua'] |\ndflang2017_bool['Dart'] |\ndflang2017_bool['Elixir'] |\ndflang2017_bool['CoffeeScript'] |\ndflang2017_bool['Matlab'] |\ndflang2017_bool['Groovy'] |\ndflang2017_bool['Haskell'] |\n" ], [ "dflang2017_bool['Other(s)'] = (dflang2017_bool['Assembly'] |\ndflang2017_bool['Hack'] |\ndflang2017_bool['Erlang'] |\ndflang2017_bool['Smalltalk'] |\ndflang2017_bool['Perl'] |\ndflang2017_bool['Common Lisp'] |\ndflang2017_bool['Julia'] |\ndflang2017_bool['TypeScript'] |\ndflang2017_bool['Lua'] |\ndflang2017_bool['Dart'] |\ndflang2017_bool['Elixir'] |\ndflang2017_bool['CoffeeScript'] |\ndflang2017_bool['Matlab'] |\ndflang2017_bool['Groovy'] |\ndflang2017_bool['Haskell'])", "_____no_output_____" ], [ "dflang2017_bool = clean_df_cat(dflang2017_bool, other_lang2017_list)", "_____no_output_____" ], [ "slang2017_counts, dflang2017_counts = make_counts(dflang2017_bool,'Counts 2017', 'Languages')\ndflang2017_counts", "_____no_output_____" ] ], [ [ "---\n# <center>2018 Dataset\n---", "_____no_output_____" ] ], [ [ "dflang2018_bool, lang2018_vals, lang2018_len = process_data(df2018, 'LanguageWorkedWith')", "_____no_output_____" ], [ "lang2018_vals", "_____no_output_____" ], [ "dflang2018_bool['Visual Basic / VBA'] = (dflang2018_bool['VB.NET'] | \n dflang2018_bool['VBA'] | \n dflang2018_bool['Visual Basic 6'])", "_____no_output_____" ], [ "dflang2018_bool = dflang2018_bool.drop(['VB.NET', 'VBA', 'Visual Basic 6'], axis = 1)", "_____no_output_____" ], [ "dflang2018_bool['HTML/CSS'] = (dflang2018_bool['HTML'] | \n dflang2018_bool['CSS'])", "_____no_output_____" ], [ "dflang2018_bool = dflang2018_bool.drop(['HTML', 'CSS'], axis = 1)", "_____no_output_____" ], [ "other_lang2018_list = find_other_lang(dflang2018_bool)\nother_lang2018_list", "_____no_output_____" ], [ "for elem in other_lang2018_list:\n print(\"dflang2018_bool['\" + elem + \"'] |\")", "dflang2018_bool['Assembly'] |\ndflang2018_bool['Delphi/Object Pascal'] |\ndflang2018_bool['Hack'] |\ndflang2018_bool['Erlang'] |\ndflang2018_bool['Cobol'] |\ndflang2018_bool['Perl'] |\ndflang2018_bool['TypeScript'] |\ndflang2018_bool['Julia'] |\ndflang2018_bool['Lua'] |\ndflang2018_bool['Ocaml'] |\ndflang2018_bool['CoffeeScript'] |\ndflang2018_bool['Matlab'] |\ndflang2018_bool['Kotlin'] |\ndflang2018_bool['Bash/Shell'] |\ndflang2018_bool['Groovy'] |\ndflang2018_bool['Haskell'] |\n" ], [ "other_lang_str = ''\nfor elem in other_lang18_list:\n other_lang_str = other_lang_str + \"dflang2018_bool['\" + elem +\"']\"\n if elem != 'TypeScript':\n other_lang_str = other_lang_str + \" | \"\n else:\n break", "_____no_output_____" ], [ "other_lang_str", "_____no_output_____" ], [ "dflang2018_bool['Other(s)'] = (dflang2018_bool['Assembly'] 
|\ndflang2018_bool['Delphi/Object Pascal'] |\ndflang2018_bool['Hack'] |\ndflang2018_bool['Erlang'] |\ndflang2018_bool['Cobol'] |\ndflang2018_bool['Perl'] |\ndflang2018_bool['TypeScript'] |\ndflang2018_bool['Julia'] |\ndflang2018_bool['Lua'] |\ndflang2018_bool['Ocaml'] |\ndflang2018_bool['CoffeeScript'] |\ndflang2018_bool['Matlab'] |\ndflang2018_bool['Kotlin'] |\ndflang2018_bool['Bash/Shell'] |\ndflang2018_bool['Groovy'] |\ndflang2018_bool['Haskell'])", "_____no_output_____" ], [ "dflang2018_bool = clean_df_cat(dflang2018_bool, other_lang2018_list)", "_____no_output_____" ], [ "slang2018_counts, dflang2018_counts = make_counts(dflang2018_bool,'Counts 2018', 'Languages')\ndflang2018_counts", "_____no_output_____" ] ], [ [ "---\n# <center>2019 Dataset\n---", "_____no_output_____" ] ], [ [ "dflang2019_bool, lang2019_vals, lang2019_len = process_data(df2019, 'LanguageWorkedWith')", "_____no_output_____" ], [ "lang2019_vals", "_____no_output_____" ], [ "dflang2019_bool = dflang2019_bool.rename(columns = {\"VBA\": \"Visual Basic / VBA\", \"Other(s):\": \"Other\"})", "_____no_output_____" ], [ "other_lang2019_list = find_other_lang(dflang2019_bool)\nother_lang2019_list = sorted(other_lang2019_list)\nother_lang2019_list", "_____no_output_____" ], [ "for elem in other_lang2019_list:\n print(\"dflang2019_bool['\" + elem + \"'] |\")", "dflang2019_bool['Assembly'] |\ndflang2019_bool['Dart'] |\ndflang2019_bool['Elixir'] |\ndflang2019_bool['Erlang'] |\ndflang2019_bool['Kotlin'] |\ndflang2019_bool['Other'] |\ndflang2019_bool['TypeScript'] |\ndflang2019_bool['WebAssembly'] |\n" ], [ "dflang2019_bool['Other(s)'] = (dflang2019_bool['Assembly'] |\ndflang2019_bool['Dart'] |\ndflang2019_bool['Elixir'] |\ndflang2019_bool['Erlang'] |\ndflang2019_bool['Kotlin'] |\ndflang2019_bool['Other'] |\ndflang2019_bool['TypeScript'] |\ndflang2019_bool['WebAssembly'])", "_____no_output_____" ], [ "dflang2019_bool = clean_df_cat(dflang2019_bool, other_lang2019_list)", "_____no_output_____" ], [ "slang2019_counts, dflang2019_counts = make_counts(dflang2019_bool,'Counts 2019', 'Languages')\ndflang2019_counts", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
e7ffd8f5306e72b714b742ae76237bddff3bc061
749,790
ipynb
Jupyter Notebook
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
9ff991a4db776259bc749a823ee6f0b0c0d38108
[ "Apache-2.0" ]
9
2020-10-14T16:58:32.000Z
2021-10-05T12:01:56.000Z
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
pvieito/Radar-STATS
9ff991a4db776259bc749a823ee6f0b0c0d38108
[ "Apache-2.0" ]
3
2020-10-08T04:48:35.000Z
2020-10-10T20:46:58.000Z
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-06-29.ipynb
Radar-STATS/Radar-STATS
61d8b3529f6bbf4576d799e340feec5b183338a3
[ "Apache-2.0" ]
3
2020-09-27T07:39:26.000Z
2020-10-02T07:48:56.000Z
84.359811
140,708
0.726485
[ [ [ "# RadarCOVID-Report", "_____no_output_____" ], [ "## Data Extraction", "_____no_output_____" ] ], [ [ "import datetime\nimport json\nimport logging\nimport os\nimport shutil\nimport tempfile\nimport textwrap\nimport uuid\n\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker\nimport numpy as np\nimport pandas as pd\nimport pycountry\nimport retry\nimport seaborn as sns\n\n%matplotlib inline", "_____no_output_____" ], [ "current_working_directory = os.environ.get(\"PWD\")\nif current_working_directory:\n os.chdir(current_working_directory)\n\nsns.set()\nmatplotlib.rcParams[\"figure.figsize\"] = (15, 6)\n\nextraction_datetime = datetime.datetime.utcnow()\nextraction_date = extraction_datetime.strftime(\"%Y-%m-%d\")\nextraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)\nextraction_previous_date = extraction_previous_datetime.strftime(\"%Y-%m-%d\")\nextraction_date_with_hour = datetime.datetime.utcnow().strftime(\"%Y-%m-%d@%H\")\ncurrent_hour = datetime.datetime.utcnow().hour\nare_today_results_partial = current_hour != 23", "_____no_output_____" ] ], [ [ "### Constants", "_____no_output_____" ] ], [ [ "from Modules.ExposureNotification import exposure_notification_io\n\nspain_region_country_code = \"ES\"\ngermany_region_country_code = \"DE\"\n\ndefault_backend_identifier = spain_region_country_code\n\nbackend_generation_days = 7 * 2\ndaily_summary_days = 7 * 4 * 3\ndaily_plot_days = 7 * 4\ntek_dumps_load_limit = daily_summary_days + 1", "_____no_output_____" ] ], [ [ "### Parameters", "_____no_output_____" ] ], [ [ "environment_backend_identifier = os.environ.get(\"RADARCOVID_REPORT__BACKEND_IDENTIFIER\")\nif environment_backend_identifier:\n report_backend_identifier = environment_backend_identifier\nelse:\n report_backend_identifier = default_backend_identifier\nreport_backend_identifier", "_____no_output_____" ], [ "environment_enable_multi_backend_download = \\\n os.environ.get(\"RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD\")\nif environment_enable_multi_backend_download:\n report_backend_identifiers = None\nelse:\n report_backend_identifiers = [report_backend_identifier]\n\nreport_backend_identifiers", "_____no_output_____" ], [ "environment_invalid_shared_diagnoses_dates = \\\n os.environ.get(\"RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES\")\nif environment_invalid_shared_diagnoses_dates:\n invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(\",\")\nelse:\n invalid_shared_diagnoses_dates = []\n\ninvalid_shared_diagnoses_dates", "_____no_output_____" ] ], [ [ "### COVID-19 Cases", "_____no_output_____" ] ], [ [ "report_backend_client = \\\n exposure_notification_io.get_backend_client_with_identifier(\n backend_identifier=report_backend_identifier)", "_____no_output_____" ], [ "@retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10))\ndef download_cases_dataframe():\n return pd.read_csv(\"https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv\")\n\nconfirmed_df_ = download_cases_dataframe()\nconfirmed_df_.iloc[0]", "_____no_output_____" ], [ "confirmed_df = confirmed_df_.copy()\nconfirmed_df = confirmed_df[[\"date\", \"new_cases\", \"iso_code\"]]\nconfirmed_df.rename(\n columns={\n \"date\": \"sample_date\",\n \"iso_code\": \"country_code\",\n },\n inplace=True)\n\ndef convert_iso_alpha_3_to_alpha_2(x):\n try:\n return pycountry.countries.get(alpha_3=x).alpha_2\n except Exception as e:\n logging.info(f\"Error converting country ISO Alpha 3 code '{x}': {repr(e)}\")\n 
return None\n\nconfirmed_df[\"country_code\"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2)\nconfirmed_df.dropna(inplace=True)\nconfirmed_df[\"sample_date\"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True)\nconfirmed_df[\"sample_date\"] = confirmed_df.sample_date.dt.strftime(\"%Y-%m-%d\")\nconfirmed_df.sort_values(\"sample_date\", inplace=True)\nconfirmed_df.tail()", "_____no_output_____" ], [ "confirmed_days = pd.date_range(\n start=confirmed_df.iloc[0].sample_date,\n end=extraction_datetime)\nconfirmed_days_df = pd.DataFrame(data=confirmed_days, columns=[\"sample_date\"])\nconfirmed_days_df[\"sample_date_string\"] = \\\n confirmed_days_df.sample_date.dt.strftime(\"%Y-%m-%d\")\nconfirmed_days_df.tail()", "_____no_output_____" ], [ "def sort_source_regions_for_display(source_regions: list) -> list:\n if report_backend_identifier in source_regions:\n source_regions = [report_backend_identifier] + \\\n list(sorted(set(source_regions).difference([report_backend_identifier])))\n else:\n source_regions = list(sorted(source_regions))\n return source_regions", "_____no_output_____" ], [ "report_source_regions = report_backend_client.source_regions_for_date(\n date=extraction_datetime.date())\nreport_source_regions = sort_source_regions_for_display(\n source_regions=report_source_regions)\nreport_source_regions", "_____no_output_____" ], [ "def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None):\n source_regions_at_date_df = confirmed_days_df.copy()\n source_regions_at_date_df[\"source_regions_at_date\"] = \\\n source_regions_at_date_df.sample_date.apply(\n lambda x: source_regions_for_date_function(date=x))\n source_regions_at_date_df.sort_values(\"sample_date\", inplace=True)\n source_regions_at_date_df[\"_source_regions_group\"] = source_regions_at_date_df. 
\\\n source_regions_at_date.apply(lambda x: \",\".join(sort_source_regions_for_display(x)))\n source_regions_at_date_df.tail()\n\n #%%\n\n source_regions_for_summary_df_ = \\\n source_regions_at_date_df[[\"sample_date\", \"_source_regions_group\"]].copy()\n source_regions_for_summary_df_.rename(columns={\"_source_regions_group\": \"source_regions\"}, inplace=True)\n source_regions_for_summary_df_.tail()\n\n #%%\n\n confirmed_output_columns = [\"sample_date\", \"new_cases\", \"covid_cases\"]\n confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns)\n\n for source_regions_group, source_regions_group_series in \\\n source_regions_at_date_df.groupby(\"_source_regions_group\"):\n source_regions_set = set(source_regions_group.split(\",\"))\n confirmed_source_regions_set_df = \\\n confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy()\n confirmed_source_regions_group_df = \\\n confirmed_source_regions_set_df.groupby(\"sample_date\").new_cases.sum() \\\n .reset_index().sort_values(\"sample_date\")\n confirmed_source_regions_group_df = \\\n confirmed_source_regions_group_df.merge(\n confirmed_days_df[[\"sample_date_string\"]].rename(\n columns={\"sample_date_string\": \"sample_date\"}),\n how=\"right\")\n confirmed_source_regions_group_df[\"new_cases\"] = \\\n confirmed_source_regions_group_df[\"new_cases\"].clip(lower=0)\n confirmed_source_regions_group_df[\"covid_cases\"] = \\\n confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round()\n confirmed_source_regions_group_df = \\\n confirmed_source_regions_group_df[confirmed_output_columns]\n confirmed_source_regions_group_df = confirmed_source_regions_group_df.replace(0, np.nan)\n confirmed_source_regions_group_df.fillna(method=\"ffill\", inplace=True)\n confirmed_source_regions_group_df = \\\n confirmed_source_regions_group_df[\n confirmed_source_regions_group_df.sample_date.isin(\n source_regions_group_series.sample_date_string)]\n confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df)\n\n result_df = confirmed_output_df.copy()\n result_df.tail()\n\n #%%\n\n result_df.rename(columns={\"sample_date\": \"sample_date_string\"}, inplace=True)\n result_df = confirmed_days_df[[\"sample_date_string\"]].merge(result_df, how=\"left\")\n result_df.sort_values(\"sample_date_string\", inplace=True)\n result_df.fillna(method=\"ffill\", inplace=True)\n result_df.tail()\n\n #%%\n\n result_df[[\"new_cases\", \"covid_cases\"]].plot()\n\n if columns_suffix:\n result_df.rename(\n columns={\n \"new_cases\": \"new_cases_\" + columns_suffix,\n \"covid_cases\": \"covid_cases_\" + columns_suffix},\n inplace=True)\n return result_df, source_regions_for_summary_df_", "_____no_output_____" ], [ "confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe(\n report_backend_client.source_regions_for_date)\nconfirmed_es_df, _ = get_cases_dataframe(\n lambda date: [spain_region_country_code],\n columns_suffix=spain_region_country_code.lower())", "_____no_output_____" ] ], [ [ "### Extract API TEKs", "_____no_output_____" ] ], [ [ "raw_zip_path_prefix = \"Data/TEKs/Raw/\"\nbase_backend_identifiers = [report_backend_identifier]\nmulti_backend_exposure_keys_df = \\\n exposure_notification_io.download_exposure_keys_from_backends(\n backend_identifiers=report_backend_identifiers,\n generation_days=backend_generation_days,\n fail_on_error_backend_identifiers=base_backend_identifiers,\n save_raw_zip_path_prefix=raw_zip_path_prefix)\nmulti_backend_exposure_keys_df[\"region\"] = 
multi_backend_exposure_keys_df[\"backend_identifier\"]\nmulti_backend_exposure_keys_df.rename(\n columns={\n \"generation_datetime\": \"sample_datetime\",\n \"generation_date_string\": \"sample_date_string\",\n },\n inplace=True)\nmulti_backend_exposure_keys_df.head()", "WARNING:root:NoKeysFoundException(\"No exposure keys found on endpoint 'https://radarcovid.covid19.gob.es/dp3t/v2/gaen/exposed/?originCountries=PT' (parameters: {'origin_country': 'PT', 'endpoint_identifier_components': ['PT'], 'backend_identifier': 'PT@ES', 'server_endpoint_url': 'https://radarcovid.covid19.gob.es/dp3t'}).\")\n" ], [ "early_teks_df = multi_backend_exposure_keys_df[\n multi_backend_exposure_keys_df.rolling_period < 144].copy()\nearly_teks_df[\"rolling_period_in_hours\"] = early_teks_df.rolling_period / 6\nearly_teks_df[early_teks_df.sample_date_string != extraction_date] \\\n .rolling_period_in_hours.hist(bins=list(range(24)))", "_____no_output_____" ], [ "early_teks_df[early_teks_df.sample_date_string == extraction_date] \\\n .rolling_period_in_hours.hist(bins=list(range(24)))", "_____no_output_____" ], [ "multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[\n \"sample_date_string\", \"region\", \"key_data\"]]\nmulti_backend_exposure_keys_df.head()", "_____no_output_____" ], [ "active_regions = \\\n multi_backend_exposure_keys_df.groupby(\"region\").key_data.nunique().sort_values().index.unique().tolist()\nactive_regions", "_____no_output_____" ], [ "multi_backend_summary_df = multi_backend_exposure_keys_df.groupby(\n [\"sample_date_string\", \"region\"]).key_data.nunique().reset_index() \\\n .pivot(index=\"sample_date_string\", columns=\"region\") \\\n .sort_index(ascending=False)\nmulti_backend_summary_df.rename(\n columns={\"key_data\": \"shared_teks_by_generation_date\"},\n inplace=True)\nmulti_backend_summary_df.rename_axis(\"sample_date\", inplace=True)\nmulti_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int)\nmulti_backend_summary_df = multi_backend_summary_df.head(backend_generation_days)\nmulti_backend_summary_df.head()", "_____no_output_____" ], [ "def compute_keys_cross_sharing(x):\n teks_x = x.key_data_x.item()\n common_teks = set(teks_x).intersection(x.key_data_y.item())\n common_teks_fraction = len(common_teks) / len(teks_x)\n return pd.Series(dict(\n common_teks=common_teks,\n common_teks_fraction=common_teks_fraction,\n ))\n\nmulti_backend_exposure_keys_by_region_df = \\\n multi_backend_exposure_keys_df.groupby(\"region\").key_data.unique().reset_index()\nmulti_backend_exposure_keys_by_region_df[\"_merge\"] = True\nmulti_backend_exposure_keys_by_region_combination_df = \\\n multi_backend_exposure_keys_by_region_df.merge(\n multi_backend_exposure_keys_by_region_df, on=\"_merge\")\nmulti_backend_exposure_keys_by_region_combination_df.drop(\n columns=[\"_merge\"], inplace=True)\nif multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1:\n multi_backend_exposure_keys_by_region_combination_df = \\\n multi_backend_exposure_keys_by_region_combination_df[\n multi_backend_exposure_keys_by_region_combination_df.region_x !=\n multi_backend_exposure_keys_by_region_combination_df.region_y]\nmulti_backend_exposure_keys_cross_sharing_df = \\\n multi_backend_exposure_keys_by_region_combination_df \\\n .groupby([\"region_x\", \"region_y\"]) \\\n .apply(compute_keys_cross_sharing) \\\n .reset_index()\nmulti_backend_cross_sharing_summary_df = \\\n multi_backend_exposure_keys_cross_sharing_df.pivot_table(\n values=[\"common_teks_fraction\"],\n 
columns=\"region_x\",\n index=\"region_y\",\n aggfunc=lambda x: x.item())\nmulti_backend_cross_sharing_summary_df", "<ipython-input-21-4e21708c19d8>:2: FutureWarning: `item` has been deprecated and will be removed in a future version\n teks_x = x.key_data_x.item()\n<ipython-input-21-4e21708c19d8>:3: FutureWarning: `item` has been deprecated and will be removed in a future version\n common_teks = set(teks_x).intersection(x.key_data_y.item())\n" ], [ "multi_backend_without_active_region_exposure_keys_df = \\\n multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier]\nmulti_backend_without_active_region = \\\n multi_backend_without_active_region_exposure_keys_df.groupby(\"region\").key_data.nunique().sort_values().index.unique().tolist()\nmulti_backend_without_active_region", "_____no_output_____" ], [ "exposure_keys_summary_df = multi_backend_exposure_keys_df[\n multi_backend_exposure_keys_df.region == report_backend_identifier]\nexposure_keys_summary_df.drop(columns=[\"region\"], inplace=True)\nexposure_keys_summary_df = \\\n exposure_keys_summary_df.groupby([\"sample_date_string\"]).key_data.nunique().to_frame()\nexposure_keys_summary_df = \\\n exposure_keys_summary_df.reset_index().set_index(\"sample_date_string\")\nexposure_keys_summary_df.sort_index(ascending=False, inplace=True)\nexposure_keys_summary_df.rename(columns={\"key_data\": \"shared_teks_by_generation_date\"}, inplace=True)\nexposure_keys_summary_df.head()", "/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/core/frame.py:4110: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n return super().drop(\n" ] ], [ [ "### Dump API TEKs", "_____no_output_____" ] ], [ [ "tek_list_df = multi_backend_exposure_keys_df[\n [\"sample_date_string\", \"region\", \"key_data\"]].copy()\ntek_list_df[\"key_data\"] = tek_list_df[\"key_data\"].apply(str)\ntek_list_df.rename(columns={\n \"sample_date_string\": \"sample_date\",\n \"key_data\": \"tek_list\"}, inplace=True)\ntek_list_df = tek_list_df.groupby(\n [\"sample_date\", \"region\"]).tek_list.unique().reset_index()\ntek_list_df[\"extraction_date\"] = extraction_date\ntek_list_df[\"extraction_date_with_hour\"] = extraction_date_with_hour\n\ntek_list_path_prefix = \"Data/TEKs/\"\ntek_list_current_path = tek_list_path_prefix + f\"/Current/RadarCOVID-TEKs.json\"\ntek_list_daily_path = tek_list_path_prefix + f\"Daily/RadarCOVID-TEKs-{extraction_date}.json\"\ntek_list_hourly_path = tek_list_path_prefix + f\"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json\"\n\nfor path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:\n os.makedirs(os.path.dirname(path), exist_ok=True)\n\ntek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier]\ntek_list_base_df.drop(columns=[\"extraction_date\", \"extraction_date_with_hour\"]).to_json(\n tek_list_current_path,\n lines=True, orient=\"records\")\ntek_list_base_df.drop(columns=[\"extraction_date_with_hour\"]).to_json(\n tek_list_daily_path,\n lines=True, orient=\"records\")\ntek_list_base_df.to_json(\n tek_list_hourly_path,\n lines=True, orient=\"records\")\ntek_list_base_df.head()", "_____no_output_____" ] ], [ [ "### Load TEK Dumps", "_____no_output_____" ] ], [ [ "import glob\n\ndef load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:\n extracted_teks_df = 
pd.DataFrame(columns=[\"region\"])\n file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + \"/RadarCOVID-TEKs-*.json\"))))\n if limit:\n file_paths = file_paths[:limit]\n for file_path in file_paths:\n logging.info(f\"Loading TEKs from '{file_path}'...\")\n iteration_extracted_teks_df = pd.read_json(file_path, lines=True)\n extracted_teks_df = extracted_teks_df.append(\n iteration_extracted_teks_df, sort=False)\n extracted_teks_df[\"region\"] = \\\n extracted_teks_df.region.fillna(spain_region_country_code).copy()\n if region:\n extracted_teks_df = \\\n extracted_teks_df[extracted_teks_df.region == region]\n return extracted_teks_df", "_____no_output_____" ], [ "daily_extracted_teks_df = load_extracted_teks(\n mode=\"Daily\",\n region=report_backend_identifier,\n limit=tek_dumps_load_limit)\ndaily_extracted_teks_df.head()", "_____no_output_____" ], [ "exposure_keys_summary_df_ = daily_extracted_teks_df \\\n .sort_values(\"extraction_date\", ascending=False) \\\n .groupby(\"sample_date\").tek_list.first() \\\n .to_frame()\nexposure_keys_summary_df_.index.name = \"sample_date_string\"\nexposure_keys_summary_df_[\"tek_list\"] = \\\n exposure_keys_summary_df_.tek_list.apply(len)\nexposure_keys_summary_df_ = exposure_keys_summary_df_ \\\n .rename(columns={\"tek_list\": \"shared_teks_by_generation_date\"}) \\\n .sort_index(ascending=False)\nexposure_keys_summary_df = exposure_keys_summary_df_\nexposure_keys_summary_df.head()", "_____no_output_____" ] ], [ [ "### Daily New TEKs", "_____no_output_____" ] ], [ [ "tek_list_df = daily_extracted_teks_df.groupby(\"extraction_date\").tek_list.apply(\n lambda x: set(sum(x, []))).reset_index()\ntek_list_df = tek_list_df.set_index(\"extraction_date\").sort_index(ascending=True)\ntek_list_df.head()", "_____no_output_____" ], [ "def compute_teks_by_generation_and_upload_date(date):\n day_new_teks_set_df = tek_list_df.copy().diff()\n try:\n day_new_teks_set = day_new_teks_set_df[\n day_new_teks_set_df.index == date].tek_list.item()\n except ValueError:\n day_new_teks_set = None\n if pd.isna(day_new_teks_set):\n day_new_teks_set = set()\n day_new_teks_df = daily_extracted_teks_df[\n daily_extracted_teks_df.extraction_date == date].copy()\n day_new_teks_df[\"shared_teks\"] = \\\n day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))\n day_new_teks_df[\"shared_teks\"] = \\\n day_new_teks_df.shared_teks.apply(len)\n day_new_teks_df[\"upload_date\"] = date\n day_new_teks_df.rename(columns={\"sample_date\": \"generation_date\"}, inplace=True)\n day_new_teks_df = day_new_teks_df[\n [\"upload_date\", \"generation_date\", \"shared_teks\"]]\n day_new_teks_df[\"generation_to_upload_days\"] = \\\n (pd.to_datetime(day_new_teks_df.upload_date) -\n pd.to_datetime(day_new_teks_df.generation_date)).dt.days\n day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]\n return day_new_teks_df\n\nshared_teks_generation_to_upload_df = pd.DataFrame()\nfor upload_date in daily_extracted_teks_df.extraction_date.unique():\n shared_teks_generation_to_upload_df = \\\n shared_teks_generation_to_upload_df.append(\n compute_teks_by_generation_and_upload_date(date=upload_date))\nshared_teks_generation_to_upload_df \\\n .sort_values([\"upload_date\", \"generation_date\"], ascending=False, inplace=True)\nshared_teks_generation_to_upload_df.tail()", "<ipython-input-29-827222b35590>:4: FutureWarning: `item` has been deprecated and will be removed in a future version\n day_new_teks_set = day_new_teks_set_df[\n" ], [ "today_new_teks_df = \\\n 
shared_teks_generation_to_upload_df[\n shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()\ntoday_new_teks_df.tail()", "_____no_output_____" ], [ "if not today_new_teks_df.empty:\n today_new_teks_df.set_index(\"generation_to_upload_days\") \\\n .sort_index().shared_teks.plot.bar()", "_____no_output_____" ], [ "generation_to_upload_period_pivot_df = \\\n shared_teks_generation_to_upload_df[\n [\"upload_date\", \"generation_to_upload_days\", \"shared_teks\"]] \\\n .pivot(index=\"upload_date\", columns=\"generation_to_upload_days\") \\\n .sort_index(ascending=False).fillna(0).astype(int) \\\n .droplevel(level=0, axis=1)\ngeneration_to_upload_period_pivot_df.head()", "_____no_output_____" ], [ "new_tek_df = tek_list_df.diff().tek_list.apply(\n lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()\nnew_tek_df.rename(columns={\n \"tek_list\": \"shared_teks_by_upload_date\",\n \"extraction_date\": \"sample_date_string\",}, inplace=True)\nnew_tek_df.tail()", "_____no_output_____" ], [ "shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[\n shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \\\n [[\"upload_date\", \"shared_teks\"]].rename(\n columns={\n \"upload_date\": \"sample_date_string\",\n \"shared_teks\": \"shared_teks_uploaded_on_generation_date\",\n })\nshared_teks_uploaded_on_generation_date_df.head()", "_____no_output_____" ], [ "estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \\\n .groupby([\"upload_date\"]).shared_teks.max().reset_index() \\\n .sort_values([\"upload_date\"], ascending=False) \\\n .rename(columns={\n \"upload_date\": \"sample_date_string\",\n \"shared_teks\": \"shared_diagnoses\",\n })\ninvalid_shared_diagnoses_dates_mask = \\\n estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates)\nestimated_shared_diagnoses_df[invalid_shared_diagnoses_dates_mask] = 0\nestimated_shared_diagnoses_df.head()", "_____no_output_____" ] ], [ [ "### Hourly New TEKs", "_____no_output_____" ] ], [ [ "hourly_extracted_teks_df = load_extracted_teks(\n mode=\"Hourly\", region=report_backend_identifier, limit=25)\nhourly_extracted_teks_df.head()", "_____no_output_____" ], [ "hourly_new_tek_count_df = hourly_extracted_teks_df \\\n .groupby(\"extraction_date_with_hour\").tek_list. 
\\\n apply(lambda x: set(sum(x, []))).reset_index().copy()\nhourly_new_tek_count_df = hourly_new_tek_count_df.set_index(\"extraction_date_with_hour\") \\\n .sort_index(ascending=True)\n\nhourly_new_tek_count_df[\"new_tek_list\"] = hourly_new_tek_count_df.tek_list.diff()\nhourly_new_tek_count_df[\"new_tek_count\"] = hourly_new_tek_count_df.new_tek_list.apply(\n lambda x: len(x) if not pd.isna(x) else 0)\nhourly_new_tek_count_df.rename(columns={\n \"new_tek_count\": \"shared_teks_by_upload_date\"}, inplace=True)\nhourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[\n \"extraction_date_with_hour\", \"shared_teks_by_upload_date\"]]\nhourly_new_tek_count_df.head()", "_____no_output_____" ], [ "hourly_summary_df = hourly_new_tek_count_df.copy()\nhourly_summary_df.set_index(\"extraction_date_with_hour\", inplace=True)\nhourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()\nhourly_summary_df[\"datetime_utc\"] = pd.to_datetime(\n hourly_summary_df.extraction_date_with_hour, format=\"%Y-%m-%d@%H\")\nhourly_summary_df.set_index(\"datetime_utc\", inplace=True)\nhourly_summary_df = hourly_summary_df.tail(-1)\nhourly_summary_df.head()", "_____no_output_____" ] ], [ [ "### Official Statistics", "_____no_output_____" ] ], [ [ "import requests\nimport pandas.io.json\n\nofficial_stats_response = requests.get(\"https://radarcovid.covid19.gob.es/kpi/statistics/basics\")\nofficial_stats_response.raise_for_status()\nofficial_stats_df_ = pandas.io.json.json_normalize(official_stats_response.json())", "_____no_output_____" ], [ "official_stats_df = official_stats_df_.copy()\nofficial_stats_df[\"date\"] = pd.to_datetime(official_stats_df[\"date\"], dayfirst=True)\nofficial_stats_df.head()", "_____no_output_____" ], [ "official_stats_column_map = {\n \"date\": \"sample_date\",\n \"applicationsDownloads.totalAcummulated\": \"app_downloads_es_accumulated\",\n \"communicatedContagions.totalAcummulated\": \"shared_diagnoses_es_accumulated\",\n}\naccumulated_suffix = \"_accumulated\"\naccumulated_values_columns = \\\n list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values()))\ninterpolated_values_columns = \\\n list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns))", "_____no_output_____" ], [ "official_stats_df = \\\n official_stats_df[official_stats_column_map.keys()] \\\n .rename(columns=official_stats_column_map)\nofficial_stats_df[\"extraction_date\"] = extraction_date\nofficial_stats_df.head()", "_____no_output_____" ], [ "official_stats_path = \"Data/Statistics/Current/RadarCOVID-Statistics.json\"\nprevious_official_stats_df = pd.read_json(official_stats_path, orient=\"records\", lines=True)\nprevious_official_stats_df[\"sample_date\"] = pd.to_datetime(previous_official_stats_df[\"sample_date\"], dayfirst=True)\nofficial_stats_df = official_stats_df.append(previous_official_stats_df)\nofficial_stats_df.head()", "_____no_output_____" ], [ "official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)]\nofficial_stats_df.sort_values(\"extraction_date\", ascending=False, inplace=True)\nofficial_stats_df.drop_duplicates(subset=[\"sample_date\"], keep=\"first\", inplace=True)\nofficial_stats_df.head()", "_____no_output_____" ], [ "official_stats_stored_df = official_stats_df.copy()\nofficial_stats_stored_df[\"sample_date\"] = official_stats_stored_df.sample_date.dt.strftime(\"%Y-%m-%d\")\nofficial_stats_stored_df.to_json(official_stats_path, orient=\"records\", lines=True)", "_____no_output_____" 
], [ "official_stats_df.drop(columns=[\"extraction_date\"], inplace=True)\nofficial_stats_df = confirmed_days_df.merge(official_stats_df, how=\"left\")\nofficial_stats_df.sort_values(\"sample_date\", ascending=False, inplace=True)\nofficial_stats_df.head()", "_____no_output_____" ], [ "official_stats_df[accumulated_values_columns] = \\\n official_stats_df[accumulated_values_columns] \\\n .astype(float).interpolate(limit_area=\"inside\")\nofficial_stats_df[interpolated_values_columns] = \\\n official_stats_df[accumulated_values_columns].diff(periods=-1)\nofficial_stats_df.drop(columns=\"sample_date\", inplace=True)\nofficial_stats_df.head()", "_____no_output_____" ] ], [ [ "### Data Merge", "_____no_output_____" ] ], [ [ "result_summary_df = exposure_keys_summary_df.merge(\n new_tek_df, on=[\"sample_date_string\"], how=\"outer\")\nresult_summary_df.head()", "_____no_output_____" ], [ "result_summary_df = result_summary_df.merge(\n shared_teks_uploaded_on_generation_date_df, on=[\"sample_date_string\"], how=\"outer\")\nresult_summary_df.head()", "_____no_output_____" ], [ "result_summary_df = result_summary_df.merge(\n estimated_shared_diagnoses_df, on=[\"sample_date_string\"], how=\"outer\")\nresult_summary_df.head()", "_____no_output_____" ], [ "result_summary_df = result_summary_df.merge(\n official_stats_df, on=[\"sample_date_string\"], how=\"outer\")\nresult_summary_df.head()", "_____no_output_____" ], [ "result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge(\n result_summary_df, on=[\"sample_date_string\"], how=\"left\")\nresult_summary_df.head()", "_____no_output_____" ], [ "result_summary_df = confirmed_es_df.tail(daily_summary_days).merge(\n result_summary_df, on=[\"sample_date_string\"], how=\"left\")\nresult_summary_df.head()", "_____no_output_____" ], [ "result_summary_df[\"sample_date\"] = pd.to_datetime(result_summary_df.sample_date_string)\nresult_summary_df = result_summary_df.merge(source_regions_for_summary_df, how=\"left\")\nresult_summary_df.set_index([\"sample_date\", \"source_regions\"], inplace=True)\nresult_summary_df.drop(columns=[\"sample_date_string\"], inplace=True)\nresult_summary_df.sort_index(ascending=False, inplace=True)\nresult_summary_df.head()", "_____no_output_____" ], [ "with pd.option_context(\"mode.use_inf_as_na\", True):\n result_summary_df = result_summary_df.fillna(0).astype(int)\n result_summary_df[\"teks_per_shared_diagnosis\"] = \\\n (result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)\n result_summary_df[\"shared_diagnoses_per_covid_case\"] = \\\n (result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)\n result_summary_df[\"shared_diagnoses_per_covid_case_es\"] = \\\n (result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0)\n\nresult_summary_df.head(daily_plot_days)", "_____no_output_____" ], [ "def compute_aggregated_results_summary(days) -> pd.DataFrame:\n aggregated_result_summary_df = result_summary_df.copy()\n aggregated_result_summary_df[\"covid_cases_for_ratio\"] = \\\n aggregated_result_summary_df.covid_cases.mask(\n aggregated_result_summary_df.shared_diagnoses == 0, 0)\n aggregated_result_summary_df[\"covid_cases_for_ratio_es\"] = \\\n aggregated_result_summary_df.covid_cases_es.mask(\n aggregated_result_summary_df.shared_diagnoses_es == 0, 0)\n aggregated_result_summary_df = aggregated_result_summary_df \\\n .sort_index(ascending=True).fillna(0).rolling(days).agg({\n \"covid_cases\": \"sum\",\n \"covid_cases_es\": \"sum\",\n 
\"covid_cases_for_ratio\": \"sum\",\n \"covid_cases_for_ratio_es\": \"sum\",\n \"shared_teks_by_generation_date\": \"sum\",\n \"shared_teks_by_upload_date\": \"sum\",\n \"shared_diagnoses\": \"sum\",\n \"shared_diagnoses_es\": \"sum\",\n }).sort_index(ascending=False)\n\n with pd.option_context(\"mode.use_inf_as_na\", True):\n aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int)\n aggregated_result_summary_df[\"teks_per_shared_diagnosis\"] = \\\n (aggregated_result_summary_df.shared_teks_by_upload_date /\n aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)\n aggregated_result_summary_df[\"shared_diagnoses_per_covid_case\"] = \\\n (aggregated_result_summary_df.shared_diagnoses /\n aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)\n aggregated_result_summary_df[\"shared_diagnoses_per_covid_case_es\"] = \\\n (aggregated_result_summary_df.shared_diagnoses_es /\n aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0)\n\n return aggregated_result_summary_df", "_____no_output_____" ], [ "aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7)\naggregated_result_with_7_days_window_summary_df.head()", "_____no_output_____" ], [ "last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient=\"records\")[1]\nlast_7_days_summary", "_____no_output_____" ], [ "aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=13)\nlast_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient=\"records\")[1]\nlast_14_days_summary", "_____no_output_____" ] ], [ [ "## Report Results", "_____no_output_____" ] ], [ [ "display_column_name_mapping = {\n \"sample_date\": \"Sample\\u00A0Date\\u00A0(UTC)\",\n \"source_regions\": \"Source Countries\",\n \"datetime_utc\": \"Timestamp (UTC)\",\n \"upload_date\": \"Upload Date (UTC)\",\n \"generation_to_upload_days\": \"Generation to Upload Period in Days\",\n \"region\": \"Backend\",\n \"region_x\": \"Backend\\u00A0(A)\",\n \"region_y\": \"Backend\\u00A0(B)\",\n \"common_teks\": \"Common TEKs Shared Between Backends\",\n \"common_teks_fraction\": \"Fraction of TEKs in Backend (A) Available in Backend (B)\",\n \"covid_cases\": \"COVID-19 Cases (Source Countries)\",\n \"shared_teks_by_generation_date\": \"Shared TEKs by Generation Date (Source Countries)\",\n \"shared_teks_by_upload_date\": \"Shared TEKs by Upload Date (Source Countries)\",\n \"shared_teks_uploaded_on_generation_date\": \"Shared TEKs Uploaded on Generation Date (Source Countries)\",\n \"shared_diagnoses\": \"Shared Diagnoses (Source Countries – Estimation)\",\n \"teks_per_shared_diagnosis\": \"TEKs Uploaded per Shared Diagnosis (Source Countries)\",\n \"shared_diagnoses_per_covid_case\": \"Usage Ratio (Source Countries)\",\n\n \"covid_cases_es\": \"COVID-19 Cases (Spain)\",\n \"app_downloads_es\": \"App Downloads (Spain – Official)\",\n \"shared_diagnoses_es\": \"Shared Diagnoses (Spain – Official)\",\n \"shared_diagnoses_per_covid_case_es\": \"Usage Ratio (Spain)\",\n}", "_____no_output_____" ], [ "summary_columns = [\n \"covid_cases\",\n \"shared_teks_by_generation_date\",\n \"shared_teks_by_upload_date\",\n \"shared_teks_uploaded_on_generation_date\",\n \"shared_diagnoses\",\n \"teks_per_shared_diagnosis\",\n \"shared_diagnoses_per_covid_case\",\n\n \"covid_cases_es\",\n \"app_downloads_es\",\n \"shared_diagnoses_es\",\n \"shared_diagnoses_per_covid_case_es\",\n]\n\nsummary_percentage_columns= [\n 
\"shared_diagnoses_per_covid_case_es\",\n \"shared_diagnoses_per_covid_case\",\n]", "_____no_output_____" ] ], [ [ "### Daily Summary Table", "_____no_output_____" ] ], [ [ "result_summary_df_ = result_summary_df.copy()\nresult_summary_df = result_summary_df[summary_columns]\nresult_summary_with_display_names_df = result_summary_df \\\n .rename_axis(index=display_column_name_mapping) \\\n .rename(columns=display_column_name_mapping)\nresult_summary_with_display_names_df", "_____no_output_____" ] ], [ [ "### Daily Summary Plots", "_____no_output_____" ] ], [ [ "result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \\\n .droplevel(level=[\"source_regions\"]) \\\n .rename_axis(index=display_column_name_mapping) \\\n .rename(columns=display_column_name_mapping)\nsummary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(\n title=f\"Daily Summary\",\n rot=45, subplots=True, figsize=(15, 30), legend=False)\nax_ = summary_ax_list[0]\nax_.get_figure().tight_layout()\nax_.get_figure().subplots_adjust(top=0.95)\n_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime(\"%Y-%m-%d\").tolist()))\n\nfor percentage_column in summary_percentage_columns:\n percentage_column_index = summary_columns.index(percentage_column)\n summary_ax_list[percentage_column_index].yaxis \\\n .set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))", "/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:307: MatplotlibDeprecationWarning: \nThe rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead.\n layout[ax.rowNum, ax.colNum] = ax.get_visible()\n/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:307: MatplotlibDeprecationWarning: \nThe colNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().colspan.start instead.\n layout[ax.rowNum, ax.colNum] = ax.get_visible()\n/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:313: MatplotlibDeprecationWarning: \nThe rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead.\n if not layout[ax.rowNum + 1, ax.colNum]:\n/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:313: MatplotlibDeprecationWarning: \nThe colNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. 
Use ax.get_subplotspec().colspan.start instead.\n if not layout[ax.rowNum + 1, ax.colNum]:\n" ] ], [ [ "### Daily Generation to Upload Period Table", "_____no_output_____" ] ], [ [ "display_generation_to_upload_period_pivot_df = \\\n generation_to_upload_period_pivot_df \\\n .head(backend_generation_days)\ndisplay_generation_to_upload_period_pivot_df \\\n .head(backend_generation_days) \\\n .rename_axis(columns=display_column_name_mapping) \\\n .rename_axis(index=display_column_name_mapping)", "_____no_output_____" ], [ "fig, generation_to_upload_period_pivot_table_ax = plt.subplots(\n figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df)))\ngeneration_to_upload_period_pivot_table_ax.set_title(\n \"Shared TEKs Generation to Upload Period Table\")\nsns.heatmap(\n data=display_generation_to_upload_period_pivot_df\n .rename_axis(columns=display_column_name_mapping)\n .rename_axis(index=display_column_name_mapping),\n fmt=\".0f\",\n annot=True,\n ax=generation_to_upload_period_pivot_table_ax)\ngeneration_to_upload_period_pivot_table_ax.get_figure().tight_layout()", "_____no_output_____" ] ], [ [ "### Hourly Summary Plots ", "_____no_output_____" ] ], [ [ "hourly_summary_ax_list = hourly_summary_df \\\n .rename_axis(index=display_column_name_mapping) \\\n .rename(columns=display_column_name_mapping) \\\n .plot.bar(\n title=f\"Last 24h Summary\",\n rot=45, subplots=True, legend=False)\nax_ = hourly_summary_ax_list[-1]\nax_.get_figure().tight_layout()\nax_.get_figure().subplots_adjust(top=0.9)\n_ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime(\"%Y-%m-%d@%H\").tolist()))", "_____no_output_____" ] ], [ [ "### Publish Results", "_____no_output_____" ] ], [ [ "github_repository = os.environ.get(\"GITHUB_REPOSITORY\")\nif github_repository is None:\n github_repository = \"pvieito/Radar-STATS\"\n\ngithub_project_base_url = \"https://github.com/\" + github_repository\n\ndisplay_formatters = {\n display_column_name_mapping[\"teks_per_shared_diagnosis\"]: lambda x: f\"{x:.2f}\" if x != 0 else \"\",\n display_column_name_mapping[\"shared_diagnoses_per_covid_case\"]: lambda x: f\"{x:.2%}\" if x != 0 else \"\",\n display_column_name_mapping[\"shared_diagnoses_per_covid_case_es\"]: lambda x: f\"{x:.2%}\" if x != 0 else \"\",\n}\ngeneral_columns = \\\n list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values()))\ngeneral_formatter = lambda x: f\"{x}\" if x != 0 else \"\"\ndisplay_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns)))\n\ndaily_summary_table_html = result_summary_with_display_names_df \\\n .head(daily_plot_days) \\\n .rename_axis(index=display_column_name_mapping) \\\n .rename(columns=display_column_name_mapping) \\\n .to_html(formatters=display_formatters)\nmulti_backend_summary_table_html = multi_backend_summary_df \\\n .head(daily_plot_days) \\\n .rename_axis(columns=display_column_name_mapping) \\\n .rename(columns=display_column_name_mapping) \\\n .rename_axis(index=display_column_name_mapping) \\\n .to_html(formatters=display_formatters)\n\ndef format_multi_backend_cross_sharing_fraction(x):\n if pd.isna(x):\n return \"-\"\n elif round(x * 100, 1) == 0:\n return \"\"\n else:\n return f\"{x:.1%}\"\n\nmulti_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \\\n .rename_axis(columns=display_column_name_mapping) \\\n .rename(columns=display_column_name_mapping) \\\n .rename_axis(index=display_column_name_mapping) \\\n .to_html(\n classes=\"table-center\",\n 
formatters=display_formatters,\n float_format=format_multi_backend_cross_sharing_fraction)\nmulti_backend_cross_sharing_summary_table_html = \\\n multi_backend_cross_sharing_summary_table_html \\\n .replace(\"<tr>\",\"<tr style=\\\"text-align: center;\\\">\")\n\nextraction_date_result_summary_df = \\\n result_summary_df[result_summary_df.index.get_level_values(\"sample_date\") == extraction_date]\nextraction_date_result_hourly_summary_df = \\\n hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]\n\ncovid_cases = \\\n extraction_date_result_summary_df.covid_cases.item()\nshared_teks_by_generation_date = \\\n extraction_date_result_summary_df.shared_teks_by_generation_date.item()\nshared_teks_by_upload_date = \\\n extraction_date_result_summary_df.shared_teks_by_upload_date.item()\nshared_diagnoses = \\\n extraction_date_result_summary_df.shared_diagnoses.item()\nteks_per_shared_diagnosis = \\\n extraction_date_result_summary_df.teks_per_shared_diagnosis.item()\nshared_diagnoses_per_covid_case = \\\n extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()\n\nshared_teks_by_upload_date_last_hour = \\\n extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int)\n\ndisplay_source_regions = \", \".join(report_source_regions)\nif len(report_source_regions) == 1:\n display_brief_source_regions = report_source_regions[0]\nelse:\n display_brief_source_regions = f\"{len(report_source_regions)} 🇪🇺\"", "<ipython-input-67-0a0cb8e530af>:55: FutureWarning: `item` has been deprecated and will be removed in a future version\n extraction_date_result_summary_df.covid_cases.item()\n<ipython-input-67-0a0cb8e530af>:57: FutureWarning: `item` has been deprecated and will be removed in a future version\n extraction_date_result_summary_df.shared_teks_by_generation_date.item()\n<ipython-input-67-0a0cb8e530af>:59: FutureWarning: `item` has been deprecated and will be removed in a future version\n extraction_date_result_summary_df.shared_teks_by_upload_date.item()\n<ipython-input-67-0a0cb8e530af>:61: FutureWarning: `item` has been deprecated and will be removed in a future version\n extraction_date_result_summary_df.shared_diagnoses.item()\n<ipython-input-67-0a0cb8e530af>:63: FutureWarning: `item` has been deprecated and will be removed in a future version\n extraction_date_result_summary_df.teks_per_shared_diagnosis.item()\n<ipython-input-67-0a0cb8e530af>:65: FutureWarning: `item` has been deprecated and will be removed in a future version\n extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()\n" ], [ "def get_temporary_image_path() -> str:\n return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + \".png\")\n\ndef save_temporary_plot_image(ax):\n if isinstance(ax, np.ndarray):\n ax = ax[0]\n media_path = get_temporary_image_path()\n ax.get_figure().savefig(media_path)\n return media_path\n\ndef save_temporary_dataframe_image(df):\n import dataframe_image as dfi\n df = df.copy()\n df_styler = df.style.format(display_formatters)\n media_path = get_temporary_image_path()\n dfi.export(df_styler, media_path)\n return media_path", "_____no_output_____" ], [ "summary_plots_image_path = save_temporary_plot_image(\n ax=summary_ax_list)\nsummary_table_image_path = save_temporary_dataframe_image(\n df=result_summary_with_display_names_df)\nhourly_summary_plots_image_path = save_temporary_plot_image(\n ax=hourly_summary_ax_list)\nmulti_backend_summary_table_image_path = save_temporary_dataframe_image(\n 
df=multi_backend_summary_df)\ngeneration_to_upload_period_pivot_table_image_path = save_temporary_plot_image(\n ax=generation_to_upload_period_pivot_table_ax)", "_____no_output_____" ] ], [ [ "### Save Results", "_____no_output_____" ] ], [ [ "report_resources_path_prefix = \"Data/Resources/Current/RadarCOVID-Report-\"\nresult_summary_df.to_csv(\n report_resources_path_prefix + \"Summary-Table.csv\")\nresult_summary_df.to_html(\n report_resources_path_prefix + \"Summary-Table.html\")\nhourly_summary_df.to_csv(\n report_resources_path_prefix + \"Hourly-Summary-Table.csv\")\nmulti_backend_summary_df.to_csv(\n report_resources_path_prefix + \"Multi-Backend-Summary-Table.csv\")\nmulti_backend_cross_sharing_summary_df.to_csv(\n report_resources_path_prefix + \"Multi-Backend-Cross-Sharing-Summary-Table.csv\")\ngeneration_to_upload_period_pivot_df.to_csv(\n report_resources_path_prefix + \"Generation-Upload-Period-Table.csv\")\n_ = shutil.copyfile(\n summary_plots_image_path,\n report_resources_path_prefix + \"Summary-Plots.png\")\n_ = shutil.copyfile(\n summary_table_image_path,\n report_resources_path_prefix + \"Summary-Table.png\")\n_ = shutil.copyfile(\n hourly_summary_plots_image_path,\n report_resources_path_prefix + \"Hourly-Summary-Plots.png\")\n_ = shutil.copyfile(\n multi_backend_summary_table_image_path,\n report_resources_path_prefix + \"Multi-Backend-Summary-Table.png\")\n_ = shutil.copyfile(\n generation_to_upload_period_pivot_table_image_path,\n report_resources_path_prefix + \"Generation-Upload-Period-Table.png\")", "_____no_output_____" ] ], [ [ "### Publish Results as JSON", "_____no_output_____" ] ], [ [ "def generate_summary_api_results(df: pd.DataFrame) -> list:\n api_df = df.reset_index().copy()\n api_df[\"sample_date_string\"] = \\\n api_df[\"sample_date\"].dt.strftime(\"%Y-%m-%d\")\n api_df[\"source_regions\"] = \\\n api_df[\"source_regions\"].apply(lambda x: x.split(\",\"))\n return api_df.to_dict(orient=\"records\")\n\nsummary_api_results = \\\n generate_summary_api_results(df=result_summary_df)\ntoday_summary_api_results = \\\n generate_summary_api_results(df=extraction_date_result_summary_df)[0]\n\nsummary_results = dict(\n backend_identifier=report_backend_identifier,\n source_regions=report_source_regions,\n extraction_datetime=extraction_datetime,\n extraction_date=extraction_date,\n extraction_date_with_hour=extraction_date_with_hour,\n last_hour=dict(\n shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour,\n shared_diagnoses=0,\n ),\n today=today_summary_api_results,\n last_7_days=last_7_days_summary,\n last_14_days=last_14_days_summary,\n daily_results=summary_api_results)\n\nsummary_results = \\\n json.loads(pd.Series([summary_results]).to_json(orient=\"records\"))[0]\n\nwith open(report_resources_path_prefix + \"Summary-Results.json\", \"w\") as f:\n json.dump(summary_results, f, indent=4)", "_____no_output_____" ] ], [ [ "### Publish on README", "_____no_output_____" ] ], [ [ "with open(\"Data/Templates/README.md\", \"r\") as f:\n readme_contents = f.read()\n\nreadme_contents = readme_contents.format(\n extraction_date_with_hour=extraction_date_with_hour,\n github_project_base_url=github_project_base_url,\n daily_summary_table_html=daily_summary_table_html,\n multi_backend_summary_table_html=multi_backend_summary_table_html,\n multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html,\n display_source_regions=display_source_regions)\n\nwith open(\"README.md\", \"w\") as f:\n f.write(readme_contents)", 
"_____no_output_____" ] ], [ [ "### Publish on Twitter", "_____no_output_____" ] ], [ [ "enable_share_to_twitter = os.environ.get(\"RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER\")\ngithub_event_name = os.environ.get(\"GITHUB_EVENT_NAME\")\n\nif enable_share_to_twitter and github_event_name == \"schedule\" and \\\n (shared_teks_by_upload_date_last_hour or not are_today_results_partial):\n import tweepy\n\n twitter_api_auth_keys = os.environ[\"RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS\"]\n twitter_api_auth_keys = twitter_api_auth_keys.split(\":\")\n auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])\n auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])\n\n api = tweepy.API(auth)\n\n summary_plots_media = api.media_upload(summary_plots_image_path)\n summary_table_media = api.media_upload(summary_table_image_path)\n generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path)\n media_ids = [\n summary_plots_media.media_id,\n summary_table_media.media_id,\n generation_to_upload_period_pivot_table_image_media.media_id,\n ]\n\n if are_today_results_partial:\n today_addendum = \" (Partial)\"\n else:\n today_addendum = \"\"\n\n def format_shared_diagnoses_per_covid_case(value) -> str:\n if value == 0:\n return \"–\"\n return f\"≤{value:.2%}\"\n\n display_shared_diagnoses_per_covid_case = \\\n format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case)\n display_last_14_days_shared_diagnoses_per_covid_case = \\\n format_shared_diagnoses_per_covid_case(value=last_14_days_summary[\"shared_diagnoses_per_covid_case\"])\n display_last_14_days_shared_diagnoses_per_covid_case_es = \\\n format_shared_diagnoses_per_covid_case(value=last_14_days_summary[\"shared_diagnoses_per_covid_case_es\"])\n\n status = textwrap.dedent(f\"\"\"\n #RadarCOVID – {extraction_date_with_hour}\n\n Today{today_addendum}:\n - Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour)\n - Shared Diagnoses: ≤{shared_diagnoses:.0f}\n - Usage Ratio: {display_shared_diagnoses_per_covid_case}\n\n Last 14 Days:\n - Usage Ratio (Estimation): {display_last_14_days_shared_diagnoses_per_covid_case}\n - Usage Ratio (Official): {display_last_14_days_shared_diagnoses_per_covid_case_es}\n\n Info: {github_project_base_url}#documentation\n \"\"\")\n status = status.encode(encoding=\"utf-8\")\n api.update_status(status=status, media_ids=media_ids)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
e7fff3134ef38b7f5713f8b7b3f07357db3a9abe
2,363
ipynb
Jupyter Notebook
chapter01/exercise10.ipynb
soerenberg/probability-and-computing-exercises
e892852e598d44e55c30350027c5f6f94915da7b
[ "MIT" ]
1
2021-11-04T14:41:24.000Z
2021-11-04T14:41:24.000Z
chapter01/exercise10.ipynb
soerenberg/probability-and-computing-exercises
e892852e598d44e55c30350027c5f6f94915da7b
[ "MIT" ]
null
null
null
chapter01/exercise10.ipynb
soerenberg/probability-and-computing-exercises
e892852e598d44e55c30350027c5f6f94915da7b
[ "MIT" ]
null
null
null
24.614583
216
0.524757
[ [ [ "import numpy as np", "_____no_output_____" ] ], [ [ "# Exercise 1.10\n\nI have a fair coin and a two-headed coin. I choose one of the two coins randomly with equal probability and flip it. Given that the flip was heads, what is the probability that I flipped the two-headed coin?\n\n**Solution:**\nLet $F$ denote picking the fair coin and $T$ picking the two-headed coin, respectively. Furthermore, let $H$ denote the event that the chosen coin shows head.\n\nThen we have\n\n$$\nP(T | H) =\n\\frac{P(H \\cap T)}{P(H)} =\n\\frac{P(H | T) \\cdot P(H)}{P(T \\cap H) + P(F \\cap H)} =\n\\frac{P(H | T) \\cdot P(H)}{P(H|T) \\cdot P(T) + P(H|F) \\cdot P(F)} =\n\\frac{1 \\cdot \\tfrac 1 2}{1 \\cdot \\tfrac 1 2 + \\tfrac 1 2 \\tfrac 1 2} =\n\\frac 2 3.\n$$\n\nWe can check this by simulation.", "_____no_output_____" ] ], [ [ "num_samples = 100000\n\nchosen_coin = np.random.randint(low=0, high=2, size=num_samples) # 0 = fair, 1 = two-headed\nheads = np.random.randint(low=0, high=2, size=num_samples) + chosen_coin > 0\n\n(chosen_coin * heads).sum() / heads.sum()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ] ]
e7fff5bd0b20d24c1f2e46b37613d11c4cbd8766
663,653
ipynb
Jupyter Notebook
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
4b2da6e216a6a90c86ebfd2344458213e389a230
[ "MIT" ]
1
2019-04-12T18:56:59.000Z
2019-04-12T18:56:59.000Z
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
4b2da6e216a6a90c86ebfd2344458213e389a230
[ "MIT" ]
null
null
null
Wrangle-act.ipynb
AbdElrhman-m/Wrangle-and-Analyze-Data
4b2da6e216a6a90c86ebfd2344458213e389a230
[ "MIT" ]
null
null
null
45.063693
31,416
0.561439
[ [ [ "# Project: Tweets Data Analysis\n\n## Table of Contents\n<ul>\n<li><a href=\"#intro\">Introduction</a></li>\n<li>\n <a href=\"#wrangling\">Data Wrangling</a>\n <ol>\n <li><a href=\"#gather\">Data Gathering</a></li>\n <li><a href=\"#assess\">Data Assessing</a></li>\n <li><a href=\"#clean\">Data Cleaning</a></li> \n </ol> \n</li>\n<li><a href=\"#eda\">Exploratory Data Analysis</a></li>\n<li><a href=\"#conclusions\">Conclusions</a></li>\n</ul>", "_____no_output_____" ], [ "<a id='intro'></a>\n## Introduction\n\n> wrangle WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations. The Twitter archive is great, but it only contains very basic tweet information. Additional gathering, then assessing and cleaning is required for \"Wow!\"-worthy analyses and visualizations.", "_____no_output_____" ] ], [ [ "# import the packages will be used through the project\nimport numpy as np\nimport pandas as pd\n\n# for twitter API\nimport tweepy\nfrom tweepy import OAuthHandler\nimport json\nfrom timeit import default_timer as timer\n\nimport requests\nimport tweepy\nimport json\nimport os\nimport re\n\n# for Exploratory Data Analysis visually \nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\n# activating the seaborn \nsns.set()", "_____no_output_____" ] ], [ [ "<a id='wrangling'></a>\n## Data Wrangling\n\n\n\n### 1- Gathering Data\n<a id='gather'></a>", "_____no_output_____" ], [ "#### (A) gathering twitter archivement dog rates Data from the provided csv file\n", "_____no_output_____" ] ], [ [ "# Load your data and print out a few lines. Perform operations to inspect data\n# types and look for instances of missing or possibly errant data.\ntwitter_archive = pd.read_csv('twitter-archive-enhanced.csv')", "_____no_output_____" ], [ "twitter_archive.head(5)", "_____no_output_____" ] ], [ [ "#### (B) Geting data from file (image_predictions.tsv) which is hosted on Udacity's servers and should be downloaded programmatically using the Requests library a", "_____no_output_____" ] ], [ [ "# Scrape the image predictions file from the Udacity website\nurl = 'https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv' \nresponse = requests.get(url)\nwith open(os.path.join('image_predictions.tsv'), mode = 'wb') as file:\n file.write(response.content)", "_____no_output_____" ], [ "# Load the image predictions file \n# using \\t beacuse it's \"Tab Separated Value\" file\nimages = pd.read_csv('image_predictions.tsv', sep = '\\t')", "_____no_output_____" ], [ "images.head()", "_____no_output_____" ] ], [ [ "#### (C) Getting data from twitter API ", "_____no_output_____" ] ], [ [ "# Query Twitter API for each tweet in the Twitter archive and save JSON in a text file\n# These are hidden to comply with Twitter's API terms and conditions\nconsumer_key = 'JRJCYqpq8QnnQde8W60rPUwwb'\nconsumer_secret = 'bysFJFrg0sjpWXmMV4EmePkOxLOPvmIgcbB3v0ZrwxrqhTD3bf'\naccess_token = '307362468-CwCujZZ0OaFQ3Ut2xf4dNlEkxuTVADOQkmhj6A2U'\naccess_secret = 'mYAXhcUOmPdUduQMyRbUXZrmcSPy36j7a9aqS6I4aHmWV'\n\nauth = OAuthHandler(consumer_key, consumer_secret)\nauth.set_access_token(access_token, access_secret)\n\napi = tweepy.API(auth, wait_on_rate_limit=True)", "_____no_output_____" ], [ "# NOTE TO STUDENT WITH MOBILE VERIFICATION ISSUES:\n# df_1 is a DataFrame with the twitter_archive_enhanced.csv file. 
You may have to\n\n# change line 17 to match the name of your DataFrame with twitter_archive_enhanced.csv\n# NOTE TO REVIEWER: this student had mobile verification issues so the following\n# Twitter API code was sent to this student from a Udacity instructor\n# Tweet IDs for which to gather additional data via Twitter's API\ndf_1 = twitter_archive\ntweet_ids = df_1.tweet_id.values\nlen(tweet_ids)\n\n# Query Twitter's API for JSON data for each tweet ID in the Twitter archive\ncount = 0\nfails_dict = {}\nstart = timer()\n# Save each tweet's returned JSON as a new line in a .txt file\nwith open('tweet_json.txt', 'w') as outfile:\n # This loop will likely take 20-30 minutes to run because of Twitter's rate limit\n for tweet_id in tweet_ids:\n count += 1\n print(str(count) + \": \" + str(tweet_id))\n try:\n tweet = api.get_status(tweet_id, tweet_mode='extended')\n print(\"Success\")\n json.dump(tweet._json, outfile)\n outfile.write('\\n')\n except tweepy.TweepError as e:\n print(\"Fail\")\n fails_dict[tweet_id] = e\n pass\nend = timer()\nprint(end - start)\nprint(fails_dict)", "1: 892420643555336193\nSuccess\n2: 892177421306343426\nSuccess\n3: 891815181378084864\nSuccess\n4: 891689557279858688\nSuccess\n5: 891327558926688256\nSuccess\n6: 891087950875897856\nSuccess\n7: 890971913173991426\nSuccess\n8: 890729181411237888\nSuccess\n9: 890609185150312448\nSuccess\n10: 890240255349198849\nSuccess\n11: 890006608113172480\nSuccess\n12: 889880896479866881\nSuccess\n13: 889665388333682689\nSuccess\n14: 889638837579907072\nSuccess\n15: 889531135344209921\nSuccess\n16: 889278841981685760\nSuccess\n17: 888917238123831296\nSuccess\n18: 888804989199671297\nSuccess\n19: 888554962724278272\nSuccess\n20: 888202515573088257\nFail\n21: 888078434458587136\nSuccess\n22: 887705289381826560\nSuccess\n23: 887517139158093824\nSuccess\n24: 887473957103951883\nSuccess\n25: 887343217045368832\nSuccess\n26: 887101392804085760\nSuccess\n27: 886983233522544640\nSuccess\n28: 886736880519319552\nSuccess\n29: 886680336477933568\nSuccess\n30: 886366144734445568\nSuccess\n31: 886267009285017600\nSuccess\n32: 886258384151887873\nSuccess\n33: 886054160059072513\nSuccess\n34: 885984800019947520\nSuccess\n35: 885528943205470208\nSuccess\n36: 885518971528720385\nSuccess\n37: 885311592912609280\nSuccess\n38: 885167619883638784\nSuccess\n39: 884925521741709313\nSuccess\n40: 884876753390489601\nSuccess\n41: 884562892145688576\nSuccess\n42: 884441805382717440\nSuccess\n43: 884247878851493888\nSuccess\n44: 884162670584377345\nSuccess\n45: 883838122936631299\nSuccess\n46: 883482846933004288\nSuccess\n47: 883360690899218434\nSuccess\n48: 883117836046086144\nSuccess\n49: 882992080364220416\nSuccess\n50: 882762694511734784\nSuccess\n51: 882627270321602560\nSuccess\n52: 882268110199369728\nSuccess\n53: 882045870035918850\nSuccess\n54: 881906580714921986\nSuccess\n55: 881666595344535552\nSuccess\n56: 881633300179243008\nSuccess\n57: 881536004380872706\nSuccess\n58: 881268444196462592\nSuccess\n59: 880935762899988482\nSuccess\n60: 880872448815771648\nSuccess\n61: 880465832366813184\nSuccess\n62: 880221127280381952\nSuccess\n63: 880095782870896641\nSuccess\n64: 879862464715927552\nSuccess\n65: 879674319642796034\nSuccess\n66: 879492040517615616\nSuccess\n67: 879415818425184262\nSuccess\n68: 879376492567855104\nSuccess\n69: 879130579576475649\nSuccess\n70: 879050749262655488\nSuccess\n71: 879008229531029506\nSuccess\n72: 878776093423087618\nSuccess\n73: 878604707211726852\nSuccess\n74: 878404777348136964\nSuccess\n75: 
878316110768087041\nSuccess\n76: 878281511006478336\nSuccess\n77: 878057613040115712\nSuccess\n78: 877736472329191424\nSuccess\n79: 877611172832227328\nSuccess\n80: 877556246731214848\nSuccess\n81: 877316821321428993\nSuccess\n82: 877201837425926144\nSuccess\n83: 876838120628539392\nSuccess\n84: 876537666061221889\nSuccess\n85: 876484053909872640\nSuccess\n86: 876120275196170240\nSuccess\n87: 875747767867523072\nSuccess\n88: 875144289856114688\nSuccess\n89: 875097192612077568\nSuccess\n90: 875021211251597312\nSuccess\n91: 874680097055178752\nSuccess\n92: 874434818259525634\nSuccess\n93: 874296783580663808\nSuccess\n94: 874057562936811520\nSuccess\n95: 874012996292530176\nSuccess\n96: 873697596434513921\nFail\n97: 873580283840344065\nSuccess\n98: 873337748698140672\nSuccess\n99: 873213775632977920\nSuccess\n100: 872967104147763200\nSuccess\n101: 872820683541237760\nSuccess\n102: 872668790621863937\nFail\n103: 872620804844003328\nSuccess\n104: 872486979161796608\nSuccess\n105: 872261713294495745\nFail\n106: 872122724285648897\nSuccess\n107: 871879754684805121\nSuccess\n108: 871762521631449091\nSuccess\n109: 871515927908634625\nSuccess\n110: 871166179821445120\nSuccess\n111: 871102520638267392\nSuccess\n112: 871032628920680449\nSuccess\n113: 870804317367881728\nSuccess\n114: 870726314365509632\nSuccess\n115: 870656317836468226\nSuccess\n116: 870374049280663552\nSuccess\n117: 870308999962521604\nSuccess\n118: 870063196459192321\nSuccess\n119: 869988702071779329\nFail\n120: 869772420881756160\nSuccess\n121: 869702957897576449\nSuccess\n122: 869596645499047938\nSuccess\n123: 869227993411051520\nSuccess\n124: 868880397819494401\nSuccess\n125: 868639477480148993\nSuccess\n126: 868622495443632128\nSuccess\n127: 868552278524837888\nSuccess\n128: 867900495410671616\nSuccess\n129: 867774946302451713\nSuccess\n130: 867421006826221569\nSuccess\n131: 867072653475098625\nSuccess\n132: 867051520902168576\nSuccess\n133: 866816280283807744\nFail\n134: 866720684873056260\nSuccess\n135: 866686824827068416\nSuccess\n136: 866450705531457537\nSuccess\n137: 866334964761202691\nSuccess\n138: 866094527597207552\nSuccess\n139: 865718153858494464\nSuccess\n140: 865359393868664832\nSuccess\n141: 865006731092295680\nSuccess\n142: 864873206498414592\nSuccess\n143: 864279568663928832\nSuccess\n144: 864197398364647424\nSuccess\n145: 863907417377173506\nSuccess\n146: 863553081350529029\nSuccess\n147: 863471782782697472\nSuccess\n148: 863432100342583297\nSuccess\n149: 863427515083354112\nSuccess\n150: 863079547188785154\nSuccess\n151: 863062471531167744\nSuccess\n152: 862831371563274240\nSuccess\n153: 862722525377298433\nSuccess\n154: 862457590147678208\nSuccess\n155: 862096992088072192\nSuccess\n156: 861769973181624320\nFail\n157: 861383897657036800\nSuccess\n158: 861288531465048066\nSuccess\n159: 861005113778896900\nSuccess\n160: 860981674716409858\nSuccess\n161: 860924035999428608\nSuccess\n162: 860563773140209665\nSuccess\n163: 860524505164394496\nSuccess\n164: 860276583193509888\nSuccess\n165: 860184849394610176\nSuccess\n166: 860177593139703809\nSuccess\n167: 859924526012018688\nSuccess\n168: 859851578198683649\nSuccess\n169: 859607811541651456\nSuccess\n170: 859196978902773760\nSuccess\n171: 859074603037188101\nSuccess\n172: 858860390427611136\nSuccess\n173: 858843525470990336\nSuccess\n174: 858471635011153920\nSuccess\n175: 858107933456039936\nSuccess\n176: 857989990357356544\nSuccess\n177: 857746408056729600\nSuccess\n178: 857393404942143489\nSuccess\n179: 857263160327368704\nSuccess\n180: 
857214891891077121\nSuccess\n181: 857062103051644929\nSuccess\n182: 857029823797047296\nSuccess\n183: 856602993587888130\nSuccess\n184: 856543823941562368\nSuccess\n185: 856526610513747968\nSuccess\n186: 856330835276025856\nSuccess\n187: 856288084350160898\nSuccess\n188: 856282028240666624\nSuccess\n189: 855862651834028034\nSuccess\n190: 855860136149123072\nSuccess\n191: 855857698524602368\nSuccess\n192: 855851453814013952\nSuccess\n193: 855818117272018944\nSuccess\n194: 855459453768019968\nSuccess\n195: 855245323840757760\nSuccess\n196: 855138241867124737\nSuccess\n197: 854732716440526848\nSuccess\n198: 854482394044301312\nSuccess\n199: 854365224396361728\nSuccess\n200: 854120357044912130\nSuccess\n201: 854010172552949760\nSuccess\n202: 853760880890318849\nSuccess\n203: 853639147608842240\nSuccess\n204: 853299958564483072\nSuccess\n205: 852936405516943360\nSuccess\n206: 852912242202992640\nSuccess\n207: 852672615818899456\nSuccess\n208: 852553447878664193\nSuccess\n209: 852311364735569921\nSuccess\n210: 852226086759018497\nSuccess\n211: 852189679701164033\nSuccess\n212: 851953902622658560\nSuccess\n213: 851861385021730816\nSuccess\n214: 851591660324737024\nSuccess\n215: 851464819735769094\nSuccess\n216: 851224888060895234\nSuccess\n217: 850753642995093505\nSuccess\n218: 850380195714523136\nSuccess\n219: 850333567704068097\nSuccess\n220: 850145622816686080\nSuccess\n221: 850019790995546112\nSuccess\n222: 849776966551130114\nSuccess\n223: 849668094696017920\nSuccess\n224: 849412302885593088\nSuccess\n225: 849336543269576704\nSuccess\n226: 849051919805034497\nSuccess\n227: 848690551926992896\nSuccess\n228: 848324959059550208\nSuccess\n229: 848213670039564288\nSuccess\n230: 848212111729840128\nSuccess\n231: 847978865427394560\nSuccess\n232: 847971574464610304\nSuccess\n233: 847962785489326080\nSuccess\n234: 847842811428974592\nSuccess\n235: 847617282490613760\nSuccess\n236: 847606175596138505\nSuccess\n237: 847251039262605312\nSuccess\n238: 847157206088847362\nSuccess\n239: 847116187444137987\nSuccess\n240: 846874817362120707\nSuccess\n241: 846514051647705089\nSuccess\n242: 846505985330044928\nSuccess\n243: 846153765933735936\nSuccess\n244: 846139713627017216\nSuccess\n245: 846042936437604353\nSuccess\n246: 845812042753855489\nSuccess\n247: 845677943972139009\nSuccess\n248: 845459076796616705\nFail\n249: 845397057150107648\nSuccess\n250: 845306882940190720\nSuccess\n251: 845098359547420673\nSuccess\n252: 844979544864018432\nSuccess\n253: 844973813909606400\nSuccess\n254: 844704788403113984\nSuccess\n255: 844580511645339650\nSuccess\n256: 844223788422217728\nSuccess\n257: 843981021012017153\nSuccess\n258: 843856843873095681\nSuccess\n259: 843604394117681152\nSuccess\n260: 843235543001513987\nSuccess\n261: 842892208864923648\n" ], [ "# printing the len of the fails dict and the time in that had taken to make this list in minutes\nlen(fails_dict), 1834.9294512/60", "_____no_output_____" ], [ "tweets_list = []\nwith open('tweet_json.txt', 'r') as json_file:\n# Read the .txt file line by line into a list of dictionaries\n for line in json_file:\n twitter_data = json.loads(line)\n tweets_list.append({'tweet_id': twitter_data['id_str'],\n 'retweet_count': twitter_data['retweet_count'],\n 'favorite_count': twitter_data['favorite_count'],\n 'favorite_count': twitter_data['favorite_count'],\n 'followers_count': twitter_data['user']['followers_count']})\n ", "_____no_output_____" ], [ "# Convert the list of dictionaries to a pandas DataFrame\ntweets_data = pd.DataFrame(tweets_list, 
columns=['tweet_id',\n 'retweet_count', 'favorite_count', 'followers_count'])", "_____no_output_____" ], [ "tweets_data.head()", "_____no_output_____" ], [ "tweets_data.to_csv('tweets_data.csv')", "_____no_output_____" ] ], [ [ "### 2- Data Assessing\n<a id='assess'></a>", "_____no_output_____" ], [ "#### (A) viusal Assessing ", "_____no_output_____" ] ], [ [ "# Display the twitter_archive table\ntwitter_archive.head()", "_____no_output_____" ], [ "twitter_archive", "_____no_output_____" ], [ "twitter_archive[twitter_archive['expanded_urls'].isnull() == False]", "_____no_output_____" ], [ "twitter_archive['text'][1]", "_____no_output_____" ], [ "twitter_archive['rating_denominator'].value_counts()", "_____no_output_____" ], [ "twitter_archive.nunique()", "_____no_output_____" ], [ "twitter_archive.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2356 entries, 0 to 2355\nData columns (total 17 columns):\ntweet_id 2356 non-null int64\nin_reply_to_status_id 78 non-null float64\nin_reply_to_user_id 78 non-null float64\ntimestamp 2356 non-null object\nsource 2356 non-null object\ntext 2356 non-null object\nretweeted_status_id 181 non-null float64\nretweeted_status_user_id 181 non-null float64\nretweeted_status_timestamp 181 non-null object\nexpanded_urls 2297 non-null object\nrating_numerator 2356 non-null int64\nrating_denominator 2356 non-null int64\nname 2356 non-null object\ndoggo 2356 non-null object\nfloofer 2356 non-null object\npupper 2356 non-null object\npuppo 2356 non-null object\ndtypes: float64(4), int64(3), object(10)\nmemory usage: 313.0+ KB\n" ] ], [ [ "The columns of twitter_archive dataframe \n> * *tweet_id* => the unique identifier for each tweet.\n> * *in_reply_to_status_id* => the id of replay tweet\n> * *in_reply_to_user_id* => the id of replay user \n> * *timestamp* => the tweet post time \n> * *source* => the url of the twitter \n> * *text* => text writen with the pic \n> * *retweeted_status_id* => retweeted status id \n> * *retweeted_status_user_id* => retweeted status user id \n> * *retweeted_status_timestamp* => \n> * *expanded_urls* => the url of the twitter tweet \n> * *rating_numerator* => rating numerator \n> * *rating_denominator* => rating denominator \n> * *name* => the name of the bog \n> * *doggo* => doggo dog breed \n> * *floofer* => floofer dog breed \n> * *pupper* => pupper dog breed \n> * *puppo* => pupper dog breed\n\n", "_____no_output_____" ] ], [ [ "images.head()", "_____no_output_____" ], [ "images.tail()", "_____no_output_____" ], [ "images.nunique()", "_____no_output_____" ] ], [ [ "The columns of images dataframe and i'll use in my analysis\n > * tweet_id ==> tweet_id\n > * jpg_url ==> image link\n > * p1 the ==> probiltiy of a certen bread\n > * p1_conf ==> the probility of being this bread\n > * p1_dog the ==> if the value is true or false", "_____no_output_____" ] ], [ [ "# Display the tweets_data table\ntweets_data.head()", "_____no_output_____" ], [ "# Display the tweets_data table\ntweets_data.tail()", "_____no_output_____" ], [ "tweets_data.sample(5)", "_____no_output_____" ] ], [ [ "The columns of tweets_data dataframe\n > * tweet_id ==> the unique identifier for each tweet.\n > * retweet_num ==> image link\n > * favorite_num ==> probiltiy of a certen bread\n > * followers_num ==> the probility of being this bread\n----", "_____no_output_____" ], [ "#### (B) programming Assessing \n", "_____no_output_____" ] ], [ [ "twitter_archive.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2356 entries, 0 to 2355\nData 
columns (total 17 columns):\ntweet_id 2356 non-null int64\nin_reply_to_status_id 78 non-null float64\nin_reply_to_user_id 78 non-null float64\ntimestamp 2356 non-null object\nsource 2356 non-null object\ntext 2356 non-null object\nretweeted_status_id 181 non-null float64\nretweeted_status_user_id 181 non-null float64\nretweeted_status_timestamp 181 non-null object\nexpanded_urls 2297 non-null object\nrating_numerator 2356 non-null int64\nrating_denominator 2356 non-null int64\nname 2356 non-null object\ndoggo 2356 non-null object\nfloofer 2356 non-null object\npupper 2356 non-null object\npuppo 2356 non-null object\ndtypes: float64(4), int64(3), object(10)\nmemory usage: 313.0+ KB\n" ], [ "twitter_archive.isnull().sum()", "_____no_output_____" ], [ "twitter_archive.name.value_counts()", "_____no_output_____" ], [ "twitter_archive.isnull().sum().sum()", "_____no_output_____" ], [ "twitter_archive.describe()", "_____no_output_____" ], [ "twitter_archive.sample(5)", "_____no_output_____" ], [ "twitter_archive.sample(5)", "_____no_output_____" ], [ "twitter_archive.rating_denominator.value_counts()", "_____no_output_____" ], [ "twitter_archive[twitter_archive['rating_denominator'] == 110]", "_____no_output_____" ], [ "images.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2075 entries, 0 to 2074\nData columns (total 12 columns):\ntweet_id 2075 non-null int64\njpg_url 2075 non-null object\nimg_num 2075 non-null int64\np1 2075 non-null object\np1_conf 2075 non-null float64\np1_dog 2075 non-null bool\np2 2075 non-null object\np2_conf 2075 non-null float64\np2_dog 2075 non-null bool\np3 2075 non-null object\np3_conf 2075 non-null float64\np3_dog 2075 non-null bool\ndtypes: bool(3), float64(3), int64(2), object(4)\nmemory usage: 152.1+ KB\n" ], [ "tweets_data.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2339 entries, 0 to 2338\nData columns (total 4 columns):\ntweet_id 2339 non-null object\nretweet_count 2339 non-null int64\nfavorite_count 2339 non-null int64\nfollowers_count 2339 non-null int64\ndtypes: int64(3), object(1)\nmemory usage: 73.2+ KB\n" ], [ "tweets_data.isnull().sum()", "_____no_output_____" ] ], [ [ "### Quality issues:\n\n#### Twitter archive (`twitter_archive`) table\n\n* `tweet_id` data type is an int not a string\n* `timestamp`, `retweeted_status_timestamp` are a string not datatime\n* `in_reply_to_status_id`, `in_reply_to_user_id`, `retweeted_status_id`, retweeted_status_user_id they have alot of missing value as well as there data type is float not string.\n* column `doggo`, `floofer`, `pupper`, `puppo`: values are 'None' with type of string instead instead of boolean(true, false) to be easy to use.\n* removing the non need columns\n* some names are just one lowercase letter\n* clean the source to be only 3 values \n#### Image prediction (`images`) table\n\n* tweet_id is an int type not string\n* colomn `p1_dog`,`p2_dog`,`p3_dog`: sometimes all of them as false value or all of them have true value\n* colomn p1 ,p2, p3 have names start with lowercase and uppercase so we have to make everything lower case\n#### Twitter API (`tweets_data`) table\n has null values we should remove the rows with null\n\n### Tidiness\n\n#### Twitter archive (`twitter_archive`) table\n- there are 2column for rate and they should be only one column\n- `doggo`, `floofer`, `pupper` & `puppo` columns should be merged to one column named `dog_stage`\n\n#### Image prediction (`images`) table\n\n* Some column names need to be more descriptive,`jpg_url` to `img_link`.\n* Image 
predictions table should be added to twitter archive table.\n\n\n#### Twitter API (`tweets_data`) table\n\n- twitter api table columns `retweet_count`, `favorite_count`, `followers_count` should be added to twitter archive table.\n\n", "_____no_output_____" ], [ "\n### 3- Data Cleaning \n<a id='clean'></a>", "_____no_output_____" ] ], [ [ "# making a copy to work one\narchive_clean = twitter_archive.copy()\nimages_clean = images.copy()\ntweets_clean = tweets_data.copy()", "_____no_output_____" ] ], [ [ "## Define\nchaning the rate data type to be float", "_____no_output_____" ] ], [ [ "archive_clean.rating_numerator = archive_clean.rating_numerator.astype(float,copy=False)\n", "_____no_output_____" ], [ "# test\narchive_clean.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2356 entries, 0 to 2355\nData columns (total 17 columns):\ntweet_id 2356 non-null int64\nin_reply_to_status_id 78 non-null float64\nin_reply_to_user_id 78 non-null float64\ntimestamp 2356 non-null object\nsource 2356 non-null object\ntext 2356 non-null object\nretweeted_status_id 181 non-null float64\nretweeted_status_user_id 181 non-null float64\nretweeted_status_timestamp 181 non-null object\nexpanded_urls 2297 non-null object\nrating_numerator 2356 non-null float64\nrating_denominator 2356 non-null int64\nname 2356 non-null object\ndoggo 2356 non-null object\nfloofer 2356 non-null object\npupper 2356 non-null object\npuppo 2356 non-null object\ndtypes: float64(5), int64(2), object(10)\nmemory usage: 313.0+ KB\n" ] ], [ [ "# Define \n fixing the data in `rating_numerator` column \n as in row `46` it's value should be 13.5 but it's only 5 in the data", "_____no_output_____" ] ], [ [ "# for avoiding \"This pattern has match groups\" error from ipython notebook\nimport warnings\nwarnings.filterwarnings(\"ignore\", 'This pattern has match groups')", "_____no_output_____" ], [ "# diplaying the rows that has the problem\narchive_clean[archive_clean.text.str.contains(r\"(\\d+\\.\\d*\\/\\d+)\")][['text', 'rating_numerator']]", "_____no_output_____" ], [ "# storing the index of the rows which have the problem\ninds = archive_clean[archive_clean.text.str.contains(r\"(\\d+\\.\\d*\\/\\d+)\")].index\ninds", "_____no_output_____" ], [ "def fix_rate(inds,col_name):\n # this function take the indexs and the column name we want to fix the rate in\n # extract the the true value from text columm\n # then fix the value in ratting_colimn\n # returns the fixed value\n for i in inds:\n txt = archive_clean.loc[i]['text']\n m = re.search(r\"(\\d+\\.\\d*\\/\\d+)\", txt)\n n = (m.group(1)).split('/')[0]\n n = float(n)\n archive_clean.loc[i, col_name] = n\n \n return archive_clean.iloc[inds][col_name]\n", "_____no_output_____" ], [ "# fix the rating \nfix_rate(inds,'rating_numerator') ", "_____no_output_____" ], [ "# test the fix\narchive_clean[archive_clean.text.str.contains(r\"(\\d+\\.\\d*\\/\\d+)\")][['text', 'rating_numerator']]\n", "_____no_output_____" ] ], [ [ "## Define \n There are retweets should be removed\n", "_____no_output_____" ] ], [ [ "tweets_clean.drop('retweet_count',axis=1,inplace=True)", "_____no_output_____" ], [ "tweets_clean.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2339 entries, 0 to 2338\nData columns (total 3 columns):\ntweet_id 2339 non-null object\nfavorite_count 2339 non-null int64\nfollowers_count 2339 non-null int64\ndtypes: int64(2), object(1)\nmemory usage: 54.9+ KB\n" ] ], [ [ "### define:\n removing the un unnecessary columns for my analysis", "_____no_output_____" ] ], [ [ "# 
drop the column form archive_clean table\narchive_clean.drop(['in_reply_to_status_id','in_reply_to_user_id','retweeted_status_id','retweeted_status_user_id','retweeted_status_timestamp','expanded_urls','text'],axis=1,inplace=True)", "_____no_output_____" ], [ "# test\n\narchive_clean.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2356 entries, 0 to 2355\nData columns (total 10 columns):\ntweet_id 2356 non-null int64\ntimestamp 2356 non-null object\nsource 2356 non-null object\nrating_numerator 2356 non-null float64\nrating_denominator 2356 non-null int64\nname 2356 non-null object\ndoggo 2356 non-null object\nfloofer 2356 non-null object\npupper 2356 non-null object\npuppo 2356 non-null object\ndtypes: float64(1), int64(2), object(7)\nmemory usage: 184.1+ KB\n" ], [ "# drop the column form archive_clean table\nimages.info()\n", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2075 entries, 0 to 2074\nData columns (total 12 columns):\ntweet_id 2075 non-null int64\njpg_url 2075 non-null object\nimg_num 2075 non-null int64\np1 2075 non-null object\np1_conf 2075 non-null float64\np1_dog 2075 non-null bool\np2 2075 non-null object\np2_conf 2075 non-null float64\np2_dog 2075 non-null bool\np3 2075 non-null object\np3_conf 2075 non-null float64\np3_dog 2075 non-null bool\ndtypes: bool(3), float64(3), int64(2), object(4)\nmemory usage: 152.1+ KB\n" ] ], [ [ "### Define:\nthe source column has 3 urls and it will be nicer and cleaner to make a word for each segmentation.", "_____no_output_____" ] ], [ [ "archive_clean.source.value_counts()", "_____no_output_____" ], [ "url_1 = '<a href=\"http://twitter.com/download/iphone\" rel=\"nofollow\">Twitter for iPhone</a>'\nurl_2 = '<a href=\"http://vine.co\" rel=\"nofollow\">Vine - Make a Scene</a>'\nurl_3 = '<a href=\"https://about.twitter.com/products/tweetdeck\" rel=\"nofollow\">TweetDeck</a>'\nurl_4 = '<a href=\"http://twitter.com\" rel=\"nofollow\">Twitter Web Client</a>'\nind_1 = archive_clean[archive_clean['source'] == url_1]['source'].index\nind_2 = archive_clean[archive_clean['source'] == url_2]['source'].index\nind_3 = archive_clean[archive_clean['source'] == url_3]['source'].index\nind_4 = archive_clean[archive_clean['source'] == url_4]['source'].index\narchive_clean.loc[ind_1, 'source'] = 'twitter_for_iphone'\narchive_clean.loc[ind_2, 'source'] = 'vine'\narchive_clean.loc[ind_3, 'source'] = 'tweet_deck'\narchive_clean.loc[ind_4, 'source'] = 'twitter_web'", "_____no_output_____" ], [ "\narchive_clean.source.value_counts()", "_____no_output_____" ], [ "# test\narchive_clean.source.value_counts()", "_____no_output_____" ] ], [ [ "### define:\nfix data types of the ids to make it easy to merge the tables\nhttps://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.astype.html", "_____no_output_____" ] ], [ [ "# Convert tweet_id to str for the tables\narchive_clean.tweet_id = archive_clean.tweet_id.astype(str,copy=False)\nimages_clean.tweet_id = images_clean.tweet_id.astype(str,copy=False)\narchive_clean.tweet_id = archive_clean.tweet_id.astype(str,copy=False)\n\n", "_____no_output_____" ], [ "archive_clean.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2356 entries, 0 to 2355\nData columns (total 10 columns):\ntweet_id 2356 non-null object\ntimestamp 2356 non-null object\nsource 2356 non-null object\nrating_numerator 2356 non-null float64\nrating_denominator 2356 non-null int64\nname 2356 non-null object\ndoggo 2356 non-null object\nfloofer 2356 non-null object\npupper 2356 non-null object\npuppo 2356 
non-null object\ndtypes: float64(1), int64(1), object(8)\nmemory usage: 184.1+ KB\n" ] ], [ [ "Define:\nsome nomert", "_____no_output_____" ], [ "### Define:\nfix the to make the archive_clean timestamp datatype to be the datetime\n", "_____no_output_____" ] ], [ [ "# convert timestamp to datetime data type\narchive_clean.timestamp = pd.to_datetime(archive_clean.timestamp)\n", "_____no_output_____" ] ], [ [ "### define:\n fixing the name column in twitter_clean as there some name is just on the small letter, so I'll replace them by an empty string.", "_____no_output_____" ] ], [ [ "archive_clean.name", "_____no_output_____" ], [ "#replace names lowercase letters with ''\narchive_clean.name = archive_clean.name.str.replace('(^[a-z]*)', '')\n\n#replace '' letters with 'None'\narchive_clean.name = archive_clean.name.replace('', 'None')\n", "_____no_output_____" ], [ "# test\narchive_clean.name.value_counts()", "_____no_output_____" ], [ "#test for the letters\narchive_clean.query('name == \"(^[a-z]*)\"').count()", "_____no_output_____" ] ], [ [ "### define \ntweets_clean data have nulls and we have to remove them\nso we'll drop all the nulls from or dataset", "_____no_output_____" ] ], [ [ "tweets_clean.isnull().sum()", "_____no_output_____" ], [ "tweets_data.isnull().sum()", "_____no_output_____" ], [ "tweets_clean.dropna(axis=0, inplace=True)\n", "_____no_output_____" ], [ "tweets_clean.isnull().sum()", "_____no_output_____" ] ], [ [ "### Define:\n in tweets_clean (from the API) we need to change the data type of favorite_count and followers_count to be int", "_____no_output_____" ] ], [ [ "tweets_clean.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 2339 entries, 0 to 2338\nData columns (total 3 columns):\ntweet_id 2339 non-null object\nfavorite_count 2339 non-null int32\nfollowers_count 2339 non-null int32\ndtypes: int32(2), object(1)\nmemory usage: 54.8+ KB\n" ], [ "tweets_clean.favorite_count = tweets_clean.favorite_count.astype(int,copy=False)\ntweets_clean.followers_count = tweets_clean.followers_count.astype(int,copy=False)", "_____no_output_____" ], [ "#test\ntweets_clean.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 2339 entries, 0 to 2338\nData columns (total 3 columns):\ntweet_id 2339 non-null object\nfavorite_count 2339 non-null int32\nfollowers_count 2339 non-null int32\ndtypes: int32(2), object(1)\nmemory usage: 54.8+ KB\n" ] ], [ [ "### define:\n* column p1 ,p2, p3 have names start with lowercase and uppercase so we have to make everything lower case\n", "_____no_output_____" ] ], [ [ "images_clean['p1'] = images_clean['p1'].str.lower()\nimages_clean['p2'] = images_clean['p2'].str.lower()\nimages_clean['p3'] = images_clean['p3'].str.lower()", "_____no_output_____" ], [ "#test\nimages_clean.head()", "_____no_output_____" ], [ "tweets_clean.head()", "_____no_output_____" ], [ "archive_clean.head()", "_____no_output_____" ] ], [ [ "### Define\nrename the jpg_url to img_link", "_____no_output_____" ] ], [ [ "images_clean.head()", "_____no_output_____" ], [ "images_clean.rename(columns={'jpg_url':'img_link'},inplace=True)", "_____no_output_____" ], [ "#test\nimages_clean.head()", "_____no_output_____" ] ], [ [ "## (2) Tidy ", "_____no_output_____" ], [ "### Define:\n1- making the rating_numerator and rating_denominator to one rating column in archive_clean \nthen remove the two columns", "_____no_output_____" ] ], [ [ "# making and adding the column to the archive_clean dataset \narchive_clean['rating'] = 
archive_clean['rating_numerator']/archive_clean['rating_denominator']", "_____no_output_____" ], [ "#test\narchive_clean.head()", "_____no_output_____" ], [ "# drop the rating_numerator and rating_denominator columns\narchive_clean.drop(['rating_numerator','rating_denominator'],axis=1, inplace=True)\n", "_____no_output_____" ], [ "#test\narchive_clean.head()", "_____no_output_____" ] ], [ [ "### Define:\n2- making the doggo, floofer,pupper and\tpuppo to one dog_stage column in archive_clean \nthen remove the two columns", "_____no_output_____" ] ], [ [ "#1- replace to all the null value in the column\n\ndef remove_None(df, col_name,value):\n # take the df name and col_name and return the col with no None word\n ind = df[df[col_name] == value][col_name].index\n df.loc[ind, col_name] = ''\n return df.head()", "_____no_output_____" ], [ "# replace to all the None value in the column\nremove_None(archive_clean, 'doggo',\"None\")\nremove_None(archive_clean, 'floofer',\"None\")\nremove_None(archive_clean, 'pupper',\"None\")\nremove_None(archive_clean, 'puppo',\"None\")\n", "_____no_output_____" ], [ "# we will melt them together in dog_stage\narchive_clean['dog_stage'] = archive_clean['doggo'] + archive_clean['floofer'] + archive_clean['pupper'] + archive_clean['puppo']", "_____no_output_____" ], [ "# test\narchive_clean['dog_stage'].value_counts(), archive_clean['doggo'].value_counts()", "_____no_output_____" ], [ "# then we will take any unexpect value to be multiple \n\narchive_clean['dog_stage'].replace('', \"multiple\", inplace=True)\narchive_clean['dog_stage'].replace('doggopupper', \"multiple\", inplace=True)\narchive_clean['dog_stage'].replace('doggopuppo', \"multiple\", inplace=True)\narchive_clean['dog_stage'].replace('doggofloofer', \"multiple\", inplace=True)", "_____no_output_____" ], [ "# test\n\narchive_clean['dog_stage'].value_counts()", "_____no_output_____" ], [ "# test\narchive_clean.head()", "_____no_output_____" ], [ "# droping the columns out\narchive_clean.drop(['doggo','floofer','pupper','puppo'],axis=1, inplace=True)", "_____no_output_____" ], [ "# test\narchive_clean.head()", "_____no_output_____" ] ], [ [ "now the `archive_clean` ready to join the other tables", "_____no_output_____" ], [ "### define \n3- in the images_clean dataset i have picked from the 3 Ps one accourding to the highest confedent ", "_____no_output_____" ] ], [ [ "images_clean.head()", "_____no_output_____" ], [ "#define dog_breed function to separate out the 3 columns of breed into one with highest confidence ratio\ndef get_p(r):\n max_num = max(r.p1_conf ,r.p2_conf, r.p3_conf) \n if r.p1_conf == max_num:\n return r.p1\n elif r.p2_conf == max_num:\n return r.p2\n elif r.p3_conf == max_num:\n return r.p3\n ", "_____no_output_____" ], [ "def get_p_conf(r):\n return max(r.p1_conf ,r.p2_conf, r.p3_conf)", "_____no_output_____" ], [ "images_clean['breed'] = images_clean.apply(get_p, axis=1)", "_____no_output_____" ], [ "images_clean['p_conf'] = images_clean.apply(get_p_conf, axis=1)", "_____no_output_____" ], [ "images_clean.head()", "_____no_output_____" ], [ "# droping the columns\nimages_clean.drop(['p1','p1_conf','p1_dog','p2','p2_conf','p2_dog','p3','p3_conf','p3_dog'],axis=1,inplace=True)\n", "_____no_output_____" ], [ "# test\nimages_clean.head()", "_____no_output_____" ] ], [ [ " ### now the data is clean so we ready to marge them to on column ", "_____no_output_____" ] ], [ [ "#merge the two tables\ntwitter_archive_master = pd.merge(left=archive_clean, right=images_clean, how='inner', 
on='tweet_id')\ntwitter_archive_master = pd.merge(left=twitter_archive_master, right=tweets_clean, how='inner', on='tweet_id')\ntwitter_archive_master.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 2066 entries, 0 to 2065\nData columns (total 12 columns):\ntweet_id 2066 non-null object\ntimestamp 2066 non-null datetime64[ns, UTC]\nsource 2066 non-null object\nname 2066 non-null object\nrating 2066 non-null float64\ndog_stage 2066 non-null object\nimg_link 2066 non-null object\nimg_num 2066 non-null int64\nbreed 2066 non-null object\np_conf 2066 non-null float64\nfavorite_count 2066 non-null int32\nfollowers_count 2066 non-null int32\ndtypes: datetime64[ns, UTC](1), float64(2), int32(2), int64(1), object(6)\nmemory usage: 193.7+ KB\n" ], [ "twitter_archive_master.head()", "_____no_output_____" ] ], [ [ "## saving the clean data", "_____no_output_____" ] ], [ [ "# saving the data fram to csv file\ntwitter_archive_master.to_csv('twitter_archive_master.csv', index=False)\n# saving the data fram to sqlite file (data base)\n", "_____no_output_____" ], [ "df = twitter_archive_master", "_____no_output_____" ] ], [ [ "<a id='eda'></a>\n## Exploratory Data Analysis\n\n\n### Research Question 1 : what's the most popular dog_stage ?", "_____no_output_____" ] ], [ [ "counts = df['dog_stage'].value_counts()[1:]\nuni = counts.index\ncounts", "_____no_output_____" ], [ "\n# gentarting a list of the loc or the index for each to be replaced by the tick \nlocs = np.arange(len(uni))\nplt.bar(locs, counts)\nplt.xlabel('Dog stage', fontsize=14)\nplt.ylabel('The number of tweets', fontsize=14)\n# Set text labels:\nplt.xticks(locs, uni, fontsize=12, rotation=90)\nplt.title('the number of tweets for each dog stage', fontsize=16)\nplt.show()", "_____no_output_____" ], [ "#ploting it in pie chart\nplt.pie(counts, labels=uni)\nplt.title('the number of tweets for each dog stage', fontsize=16)\nplt.legend();", "_____no_output_____" ] ], [ [ "#### by ignoring the number of unknowns, from our data we can see that:\n> The greatest number of tweets about the dogs are in pupper dogs with 1055 tweets. \n>\n> the \"doggo\"dogs has 335 tweets.\n>\n> 115 tweets for puppo dogs.\n>\n> 35 tweets for floofer dogs and it's the lowest number of tweets.\n", "_____no_output_____" ], [ "\n### Research Question 2 : what's the most popular dog_stage according to rating average?", "_____no_output_____" ] ], [ [ "rating = df['rating'].groupby(df['dog_stage']).mean()[:-1].sort_values(ascending=False)\nrating", "_____no_output_____" ], [ "#polting the values in barchat\ndog_stage = rating.index\n\nplt.bar(dog_stage, rating)\nplt.xlabel('Dog breed', fontsize=14)\nplt.ylabel('averge rating', fontsize=14)\n# Set text labels:\nplt.title('the number of average rate for each dog breed', fontsize=16)\nplt.show()", "_____no_output_____" ] ], [ [ "> from the bar chat we can see that the floofer dogs tweets 1.2 average rating and the puppo tweets have avrage rating 1.197\n> while the doggo tweets have 1.197 rate. \n> and the pupper tweets have 1.068 rate. 
\n>\n>", "_____no_output_____" ], [ "### Research Question 3 what are the top 10 bead that has the most number of tweets ?", "_____no_output_____" ] ], [ [ "top_breads = df['breed'].value_counts()[:10]\ntopbreds_uni = top_breads.index\ntop_breads", "_____no_output_____" ], [ "# gentarting a list of the loc or the index for each breed to be replaced by the tick \nlocs = np.arange(len(topbreds_uni))\nplt.bar(locs, top_breads)\nplt.xlabel('Dog breed', fontsize=14)\nplt.ylabel('The number of tweets', fontsize=14)\n# Set text labels:\nplt.xticks(locs, topbreds_uni, fontsize=12, rotation=90)\nplt.title('the number of tweets for each dog breed', fontsize=16)\nplt.show()", "_____no_output_____" ] ], [ [ "The top 10 bead that has the most number of tweets are golden_retriever with 750 tweets, labrador_retriever with 495 tweets, pembroke 440 tweets , chihuahua for 405 tweets,285 tweets about pug dogs,220 about chow dogs,210 about samoyed dogs,195 about the toy poodle dogs,190 tweets about pomeranian dogs,150 tweets about the malamute dogs.", "_____no_output_____" ], [ "### Research Question 4 what's the number of retweets form each source ?", "_____no_output_____" ] ], [ [ "df['rating'].groupby(df['source']).mean().sort_values(ascending=False)", "_____no_output_____" ], [ "retweets = df['rating'].groupby(df['source']).mean().sort_values(ascending=False)\nsource = retweets.index\n\nplt.bar(source, retweets)\nplt.xlabel('scource', fontsize=14)\nplt.ylabel('rating average', fontsize=14)\nplt.title('the number of rating average form each scoure', fontsize=16)\nplt.show()", "_____no_output_____" ] ], [ [ ">the average rate source is twitter for iphone with 18.77(about 19) \n>\n>then twitter web with 1.008 average rate\n>\n> 1.006 average rate from tweet deck", "_____no_output_____" ], [ "### Research Question 5 what are the top 4 images that has the most favorite_counts ?", "_____no_output_____" ] ], [ [ "images = df['favorite_count'].groupby(df['img_link']).sum().sort_values(ascending=False).iloc[:4]\nimage_lbl = []\nfor i in range(len(images)):\n x = df[df['img_link'] == images.index[i]]['breed'].iloc[0]\n image_lbl.append(x)\ndog_stage= []\nfor i in range(len(images)):\n x = df[df['img_link'] == images.index[i]]['dog_stage'].iloc[0]\n dog_stage.append(x)\nimage_lbl, dog_stage", "_____no_output_____" ], [ "plt.bar(image_lbl, images)\nplt.xlabel('scource', fontsize=14)\nplt.ylabel('favorite_counts', fontsize=14)\nplt.title('the number of favorite counts form each scoure', fontsize=16)\nplt.xticks(image_lbl, image_lbl, fontsize=12, rotation=90)\n\nplt.show()", "_____no_output_____" ] ], [ [ "\n\n## the top first image is labrador retriever image with 813685 favoirte count\n![Otter](https://pbs.twimg.com/ext_tw_video_thumb/744234667679821824/pu/img/1GaWmtJtdqzZV7jy.jpg)\n\n----\n## then this image of a lakeland terrier dog with 695300 favoirte count\n\n![Otter](https://pbs.twimg.com/media/C2tugXLXgAArJO4.jpg)\n\n----\n## then this image of a chihuahua dog with 629030 favoirte count\n\n![Otter](https://pbs.twimg.com/ext_tw_video_thumb/807106774843039744/pu/img/8XZg1xW35Xp2J6JW.jpg)\n\n----\n## then this image of a with french bulldog 604450 favoirte count\n\n![Otter](https://pbs.twimg.com/media/DAZAUfBXcAAG_Nn.jpg)", "_____no_output_____" ], [ "<a id='conclusions'></a>\n## Conclusions\n> The greatest number of tweets about the dogs are in pupper dogs with 1055 tweets\n>\n> however only 35 tweets about floofer dogs.\n>\n> we can see that the floofer dogs tweets and the puppo tweets have the highest avrage 
rating of 1.2, while the doggo tweets have a 1.197 rating and the pupper tweets a 1.076 rating.\n>\n> the largest number of tweets about a single breed, 750, is about golden retrievers.\n", "_____no_output_____" ], [ "## Limitations\n> I have removed a lot of rows, either because of missing values or because the tweet could not be retrieved by id.\n>\n> I have collapsed the three breed prediction columns into a single breed column, keeping only the prediction with the highest confidence.\n>\n> I want the original ratings (no retweets) that have images. Though there are 5000+ tweets in the dataset, not all are dog ratings and some are retweets.\n\n", "_____no_output_____" ], [ "# Resources\n> [How to lowercase a python dataframe string column if it has missing values?](https://stackoverflow.com/questions/22245171/how-to-lowercase-a-python-dataframe-string-column-if-it-has-missing-values)\n>\n> [Twitter API - get tweets with specific id](https://stackoverflow.com/questions/28384588/twitter-api-get-tweets-with-specific-id)\n>\n> [pandas astype](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.astype.html)\n>\n> [Extract word from string Using python regex](https://stackoverflow.com/questions/24885262/extract-word-from-string-using-python-regex)\n>", "_____no_output_____" ], [ "# Thanks for reviewing my project.\n\nI would be happy to keep in touch with you.\n\n> LinkedIn: [@AbdElrhman-m](https://www.linkedin.com/in/abdelrhman-m/)\n>\n> Email: AbdElrhman.m056@gmail.com", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ] ]