{"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"name":"NLU_training_multi_class_text_classifier_demo_amazon.ipynb","provenance":[],"collapsed_sections":["zkufh760uvF3"]},"kernelspec":{"display_name":"Python 3","name":"python3"}},"cells":[{"cell_type":"markdown","metadata":{"id":"zkufh760uvF3"},"source":["![JohnSnowLabs](https://nlp.johnsnowlabs.com/assets/images/logo.png)\n","\n","[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/JohnSnowLabs/nlu/blob/master/examples/colab/Training/multi_class_text_classification/NLU_training_multi_class_text_classifier_demo_amazon.ipynb)\n","\n","\n","\n","# Training a Deep Learning Classifier with NLU\n","## ClassifierDL (Multi-class Text Classification)\n","## 3-class Amazon phone review classifier training\n","With the [ClassifierDL model](https://nlp.johnsnowlabs.com/docs/en/annotators#classifierdl-multi-class-text-classification) from Spark NLP, you can achieve state-of-the-art results on any multi-class text classification problem.\n","\n","This notebook showcases the following features:\n","\n","- How to train the deep learning classifier\n","- How to store a pipeline to disk\n","- How to load the pipeline from disk (enables NLU offline mode)"]},{"cell_type":"markdown","metadata":{"id":"dur2drhW5Rvi"},"source":["# 1. 
Install Java 8 and NLU"]},{"cell_type":"code","metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"hFGnBCHavltY","executionInfo":{"status":"ok","timestamp":1620192280562,"user_tz":-300,"elapsed":32996,"user":{"displayName":"ahmed lone","photoUrl":"","userId":"02458088882398909889"}},"outputId":"05a6bebd-406d-467d-c575-5c1d63ae192b"},"source":["!wget https://setup.johnsnowlabs.com/nlu/colab.sh -O - | bash\n","\n","import nlu"],"execution_count":null,"outputs":[{"output_type":"stream","text":["--2021-05-05 05:24:08--  https://raw.githubusercontent.com/JohnSnowLabs/nlu/master/scripts/colab_setup.sh\n","Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.110.133, 185.199.111.133, ...\n","Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n","HTTP request sent, awaiting response... 200 OK\n","Length: 1671 (1.6K) [text/plain]\n","Saving to: ‘STDOUT’\n","\n","-                     0%[                    ]       0  --.-KB/s               Installing  NLU 3.0.0 with  PySpark 3.0.2 and Spark NLP 3.0.1 for Google Colab ...\n","-                   100%[===================>]   1.63K  --.-KB/s    in 0.001s  \n","\n","2021-05-05 05:24:08 (1.82 MB/s) - written to stdout [1671/1671]\n","\n"],"name":"stdout"}]},{"cell_type":"markdown","metadata":{"id":"f4KkTfnR5Ugg"},"source":["# 2. Download the Amazon unlocked mobile phones dataset\n","https://www.kaggle.com/PromptCloudHQ/amazon-reviews-unlocked-mobile-phones\n","\n","A dataset of unlocked mobile phone reviews; the original star ratings are mapped to 3 review classes (poor, average, good) for this demo.\n"]},{"cell_type":"code","metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"OrVb5ZMvvrQD","executionInfo":{"status":"ok","timestamp":1620192281175,"user_tz":-300,"elapsed":31025,"user":{"displayName":"ahmed lone","photoUrl":"","userId":"02458088882398909889"}},"outputId":"7d56a78e-05a5-45c8-8f5a-765834be24d2"},"source":["! 
wget http://ckl-it.de/wp-content/uploads/2021/01/Amazon_Unlocked_Mobile.csv"],"execution_count":null,"outputs":[{"output_type":"stream","text":["--2021-05-05 05:24:40--  http://ckl-it.de/wp-content/uploads/2021/01/Amazon_Unlocked_Mobile.csv\n","Resolving ckl-it.de (ckl-it.de)... 217.160.0.108, 2001:8d8:100f:f000::209\n","Connecting to ckl-it.de (ckl-it.de)|217.160.0.108|:80... connected.\n","HTTP request sent, awaiting response... 200 OK\n","Length: 452621 (442K) [text/csv]\n","Saving to: ‘Amazon_Unlocked_Mobile.csv.1’\n","\n","Amazon_Unlocked_Mob 100%[===================>] 442.01K   826KB/s    in 0.5s    \n","\n","2021-05-05 05:24:40 (826 KB/s) - ‘Amazon_Unlocked_Mobile.csv.1’ saved [452621/452621]\n","\n"],"name":"stdout"}]},{"cell_type":"code","metadata":{"colab":{"base_uri":"https://localhost:8080/","height":411},"id":"y4xSRWIhwT28","executionInfo":{"status":"ok","timestamp":1620192282089,"user_tz":-300,"elapsed":31580,"user":{"displayName":"ahmed lone","photoUrl":"","userId":"02458088882398909889"}},"outputId":"612779a7-dcdc-420f-dda5-08842706f782"},"source":["import pandas as pd\n","test_path = '/content/Amazon_Unlocked_Mobile.csv'\n","train_df = pd.read_csv(test_path,sep=\",\")\n","cols = [\"y\",\"text\"]\n","train_df = train_df[cols]\n","from sklearn.model_selection import train_test_split\n","\n","train_df, test_df = train_test_split(train_df, test_size=0.2)\n","train_df\n","\n"],"execution_count":null,"outputs":[{"output_type":"execute_result","data":{"text/html":["<div>\n","<style scoped>\n","    .dataframe tbody tr th:only-of-type {\n","        vertical-align: middle;\n","    }\n","\n","    .dataframe tbody tr th {\n","        vertical-align: top;\n","    }\n","\n","    .dataframe thead th {\n","        text-align: right;\n","    }\n","</style>\n","<table border=\"1\" class=\"dataframe\">\n","  <thead>\n","    <tr style=\"text-align: right;\">\n","      <th></th>\n","      <th>y</th>\n","      <th>text</th>\n","    </tr>\n","  </thead>\n","  
<tbody>\n","    <tr>\n","      <th>42</th>\n","      <td>poor</td>\n","      <td>The product delivery was fast. I received it 4...</td>\n","    </tr>\n","    <tr>\n","      <th>62</th>\n","      <td>good</td>\n","      <td>My wife loves it, works great and easy to use.</td>\n","    </tr>\n","    <tr>\n","      <th>411</th>\n","      <td>poor</td>\n","      <td>loved the phone. I've had it a month and last ...</td>\n","    </tr>\n","    <tr>\n","      <th>197</th>\n","      <td>average</td>\n","      <td>It has missing part: Sim holder. I dont like t...</td>\n","    </tr>\n","    <tr>\n","      <th>645</th>\n","      <td>average</td>\n","      <td>Decent phone for the money, but if you are use...</td>\n","    </tr>\n","    <tr>\n","      <th>...</th>\n","      <td>...</td>\n","      <td>...</td>\n","    </tr>\n","    <tr>\n","      <th>1147</th>\n","      <td>poor</td>\n","      <td>the phone it is terrible ,got some pornovirus,...</td>\n","    </tr>\n","    <tr>\n","      <th>277</th>\n","      <td>average</td>\n","      <td>Phone worked great! Only issue was that it wou...</td>\n","    </tr>\n","    <tr>\n","      <th>891</th>\n","      <td>poor</td>\n","      <td>The phone worked very good the first month, th...</td>\n","    </tr>\n","    <tr>\n","      <th>675</th>\n","      <td>average</td>\n","      <td>I never used the phone. I ordered it and did n...</td>\n","    </tr>\n","    <tr>\n","      <th>930</th>\n","      <td>good</td>\n","      <td>Excellent phone functions recommend</td>\n","    </tr>\n","  </tbody>\n","</table>\n","<p>1200 rows × 2 columns</p>\n","</div>"],"text/plain":["            y                                               text\n","42       poor  The product delivery was fast. I received it 4...\n","62       good     My wife loves it, works great and easy to use.\n","411      poor  loved the phone. I've had it a month and last ...\n","197   average  It has missing part: Sim holder. 
I dont like t...\n","645   average  Decent phone for the money, but if you are use...\n","...       ...                                                ...\n","1147     poor  the phone it is terrible ,got some pornovirus,...\n","277   average  Phone worked great! Only issue was that it wou...\n","891      poor  The phone worked very good the first month, th...\n","675   average  I never used the phone. I ordered it and did n...\n","930      good                Excellent phone functions recommend\n","\n","[1200 rows x 2 columns]"]},"metadata":{"tags":[]},"execution_count":3}]},{"cell_type":"markdown","metadata":{"id":"0296Om2C5anY"},"source":["# 3. Train a Deep Learning Classifier using nlu.load('train.classifier')\n","\n","Your dataset's label column should be named 'y', and the feature column containing the text data should be named 'text'."]},{"cell_type":"code","metadata":{"colab":{"base_uri":"https://localhost:8080/","height":1000},"id":"3ZIPkRkWftBG","executionInfo":{"elapsed":253326,"status":"ok","timestamp":1620190665634,"user":{"displayName":"ahmed lone","photoUrl":"","userId":"02458088882398909889"},"user_tz":-300},"outputId":"49150117-7964-4577-bc0b-3aa9de2e8321"},"source":["# load a trainable pipeline by specifying the train. 
prefix and fit it on a dataset with 'y' (label) and 'text' columns\n","\n","trainable_pipe = nlu.load('train.classifier')\n","fitted_pipe = trainable_pipe.fit(train_df.iloc[:50])\n","\n","\n","# predict with the fitted pipeline on the same dataset and inspect the predictions\n","preds = fitted_pipe.predict(train_df.iloc[:50], output_level='document')\n","preds"],"execution_count":null,"outputs":[{"output_type":"stream","text":["tfhub_use download started this may take some time.\n","Approximate size to download 923.7 MB\n","[OK!]\n","sentence_detector_dl download started this may take some time.\n","Approximate size to download 354.6 KB\n","[OK!]\n"],"name":"stdout"},{"output_type":"execute_result","data":{"text/html":["<div>\n","<style scoped>\n","    .dataframe tbody tr th:only-of-type {\n","        vertical-align: middle;\n","    }\n","\n","    .dataframe tbody tr th {\n","        vertical-align: top;\n","    }\n","\n","    .dataframe thead th {\n","        text-align: right;\n","    }\n","</style>\n","<table border=\"1\" class=\"dataframe\">\n","  <thead>\n","    <tr style=\"text-align: right;\">\n","      <th></th>\n","      <th>origin_index</th>\n","      <th>y</th>\n","      <th>document</th>\n","      <th>sentence</th>\n","      <th>trained_classifier</th>\n","      <th>text</th>\n","      <th>sentence_embedding_use</th>\n","      <th>trained_classifier_confidence_confidence</th>\n","    </tr>\n","  </thead>\n","  <tbody>\n","    <tr>\n","      <th>0</th>\n","      <td>1045</td>\n","      <td>good</td>\n","      <td>excelente</td>\n","      <td>[excelente]</td>\n","      <td>good</td>\n","      <td>excelente</td>\n","      <td>[0.032463133335113525, -0.01719777286052704, -...</td>\n","      <td>0.688769</td>\n","    </tr>\n","    <tr>\n","      <th>1</th>\n","      <td>501</td>\n","      <td>good</td>\n","      <td>Good tanks</td>\n","      <td>[Good tanks]</td>\n","      <td>good</td>\n","      <td>Good tanks</td>\n","      <td>[-0.04523428902029991, 
-0.0027615062426775694,...</td>\n","      <td>0.671200</td>\n","    </tr>\n","    <tr>\n","      <th>2</th>\n","      <td>539</td>\n","      <td>poor</td>\n","      <td>My charger does not work.I would like one that...</td>\n","      <td>[My charger does not work., I would like one t...</td>\n","      <td>average</td>\n","      <td>My charger does not work.I would like one that...</td>\n","      <td>[0.05880051478743553, 0.07787849009037018, -0....</td>\n","      <td>0.964706</td>\n","    </tr>\n","    <tr>\n","      <th>3</th>\n","      <td>1073</td>\n","      <td>poor</td>\n","      <td>Positive: Large screen is good for senior peop...</td>\n","      <td>[Positive: Large screen is good for senior peo...</td>\n","      <td>average</td>\n","      <td>Positive: Large screen is good for senior peop...</td>\n","      <td>[0.06194938346743584, 0.046952128410339355, -0...</td>\n","      <td>0.896623</td>\n","    </tr>\n","    <tr>\n","      <th>4</th>\n","      <td>191</td>\n","      <td>good</td>\n","      <td>works good for 3G and above. Simple to use. No...</td>\n","      <td>[works good for 3G and above., Simple to use.,...</td>\n","      <td>good</td>\n","      <td>works good for 3G and above. Simple to use. 
No...</td>\n","      <td>[0.07321183383464813, 0.023221753537654877, -0...</td>\n","      <td>0.657733</td>\n","    </tr>\n","    <tr>\n","      <th>5</th>\n","      <td>237</td>\n","      <td>average</td>\n","      <td>Well is a old model and they work just fine no...</td>\n","      <td>[Well is a old model and they work just fine n...</td>\n","      <td>average</td>\n","      <td>Well is a old model and they work just fine no...</td>\n","      <td>[0.08121286332607269, 0.07176794856786728, -0....</td>\n","      <td>0.801204</td>\n","    </tr>\n","    <tr>\n","      <th>6</th>\n","      <td>1357</td>\n","      <td>poor</td>\n","      <td>Phone is cheap...only had it for 3-5 months n ...</td>\n","      <td>[Phone is cheap., ..only had it for 3-5 months...</td>\n","      <td>average</td>\n","      <td>Phone is cheap...only had it for 3-5 months n ...</td>\n","      <td>[0.024805108085274696, 0.0025309964548796415, ...</td>\n","      <td>0.644998</td>\n","    </tr>\n","    <tr>\n","      <th>7</th>\n","      <td>543</td>\n","      <td>good</td>\n","      <td>these work nice for the price and that is why ...</td>\n","      <td>[these work nice for the price and that is why...</td>\n","      <td>good</td>\n","      <td>these work nice for the price and that is why ...</td>\n","      <td>[0.02607562020421028, -0.0069553907960653305, ...</td>\n","      <td>0.733167</td>\n","    </tr>\n","    <tr>\n","      <th>8</th>\n","      <td>947</td>\n","      <td>poor</td>\n","      <td>was not what they say good luck getting it to ...</td>\n","      <td>[was not what they say good luck getting it to...</td>\n","      <td>good</td>\n","      <td>was not what they say good luck getting it to ...</td>\n","      <td>[-0.0032445385586470366, 0.02947022207081318, ...</td>\n","      <td>0.481921</td>\n","    </tr>\n","    <tr>\n","      <th>9</th>\n","      <td>1178</td>\n","      <td>good</td>\n","      <td>Just what we needed for an older relative.</td>\n","      <td>[Just what we 
needed for an older relative.]</td>\n","      <td>good</td>\n","      <td>Just what we needed for an older relative.</td>\n","      <td>[0.02692161314189434, -0.06281831860542297, -0...</td>\n","      <td>0.641099</td>\n","    </tr>\n","    <tr>\n","      <th>10</th>\n","      <td>729</td>\n","      <td>good</td>\n","      <td>Just got this phone and it is a great phone. I...</td>\n","      <td>[Just got this phone and it is a great phone.,...</td>\n","      <td>good</td>\n","      <td>Just got this phone and it is a great phone. I...</td>\n","      <td>[0.056339796632528305, -0.030076347291469574, ...</td>\n","      <td>0.733164</td>\n","    </tr>\n","    <tr>\n","      <th>11</th>\n","      <td>570</td>\n","      <td>good</td>\n","      <td>Really nice, my daughter uses this everyday an...</td>\n","      <td>[Really nice, my daughter uses this everyday a...</td>\n","      <td>good</td>\n","      <td>Really nice, my daughter uses this everyday an...</td>\n","      <td>[-0.018931539729237556, 0.0391848087310791, -0...</td>\n","      <td>0.679282</td>\n","    </tr>\n","    <tr>\n","      <th>12</th>\n","      <td>466</td>\n","      <td>poor</td>\n","      <td>A TOTAL DISAPPOINTMENT, TEAM COMES IN CHINESE ...</td>\n","      <td>[A TOTAL DISAPPOINTMENT, TEAM COMES IN CHINESE...</td>\n","      <td>good</td>\n","      <td>A TOTAL DISAPPOINTMENT, TEAM COMES IN CHINESE ...</td>\n","      <td>[0.038112103939056396, -0.061823394149541855, ...</td>\n","      <td>0.492642</td>\n","    </tr>\n","    <tr>\n","      <th>13</th>\n","      <td>80</td>\n","      <td>average</td>\n","      <td>It's a decent for the price.. I've had this on...</td>\n","      <td>[It's a decent for the price., ., I've had thi...</td>\n","      <td>average</td>\n","      <td>It's a decent for the price.. 
I've had this on...</td>\n","      <td>[0.05056222155690193, 0.010562625713646412, -0...</td>\n","      <td>0.929107</td>\n","    </tr>\n","    <tr>\n","      <th>14</th>\n","      <td>1284</td>\n","      <td>average</td>\n","      <td>I really liked the original Sparq cell phone b...</td>\n","      <td>[I really liked the original Sparq cell phone ...</td>\n","      <td>average</td>\n","      <td>I really liked the original Sparq cell phone b...</td>\n","      <td>[0.04752641171216965, 0.005131922196596861, -0...</td>\n","      <td>0.963391</td>\n","    </tr>\n","    <tr>\n","      <th>15</th>\n","      <td>1414</td>\n","      <td>poor</td>\n","      <td>useless and in Chinese</td>\n","      <td>[useless and in Chinese]</td>\n","      <td>good</td>\n","      <td>useless and in Chinese</td>\n","      <td>[0.05265609920024872, 0.021143877878785133, -0...</td>\n","      <td>0.592771</td>\n","    </tr>\n","    <tr>\n","      <th>16</th>\n","      <td>29</td>\n","      <td>poor</td>\n","      <td>Didnt offer clear and concise instructions on ...</td>\n","      <td>[Didnt offer clear and concise instructions on...</td>\n","      <td>average</td>\n","      <td>Didnt offer clear and concise instructions on ...</td>\n","      <td>[-0.022721262648701668, -0.026971762999892235,...</td>\n","      <td>0.989909</td>\n","    </tr>\n","    <tr>\n","      <th>17</th>\n","      <td>424</td>\n","      <td>poor</td>\n","      <td>The phone is amazing other than the worthless ...</td>\n","      <td>[The phone is amazing other than the worthless...</td>\n","      <td>good</td>\n","      <td>The phone is amazing other than the worthless ...</td>\n","      <td>[0.07116073369979858, -0.027407672256231308, -...</td>\n","      <td>0.568540</td>\n","    </tr>\n","    <tr>\n","      <th>18</th>\n","      <td>250</td>\n","      <td>average</td>\n","      <td>It's ambidextrous gimmick sometimes screws up ...</td>\n","      <td>[It's ambidextrous gimmick sometimes screws up...</td>\n","      
<td>average</td>\n","      <td>It's ambidextrous gimmick sometimes screws up ...</td>\n","      <td>[0.0757051557302475, 0.05544551461935043, -0.0...</td>\n","      <td>0.990839</td>\n","    </tr>\n","    <tr>\n","      <th>19</th>\n","      <td>1017</td>\n","      <td>poor</td>\n","      <td>only 1 star because this phone looks good it s...</td>\n","      <td>[only 1 star because this phone looks good it ...</td>\n","      <td>good</td>\n","      <td>only 1 star because this phone looks good it s...</td>\n","      <td>[0.061529386788606644, -0.005719680339097977, ...</td>\n","      <td>0.527347</td>\n","    </tr>\n","    <tr>\n","      <th>20</th>\n","      <td>629</td>\n","      <td>average</td>\n","      <td>Good iPhone but not what i'm looking for</td>\n","      <td>[Good iPhone but not what i'm looking for]</td>\n","      <td>average</td>\n","      <td>Good iPhone but not what i'm looking for</td>\n","      <td>[0.029704388231039047, -0.014912066049873829, ...</td>\n","      <td>0.616338</td>\n","    </tr>\n","    <tr>\n","      <th>21</th>\n","      <td>313</td>\n","      <td>average</td>\n","      <td>By mistake, I bought a wrong iPhone. It can no...</td>\n","      <td>[By mistake, I bought a wrong iPhone., It can ...</td>\n","      <td>average</td>\n","      <td>By mistake, I bought a wrong iPhone. 
It can no...</td>\n","      <td>[0.012942199595272541, 0.06830665469169617, 0....</td>\n","      <td>0.847154</td>\n","    </tr>\n","    <tr>\n","      <th>22</th>\n","      <td>1344</td>\n","      <td>good</td>\n","      <td>Â The phone is an unlocked phone and the suppo...</td>\n","      <td>[Â The phone is an unlocked phone and the supp...</td>\n","      <td>good</td>\n","      <td>Â The phone is an unlocked phone and the suppo...</td>\n","      <td>[0.05089927092194557, -0.04746871441602707, 0....</td>\n","      <td>0.687788</td>\n","    </tr>\n","    <tr>\n","      <th>23</th>\n","      <td>577</td>\n","      <td>average</td>\n","      <td>Screen quality is good. Overall, the phone is ...</td>\n","      <td>[Screen quality is good., Overall, the phone i...</td>\n","      <td>average</td>\n","      <td>Screen quality is good. Overall, the phone is ...</td>\n","      <td>[0.050077296793460846, 0.004506402648985386, 0...</td>\n","      <td>0.897357</td>\n","    </tr>\n","    <tr>\n","      <th>24</th>\n","      <td>687</td>\n","      <td>good</td>\n","      <td>Good</td>\n","      <td>[Good]</td>\n","      <td>good</td>\n","      <td>Good</td>\n","      <td>[0.02093948796391487, -0.02558555267751217, 0....</td>\n","      <td>0.620550</td>\n","    </tr>\n","    <tr>\n","      <th>25</th>\n","      <td>845</td>\n","      <td>good</td>\n","      <td>My Dad loves it - works great with no confusin...</td>\n","      <td>[My Dad loves it - works great with no confusi...</td>\n","      <td>good</td>\n","      <td>My Dad loves it - works great with no confusin...</td>\n","      <td>[0.049614597111940384, -0.010910050012171268, ...</td>\n","      <td>0.615326</td>\n","    </tr>\n","    <tr>\n","      <th>26</th>\n","      <td>625</td>\n","      <td>good</td>\n","      <td>I purchased this phone as a gift and we were v...</td>\n","      <td>[I purchased this phone as a gift and we were ...</td>\n","      <td>good</td>\n","      <td>I purchased this phone as a gift and we 
were v...</td>\n","      <td>[0.059720199555158615, -0.003761302912607789, ...</td>\n","      <td>0.617060</td>\n","    </tr>\n","    <tr>\n","      <th>27</th>\n","      <td>952</td>\n","      <td>good</td>\n","      <td>very responsible seller and excellent product</td>\n","      <td>[very responsible seller and excellent product]</td>\n","      <td>good</td>\n","      <td>very responsible seller and excellent product</td>\n","      <td>[0.06642645597457886, -0.0896696224808693, -0....</td>\n","      <td>0.746512</td>\n","    </tr>\n","    <tr>\n","      <th>28</th>\n","      <td>1317</td>\n","      <td>poor</td>\n","      <td>it's not all good... it's basically a Chinese ...</td>\n","      <td>[it's not all good., .. it's basically a Chine...</td>\n","      <td>good</td>\n","      <td>it's not all good... it's basically a Chinese ...</td>\n","      <td>[0.034627024084329605, 0.005761968903243542, -...</td>\n","      <td>0.602084</td>\n","    </tr>\n","    <tr>\n","      <th>29</th>\n","      <td>159</td>\n","      <td>average</td>\n","      <td>The power button was broke even though shippin...</td>\n","      <td>[The power button was broke even though shippi...</td>\n","      <td>average</td>\n","      <td>The power button was broke even though shippin...</td>\n","      <td>[0.047238051891326904, 0.05953611060976982, -0...</td>\n","      <td>0.917319</td>\n","    </tr>\n","    <tr>\n","      <th>30</th>\n","      <td>160</td>\n","      <td>average</td>\n","      <td>Very nice phone! Just wish the screen was bigg...</td>\n","      <td>[Very nice phone!,  Just wish the screen was b...</td>\n","      <td>average</td>\n","      <td>Very nice phone! 
Just wish the screen was bigg...</td>\n","      <td>[0.045016467571258545, -0.0780835747718811, -0...</td>\n","      <td>0.678112</td>\n","    </tr>\n","    <tr>\n","      <th>31</th>\n","      <td>720</td>\n","      <td>good</td>\n","      <td>very good phone.....u can go for it.....</td>\n","      <td>[very good phone., .., ..u can go for it., ....]</td>\n","      <td>good</td>\n","      <td>very good phone.....u can go for it.....</td>\n","      <td>[0.03792264685034752, -0.037515196949243546, -...</td>\n","      <td>0.673674</td>\n","    </tr>\n","    <tr>\n","      <th>32</th>\n","      <td>550</td>\n","      <td>average</td>\n","      <td>I've had this phone for a couple of days now; ...</td>\n","      <td>[I've had this phone for a couple of days now;...</td>\n","      <td>average</td>\n","      <td>I've had this phone for a couple of days now; ...</td>\n","      <td>[0.050961945205926895, -0.052520379424095154, ...</td>\n","      <td>0.997278</td>\n","    </tr>\n","    <tr>\n","      <th>33</th>\n","      <td>195</td>\n","      <td>good</td>\n","      <td>I needed to replace my cellphone. My cellphone...</td>\n","      <td>[I needed to replace my cellphone., My cellpho...</td>\n","      <td>good</td>\n","      <td>I needed to replace my cellphone. My cellphone...</td>\n","      <td>[0.05979858711361885, -0.028943181037902832, -...</td>\n","      <td>0.752725</td>\n","    </tr>\n","    <tr>\n","      <th>34</th>\n","      <td>838</td>\n","      <td>average</td>\n","      <td>The phone is a good alternative to the IPhone ...</td>\n","      <td>[The phone is a good alternative to the IPhone...</td>\n","      <td>good</td>\n","      <td>The phone is a good alternative to the IPhone ...</td>\n","      <td>[0.03965594992041588, 0.014275088906288147, -0...</td>\n","      <td>0.555583</td>\n","    </tr>\n","    <tr>\n","      <th>35</th>\n","      <td>256</td>\n","      <td>poor</td>\n","      <td>unfortunately phone did not open. 
so it dosen ...</td>\n","      <td>[unfortunately phone did not open., so it dose...</td>\n","      <td>average</td>\n","      <td>unfortunately phone did not open. so it dosen ...</td>\n","      <td>[0.05303366482257843, -0.03875928372144699, -0...</td>\n","      <td>0.956713</td>\n","    </tr>\n","    <tr>\n","      <th>36</th>\n","      <td>737</td>\n","      <td>good</td>\n","      <td>very good cell</td>\n","      <td>[very good cell]</td>\n","      <td>good</td>\n","      <td>very good cell</td>\n","      <td>[0.03370168060064316, -0.007878374308347702, 0...</td>\n","      <td>0.705309</td>\n","    </tr>\n","    <tr>\n","      <th>37</th>\n","      <td>1451</td>\n","      <td>good</td>\n","      <td>Excelente</td>\n","      <td>[Excelente]</td>\n","      <td>good</td>\n","      <td>Excelente</td>\n","      <td>[0.032463133335113525, -0.01719777286052704, -...</td>\n","      <td>0.688769</td>\n","    </tr>\n","    <tr>\n","      <th>38</th>\n","      <td>149</td>\n","      <td>average</td>\n","      <td>The case was more beat up than expected. We ha...</td>\n","      <td>[The case was more beat up than expected., We ...</td>\n","      <td>average</td>\n","      <td>The case was more beat up than expected. We ha...</td>\n","      <td>[0.05264990031719208, 0.03365705534815788, -0....</td>\n","      <td>0.997753</td>\n","    </tr>\n","    <tr>\n","      <th>39</th>\n","      <td>382</td>\n","      <td>good</td>\n","      <td>This is an amazing phone. I purchased the 6in ...</td>\n","      <td>[This is an amazing phone., I purchased the 6i...</td>\n","      <td>good</td>\n","      <td>This is an amazing phone. 
I purchased the 6in ...</td>\n","      <td>[0.04984811693429947, -0.012613500468432903, 0...</td>\n","      <td>0.717286</td>\n","    </tr>\n","    <tr>\n","      <th>40</th>\n","      <td>1041</td>\n","      <td>poor</td>\n","      <td>I am disappointed with The screen I love purpl...</td>\n","      <td>[I am disappointed with The screen I love purp...</td>\n","      <td>average</td>\n","      <td>I am disappointed with The screen I love purpl...</td>\n","      <td>[-0.045780692249536514, -0.07789155095815659, ...</td>\n","      <td>0.990542</td>\n","    </tr>\n","    <tr>\n","      <th>41</th>\n","      <td>835</td>\n","      <td>poor</td>\n","      <td>worst phone ever, gets hot while playing games...</td>\n","      <td>[worst phone ever, gets hot while playing game...</td>\n","      <td>average</td>\n","      <td>worst phone ever, gets hot while playing games...</td>\n","      <td>[0.01746101677417755, -0.0010333916870877147, ...</td>\n","      <td>0.999289</td>\n","    </tr>\n","    <tr>\n","      <th>42</th>\n","      <td>1286</td>\n","      <td>good</td>\n","      <td>It's a nice phone for such a low price</td>\n","      <td>[It's a nice phone for such a low price]</td>\n","      <td>good</td>\n","      <td>It's a nice phone for such a low price</td>\n","      <td>[0.0707060918211937, 0.011986343190073967, -0....</td>\n","      <td>0.679445</td>\n","    </tr>\n","    <tr>\n","      <th>43</th>\n","      <td>878</td>\n","      <td>average</td>\n","      <td>Works well except for the battery which doesn'...</td>\n","      <td>[Works well except for the battery which doesn...</td>\n","      <td>average</td>\n","      <td>Works well except for the battery which doesn'...</td>\n","      <td>[0.0558432899415493, 0.059432461857795715, -0....</td>\n","      <td>0.997743</td>\n","    </tr>\n","    <tr>\n","      <th>44</th>\n","      <td>450</td>\n","      <td>average</td>\n","      <td>Perfect for my husband. 
He calls and texts and...</td>\n","      <td>[Perfect for my husband., He calls and texts a...</td>\n","      <td>good</td>\n","      <td>Perfect for my husband. He calls and texts and...</td>\n","      <td>[0.05822255462408066, 0.006592782214283943, -0...</td>\n","      <td>0.534327</td>\n","    </tr>\n","    <tr>\n","      <th>45</th>\n","      <td>608</td>\n","      <td>average</td>\n","      <td>Bery good products</td>\n","      <td>[Bery good products]</td>\n","      <td>good</td>\n","      <td>Bery good products</td>\n","      <td>[0.07460196316242218, -0.01939500868320465, -0...</td>\n","      <td>0.702660</td>\n","    </tr>\n","    <tr>\n","      <th>46</th>\n","      <td>1040</td>\n","      <td>good</td>\n","      <td>Nice phone. Easy to read screen. I am a senior...</td>\n","      <td>[Nice phone., Easy to read screen., I am a sen...</td>\n","      <td>good</td>\n","      <td>Nice phone. Easy to read screen. I am a senior...</td>\n","      <td>[0.04795455560088158, 0.04623281955718994, 0.0...</td>\n","      <td>0.619675</td>\n","    </tr>\n","    <tr>\n","      <th>47</th>\n","      <td>1436</td>\n","      <td>poor</td>\n","      <td>Very slow and my provider said it is an old ph...</td>\n","      <td>[Very slow and my provider said it is an old p...</td>\n","      <td>average</td>\n","      <td>Very slow and my provider said it is an old ph...</td>\n","      <td>[0.015130952000617981, 0.0005779814091511071, ...</td>\n","      <td>0.527912</td>\n","    </tr>\n","    <tr>\n","      <th>48</th>\n","      <td>199</td>\n","      <td>good</td>\n","      <td>I love it!</td>\n","      <td>[I love it!]</td>\n","      <td>good</td>\n","      <td>I love it!</td>\n","      <td>[0.01370970718562603, -0.07495597004890442, -0...</td>\n","      <td>0.701215</td>\n","    </tr>\n","    <tr>\n","      <th>49</th>\n","      <td>312</td>\n","      <td>good</td>\n","      <td>Even the instruction book is better than any I...</td>\n","      <td>[Even the instruction book is 
better than any ...</td>\n","      <td>good</td>\n","      <td>Even the instruction book is better than any I...</td>\n","      <td>[0.025209710001945496, 0.044519584625959396, 0...</td>\n","      <td>0.718504</td>\n","    </tr>\n","  </tbody>\n","</table>\n","</div>"],"text/plain":["    origin_index  ... trained_classifier_confidence_confidence\n","0           1045  ...                                 0.688769\n","1            501  ...                                 0.671200\n","2            539  ...                                 0.964706\n","3           1073  ...                                 0.896623\n","4            191  ...                                 0.657733\n","5            237  ...                                 0.801204\n","6           1357  ...                                 0.644998\n","7            543  ...                                 0.733167\n","8            947  ...                                 0.481921\n","9           1178  ...                                 0.641099\n","10           729  ...                                 0.733164\n","11           570  ...                                 0.679282\n","12           466  ...                                 0.492642\n","13            80  ...                                 0.929107\n","14          1284  ...                                 0.963391\n","15          1414  ...                                 0.592771\n","16            29  ...                                 0.989909\n","17           424  ...                                 0.568540\n","18           250  ...                                 0.990839\n","19          1017  ...                                 0.527347\n","20           629  ...                                 0.616338\n","21           313  ...                                 0.847154\n","22          1344  ...                                 0.687788\n","23           577  ...                                 0.897357\n","24           687  ...                   
              0.620550\n","25           845  ...                                 0.615326\n","26           625  ...                                 0.617060\n","27           952  ...                                 0.746512\n","28          1317  ...                                 0.602084\n","29           159  ...                                 0.917319\n","30           160  ...                                 0.678112\n","31           720  ...                                 0.673674\n","32           550  ...                                 0.997278\n","33           195  ...                                 0.752725\n","34           838  ...                                 0.555583\n","35           256  ...                                 0.956713\n","36           737  ...                                 0.705309\n","37          1451  ...                                 0.688769\n","38           149  ...                                 0.997753\n","39           382  ...                                 0.717286\n","40          1041  ...                                 0.990542\n","41           835  ...                                 0.999289\n","42          1286  ...                                 0.679445\n","43           878  ...                                 0.997743\n","44           450  ...                                 0.534327\n","45           608  ...                                 0.702660\n","46          1040  ...                                 0.619675\n","47          1436  ...                                 0.527912\n","48           199  ...                                 0.701215\n","49           312  ...                                 0.718504\n","\n","[50 rows x 8 columns]"]},"metadata":{"tags":[]},"execution_count":4}]},{"cell_type":"markdown","metadata":{"id":"lVyOE2wV0fw_"},"source":["# 4. 
Test the fitted pipe on a new example"]},{"cell_type":"code","metadata":{"colab":{"base_uri":"https://localhost:8080/","height":80},"id":"qdCUg2MR0PD2","executionInfo":{"elapsed":253944,"status":"ok","timestamp":1620190666255,"user":{"displayName":"ahmed lone","photoUrl":"","userId":"02458088882398909889"},"user_tz":-300},"outputId":"9f968556-c44d-43a7-d0bf-771752741a02"},"source":["fitted_pipe.predict(\"It worked perfectly .\")"],"execution_count":null,"outputs":[{"output_type":"execute_result","data":{"text/html":["<div>\n","<style scoped>\n","    .dataframe tbody tr th:only-of-type {\n","        vertical-align: middle;\n","    }\n","\n","    .dataframe tbody tr th {\n","        vertical-align: top;\n","    }\n","\n","    .dataframe thead th {\n","        text-align: right;\n","    }\n","</style>\n","<table border=\"1\" class=\"dataframe\">\n","  <thead>\n","    <tr style=\"text-align: right;\">\n","      <th></th>\n","      <th>origin_index</th>\n","      <th>document</th>\n","      <th>sentence</th>\n","      <th>trained_classifier</th>\n","      <th>sentence_embedding_use</th>\n","      <th>trained_classifier_confidence_confidence</th>\n","    </tr>\n","  </thead>\n","  <tbody>\n","    <tr>\n","      <th>0</th>\n","      <td>0</td>\n","      <td>It worked perfectly .</td>\n","      <td>[It worked perfectly .]</td>\n","      <td>average</td>\n","      <td>[0.01656321808695793, 0.0024238349869847298, -...</td>\n","      <td>0.746443</td>\n","    </tr>\n","  </tbody>\n","</table>\n","</div>"],"text/plain":["   origin_index  ... trained_classifier_confidence_confidence\n","0             0  ...                                 0.746443\n","\n","[1 rows x 6 columns]"]},"metadata":{"tags":[]},"execution_count":5}]},{"cell_type":"markdown","metadata":{"id":"xflpwrVjjBVD"},"source":["## 5. 
Configure pipe training parameters"]},{"cell_type":"code","metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"UtsAUGTmOTms","executionInfo":{"elapsed":253945,"status":"ok","timestamp":1620190666258,"user":{"displayName":"ahmed lone","photoUrl":"","userId":"02458088882398909889"},"user_tz":-300},"outputId":"839a590f-91a6-4da7-e4cc-e88d11f19347"},"source":["trainable_pipe.print_info()"],"execution_count":null,"outputs":[{"output_type":"stream","text":["The following parameters are configurable for this NLU pipeline (You can copy paste the examples) :\n",">>> pipe['classifier_dl'] has settable params:\n","pipe['classifier_dl'].setMaxEpochs(3)                | Info: Maximum number of epochs to train | Currently set to : 3\n","pipe['classifier_dl'].setLr(0.005)                   | Info: Learning Rate | Currently set to : 0.005\n","pipe['classifier_dl'].setBatchSize(64)               | Info: Batch size | Currently set to : 64\n","pipe['classifier_dl'].setDropout(0.5)                | Info: Dropout coefficient | Currently set to : 0.5\n","pipe['classifier_dl'].setEnableOutputLogs(True)      | Info: Whether to use stdout in addition to Spark logs. | Currently set to : True\n",">>> pipe['use@tfhub_use'] has settable params:\n","pipe['use@tfhub_use'].setDimension(512)              | Info: Number of embedding dimensions | Currently set to : 512\n","pipe['use@tfhub_use'].setLoadSP(False)               | Info: Whether to load SentencePiece ops file which is required only by multi-lingual models. This is not changeable after it's set with a pretrained model nor it is compatible with Windows. 
| Currently set to : False\n","pipe['use@tfhub_use'].setStorageRef('tfhub_use')     | Info: unique reference name for identification | Currently set to : tfhub_use\n",">>> pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'] has settable params:\n","pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setExplodeSentences(False)  | Info: whether to explode each sentence into a different row, for better parallelization. Defaults to false. | Currently set to : False\n","pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setStorageRef('SentenceDetectorDLModel_c83c27f46b97')  | Info: storage unique identifier | Currently set to : SentenceDetectorDLModel_c83c27f46b97\n","pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setEncoder(com.johnsnowlabs.nlp.annotators.sentence_detector_dl.SentenceDetectorDLEncoder@1632a50d)  | Info: Data encoder | Currently set to : com.johnsnowlabs.nlp.annotators.sentence_detector_dl.SentenceDetectorDLEncoder@1632a50d\n","pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setImpossiblePenultimates(['Bros', 'No', 'al', 'vs', 'etc', 'Fig', 'Dr', 'Prof', 'PhD', 'MD', 'Co', 'Corp', 'Inc', 'bros', 'VS', 'Vs', 'ETC', 'fig', 'dr', 'prof', 'PHD', 'phd', 'md', 'co', 'corp', 'inc', 'Jan', 'Feb', 'Mar', 'Apr', 'Jul', 'Aug', 'Sep', 'Sept', 'Oct', 'Nov', 'Dec', 'St', 'st', 'AM', 'PM', 'am', 'pm', 'e.g', 'f.e', 'i.e'])  | Info: Impossible penultimates | Currently set to : ['Bros', 'No', 'al', 'vs', 'etc', 'Fig', 'Dr', 'Prof', 'PhD', 'MD', 'Co', 'Corp', 'Inc', 'bros', 'VS', 'Vs', 'ETC', 'fig', 'dr', 'prof', 'PHD', 'phd', 'md', 'co', 'corp', 'inc', 'Jan', 'Feb', 'Mar', 'Apr', 'Jul', 'Aug', 'Sep', 'Sept', 'Oct', 'Nov', 'Dec', 'St', 'st', 'AM', 'PM', 'am', 'pm', 'e.g', 'f.e', 'i.e']\n","pipe['deep_sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setModelArchitecture('cnn')  | Info: Model architecture (CNN) | Currently set to : cnn\n",">>> pipe['document_assembler'] has 
settable params:\n","pipe['document_assembler'].setCleanupMode('shrink')  | Info: possible values: disabled, inplace, inplace_full, shrink, shrink_full, each, each_full, delete_full | Currently set to : shrink\n"],"name":"stdout"}]},{"cell_type":"markdown","metadata":{"id":"2GJdDNV9jEIe"},"source":["## 6. Retrain with new parameters"]},{"cell_type":"code","metadata":{"colab":{"base_uri":"https://localhost:8080/","height":759},"id":"mptfvHx-MMMX","executionInfo":{"elapsed":266759,"status":"ok","timestamp":1620190679075,"user":{"displayName":"ahmed lone","photoUrl":"","userId":"02458088882398909889"},"user_tz":-300},"outputId":"69f0c3de-c61d-45b5-fd3a-96c44d821f42"},"source":["# Train longer!\n","trainable_pipe = nlu.load('train.classifier')\n","trainable_pipe['trainable_classifier_dl'].setMaxEpochs(5)  \n","fitted_pipe = trainable_pipe.fit(train_df.iloc[:100])\n","# predict with the fitted pipeline on the dataset to get predictions\n","preds = fitted_pipe.predict(train_df.iloc[:100],output_level='document')\n","\n","# The sentence detector that is part of the pipe generates some NaNs. 
lets drop them first\n","preds.dropna(inplace=True)\n","from sklearn.metrics import classification_report\n","print(classification_report(preds['y'], preds['classifier_dl']))\n","preds"],"execution_count":null,"outputs":[{"output_type":"stream","text":["              precision    recall  f1-score   support\n","\n","     average       0.00      0.00      0.00        33\n","        good       0.65      0.97      0.78        36\n","        poor       0.57      0.84      0.68        31\n","\n","    accuracy                           0.61       100\n","   macro avg       0.40      0.60      0.48       100\n","weighted avg       0.41      0.61      0.49       100\n","\n"],"name":"stdout"},{"output_type":"execute_result","data":{"text/html":["<div>\n","<style scoped>\n","    .dataframe tbody tr th:only-of-type {\n","        vertical-align: middle;\n","    }\n","\n","    .dataframe tbody tr th {\n","        vertical-align: top;\n","    }\n","\n","    .dataframe thead th {\n","        text-align: right;\n","    }\n","</style>\n","<table border=\"1\" class=\"dataframe\">\n","  <thead>\n","    <tr style=\"text-align: right;\">\n","      <th></th>\n","      <th>origin_index</th>\n","      <th>y</th>\n","      <th>document</th>\n","      <th>sentence</th>\n","      <th>trained_classifier</th>\n","      <th>text</th>\n","      <th>sentence_embedding_use</th>\n","      <th>trained_classifier_confidence_confidence</th>\n","    </tr>\n","  </thead>\n","  <tbody>\n","    <tr>\n","      <th>0</th>\n","      <td>1045</td>\n","      <td>good</td>\n","      <td>excelente</td>\n","      <td>[excelente]</td>\n","      <td>good</td>\n","      <td>excelente</td>\n","      <td>[0.032463133335113525, -0.01719777286052704, -...</td>\n","      <td>0.981568</td>\n","    </tr>\n","    <tr>\n","      <th>1</th>\n","      <td>501</td>\n","      <td>good</td>\n","      <td>Good tanks</td>\n","      <td>[Good tanks]</td>\n","      <td>good</td>\n","      <td>Good tanks</td>\n","      
<td>[-0.04523428902029991, -0.0027615062426775694,...</td>\n","      <td>0.949588</td>\n","    </tr>\n","    <tr>\n","      <th>2</th>\n","      <td>539</td>\n","      <td>poor</td>\n","      <td>My charger does not work.I would like one that...</td>\n","      <td>[My charger does not work., I would like one t...</td>\n","      <td>poor</td>\n","      <td>My charger does not work.I would like one that...</td>\n","      <td>[0.05880051478743553, 0.07787849009037018, -0....</td>\n","      <td>0.827465</td>\n","    </tr>\n","    <tr>\n","      <th>3</th>\n","      <td>1073</td>\n","      <td>poor</td>\n","      <td>Positive: Large screen is good for senior peop...</td>\n","      <td>[Positive: Large screen is good for senior peo...</td>\n","      <td>poor</td>\n","      <td>Positive: Large screen is good for senior peop...</td>\n","      <td>[0.06194938346743584, 0.046952128410339355, -0...</td>\n","      <td>0.659105</td>\n","    </tr>\n","    <tr>\n","      <th>4</th>\n","      <td>191</td>\n","      <td>good</td>\n","      <td>works good for 3G and above. Simple to use. No...</td>\n","      <td>[works good for 3G and above., Simple to use.,...</td>\n","      <td>good</td>\n","      <td>works good for 3G and above. Simple to use. 
No...</td>\n","      <td>[0.07321183383464813, 0.023221753537654877, -0...</td>\n","      <td>0.833691</td>\n","    </tr>\n","    <tr>\n","      <th>...</th>\n","      <td>...</td>\n","      <td>...</td>\n","      <td>...</td>\n","      <td>...</td>\n","      <td>...</td>\n","      <td>...</td>\n","      <td>...</td>\n","      <td>...</td>\n","    </tr>\n","    <tr>\n","      <th>95</th>\n","      <td>276</td>\n","      <td>average</td>\n","      <td>Average phone with very buggy performance</td>\n","      <td>[Average phone with very buggy performance]</td>\n","      <td>poor</td>\n","      <td>Average phone with very buggy performance</td>\n","      <td>[0.02227046899497509, -0.027068903669714928, -...</td>\n","      <td>0.652539</td>\n","    </tr>\n","    <tr>\n","      <th>96</th>\n","      <td>984</td>\n","      <td>good</td>\n","      <td>Great simple to use phone for my mother. Big b...</td>\n","      <td>[Great simple to use phone for my mother., Big...</td>\n","      <td>good</td>\n","      <td>Great simple to use phone for my mother. Big b...</td>\n","      <td>[0.05902329087257385, 0.039018064737319946, 0....</td>\n","      <td>0.956777</td>\n","    </tr>\n","    <tr>\n","      <th>97</th>\n","      <td>1054</td>\n","      <td>good</td>\n","      <td>My husband loves it!! It is simple to use with...</td>\n","      <td>[My husband loves it!!, It is simple to use wi...</td>\n","      <td>good</td>\n","      <td>My husband loves it!! 
It is simple to use with...</td>\n","      <td>[0.05941574648022652, -0.00272793835029006, -0...</td>\n","      <td>0.960578</td>\n","    </tr>\n","    <tr>\n","      <th>98</th>\n","      <td>1153</td>\n","      <td>poor</td>\n","      <td>This phone sucks the data is terrible if you h...</td>\n","      <td>[This phone sucks the data is terrible if you ...</td>\n","      <td>poor</td>\n","      <td>This phone sucks the data is terrible if you h...</td>\n","      <td>[0.05509388446807861, 0.005729800555855036, -0...</td>\n","      <td>0.905695</td>\n","    </tr>\n","    <tr>\n","      <th>99</th>\n","      <td>732</td>\n","      <td>average</td>\n","      <td>The phone arrived on time as indicated (That i...</td>\n","      <td>[The phone arrived on time as indicated (That ...</td>\n","      <td>poor</td>\n","      <td>The phone arrived on time as indicated (That i...</td>\n","      <td>[0.022888457402586937, -0.02594594471156597, -...</td>\n","      <td>0.832351</td>\n","    </tr>\n","  </tbody>\n","</table>\n","<p>100 rows × 8 columns</p>\n","</div>"],"text/plain":["    origin_index  ... trained_classifier_confidence_confidence\n","0           1045  ...                                 0.981568\n","1            501  ...                                 0.949588\n","2            539  ...                                 0.827465\n","3           1073  ...                                 0.659105\n","4            191  ...                                 0.833691\n","..           ...  ...                                      ...\n","95           276  ...                                 0.652539\n","96           984  ...                                 0.956777\n","97          1054  ...                                 0.960578\n","98          1153  ...                                 0.905695\n","99           732  ...                                 
0.832351\n","\n","[100 rows x 8 columns]"]},"metadata":{"tags":[]},"execution_count":7}]},{"cell_type":"markdown","metadata":{"id":"qFoT-s1MjTSS"},"source":["# 7. Try training with different embeddings"]},{"cell_type":"code","metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"nxWFzQOhjWC8","executionInfo":{"elapsed":266753,"status":"ok","timestamp":1620190679077,"user":{"displayName":"ahmed lone","photoUrl":"","userId":"02458088882398909889"},"user_tz":-300},"outputId":"5111811b-a8fa-4c3b-9e90-542712d069bd"},"source":["# We can use nlu.print_components(action='embed_sentence') to see every possible sentence embedding we could use. Let's use BERT!\n","nlu.print_components(action='embed_sentence')"],"execution_count":null,"outputs":[{"output_type":"stream","text":["For language <en> NLU provides the following Models : \n","nlu.load('en.embed_sentence') returns Spark NLP model tfhub_use\n","nlu.load('en.embed_sentence.use') returns Spark NLP model tfhub_use\n","nlu.load('en.embed_sentence.tfhub_use') returns Spark NLP model tfhub_use\n","nlu.load('en.embed_sentence.use.lg') returns Spark NLP model tfhub_use_lg\n","nlu.load('en.embed_sentence.tfhub_use.lg') returns Spark NLP model tfhub_use_lg\n","nlu.load('en.embed_sentence.albert') returns Spark NLP model albert_base_uncased\n","nlu.load('en.embed_sentence.electra') returns Spark NLP model sent_electra_small_uncased\n","nlu.load('en.embed_sentence.electra_small_uncased') returns Spark NLP model sent_electra_small_uncased\n","nlu.load('en.embed_sentence.electra_base_uncased') returns Spark NLP model sent_electra_base_uncased\n","nlu.load('en.embed_sentence.electra_large_uncased') returns Spark NLP model sent_electra_large_uncased\n","nlu.load('en.embed_sentence.bert') returns Spark NLP model sent_bert_base_uncased\n","nlu.load('en.embed_sentence.bert_base_uncased') returns Spark NLP model sent_bert_base_uncased\n","nlu.load('en.embed_sentence.bert_base_cased') returns Spark NLP model 
sent_bert_base_cased\n","nlu.load('en.embed_sentence.bert_large_uncased') returns Spark NLP model sent_bert_large_uncased\n","nlu.load('en.embed_sentence.bert_large_cased') returns Spark NLP model sent_bert_large_cased\n","nlu.load('en.embed_sentence.biobert.pubmed_base_cased') returns Spark NLP model sent_biobert_pubmed_base_cased\n","nlu.load('en.embed_sentence.biobert.pubmed_large_cased') returns Spark NLP model sent_biobert_pubmed_large_cased\n","nlu.load('en.embed_sentence.biobert.pmc_base_cased') returns Spark NLP model sent_biobert_pmc_base_cased\n","nlu.load('en.embed_sentence.biobert.pubmed_pmc_base_cased') returns Spark NLP model sent_biobert_pubmed_pmc_base_cased\n","nlu.load('en.embed_sentence.biobert.clinical_base_cased') returns Spark NLP model sent_biobert_clinical_base_cased\n","nlu.load('en.embed_sentence.biobert.discharge_base_cased') returns Spark NLP model sent_biobert_discharge_base_cased\n","nlu.load('en.embed_sentence.covidbert.large_uncased') returns Spark NLP model sent_covidbert_large_uncased\n","nlu.load('en.embed_sentence.small_bert_L2_128') returns Spark NLP model sent_small_bert_L2_128\n","nlu.load('en.embed_sentence.small_bert_L4_128') returns Spark NLP model sent_small_bert_L4_128\n","nlu.load('en.embed_sentence.small_bert_L6_128') returns Spark NLP model sent_small_bert_L6_128\n","nlu.load('en.embed_sentence.small_bert_L8_128') returns Spark NLP model sent_small_bert_L8_128\n","nlu.load('en.embed_sentence.small_bert_L10_128') returns Spark NLP model sent_small_bert_L10_128\n","nlu.load('en.embed_sentence.small_bert_L12_128') returns Spark NLP model sent_small_bert_L12_128\n","nlu.load('en.embed_sentence.small_bert_L2_256') returns Spark NLP model sent_small_bert_L2_256\n","nlu.load('en.embed_sentence.small_bert_L4_256') returns Spark NLP model sent_small_bert_L4_256\n","nlu.load('en.embed_sentence.small_bert_L6_256') returns Spark NLP model sent_small_bert_L6_256\n","nlu.load('en.embed_sentence.small_bert_L8_256') returns Spark NLP 
model sent_small_bert_L8_256\n","nlu.load('en.embed_sentence.small_bert_L10_256') returns Spark NLP model sent_small_bert_L10_256\n","nlu.load('en.embed_sentence.small_bert_L12_256') returns Spark NLP model sent_small_bert_L12_256\n","nlu.load('en.embed_sentence.small_bert_L2_512') returns Spark NLP model sent_small_bert_L2_512\n","nlu.load('en.embed_sentence.small_bert_L4_512') returns Spark NLP model sent_small_bert_L4_512\n","nlu.load('en.embed_sentence.small_bert_L6_512') returns Spark NLP model sent_small_bert_L6_512\n","nlu.load('en.embed_sentence.small_bert_L8_512') returns Spark NLP model sent_small_bert_L8_512\n","nlu.load('en.embed_sentence.small_bert_L10_512') returns Spark NLP model sent_small_bert_L10_512\n","nlu.load('en.embed_sentence.small_bert_L12_512') returns Spark NLP model sent_small_bert_L12_512\n","nlu.load('en.embed_sentence.small_bert_L2_768') returns Spark NLP model sent_small_bert_L2_768\n","nlu.load('en.embed_sentence.small_bert_L4_768') returns Spark NLP model sent_small_bert_L4_768\n","nlu.load('en.embed_sentence.small_bert_L6_768') returns Spark NLP model sent_small_bert_L6_768\n","nlu.load('en.embed_sentence.small_bert_L8_768') returns Spark NLP model sent_small_bert_L8_768\n","nlu.load('en.embed_sentence.small_bert_L10_768') returns Spark NLP model sent_small_bert_L10_768\n","nlu.load('en.embed_sentence.small_bert_L12_768') returns Spark NLP model sent_small_bert_L12_768\n","For language <fi> NLU provides the following Models : \n","nlu.load('fi.embed_sentence') returns Spark NLP model sent_bert_finnish_cased\n","nlu.load('fi.embed_sentence.bert.cased') returns Spark NLP model sent_bert_finnish_cased\n","nlu.load('fi.embed_sentence.bert.uncased') returns Spark NLP model sent_bert_finnish_uncased\n","For language <xx> NLU provides the following Models : \n","nlu.load('xx.embed_sentence') returns Spark NLP model sent_bert_multi_cased\n","nlu.load('xx.embed_sentence.bert') returns Spark NLP model 
sent_bert_multi_cased\n","nlu.load('xx.embed_sentence.bert.cased') returns Spark NLP model sent_bert_multi_cased\n","nlu.load('xx.embed_sentence.labse') returns Spark NLP model labse\n"],"name":"stdout"}]},{"cell_type":"code","metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"IKK_Ii_gjJfF","executionInfo":{"status":"ok","timestamp":1620196757175,"user_tz":-300,"elapsed":1376712,"user":{"displayName":"ahmed lone","photoUrl":"","userId":"02458088882398909889"}},"outputId":"1edbc0da-2aee-41f9-f079-f4d124b2ba47"},"source":["trainable_pipe = nlu.load('en.embed_sentence.small_bert_L12_768 train.classifier')\n","# Non-USE sentence embeddings usually need longer training and a smaller learning rate\n","# We could tune the hyperparameters further with methods like grid search\n","# Longer training generally gives higher accuracy\n","trainable_pipe['trainable_classifier_dl'].setMaxEpochs(90)  \n","trainable_pipe['trainable_classifier_dl'].setLr(0.0005) \n","fitted_pipe = trainable_pipe.fit(train_df)\n","# predict with the fitted pipeline on the dataset to get predictions\n","preds = fitted_pipe.predict(train_df,output_level='document')\n","\n","# The sentence detector that is part of the pipe generates some NaNs. 
let's drop them first\n","preds.dropna(inplace=True)\n","from sklearn.metrics import classification_report\n","print(classification_report(preds['y'], preds['classifier_dl']))\n","\n","#preds\n"],"execution_count":null,"outputs":[{"output_type":"stream","text":["sent_small_bert_L12_768 download started this may take some time.\n","Approximate size to download 392.9 MB\n","[OK!]\n","sentence_detector_dl download started this may take some time.\n","Approximate size to download 354.6 KB\n","[OK!]\n","              precision    recall  f1-score   support\n","\n","     average       0.69      0.66      0.68       393\n","        good       0.80      0.85      0.82       415\n","        poor       0.77      0.76      0.76       392\n","\n","    accuracy                           0.76      1200\n","   macro avg       0.75      0.76      0.75      1200\n","weighted avg       0.76      0.76      0.76      1200\n","\n"],"name":"stdout"}]},{"cell_type":"markdown","metadata":{"id":"_1jxw3GnVGlI"},"source":["# 7.1 Evaluate on test data"]},{"cell_type":"code","metadata":{"colab":{"base_uri":"https://localhost:8080/"},"id":"Fxx4yNkNVGFl","executionInfo":{"status":"ok","timestamp":1620196824778,"user_tz":-300,"elapsed":1440688,"user":{"displayName":"ahmed lone","photoUrl":"","userId":"02458088882398909889"}},"outputId":"a8ee67a1-bba9-457f-917d-3e9ccb968f2b"},"source":["preds = fitted_pipe.predict(test_df,output_level='document')\n","\n","# The sentence detector that is part of the pipe generates some NaNs. 
let's drop them first\n","preds.dropna(inplace=True)\n","print(classification_report(preds['y'], preds['classifier_dl']))"],"execution_count":null,"outputs":[{"output_type":"stream","text":["              precision    recall  f1-score   support\n","\n","     average       0.71      0.65      0.68       107\n","        good       0.77      0.86      0.81        85\n","        poor       0.77      0.76      0.77       108\n","\n","    accuracy                           0.75       300\n","   macro avg       0.75      0.76      0.75       300\n","weighted avg       0.75      0.75      0.75       300\n","\n"],"name":"stdout"}]},{"cell_type":"markdown","metadata":{"id":"2BB-NwZUoHSe"},"source":["# 8. Let's save the model"]},{"cell_type":"code","metadata":{"id":"eLex095goHwm","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1620197096185,"user_tz":-300,"elapsed":248696,"user":{"displayName":"ahmed lone","photoUrl":"","userId":"02458088882398909889"}},"outputId":"97597b39-5d93-4dfc-8b4d-852dc625cf0c"},"source":["stored_model_path = './models/classifier_dl_trained' \n","fitted_pipe.save(stored_model_path)"],"execution_count":null,"outputs":[{"output_type":"stream","text":["Stored model in ./model/classifier_dl_trained\n"],"name":"stdout"}]},{"cell_type":"markdown","metadata":{"id":"e_b2DPd4rCiU"},"source":["# 9. Let's load the model from disk.\n","This makes offline NLU usage possible!   
\n","You need to call nlu.load(path=path_to_the_pipe) to load a model/pipeline from disk."]},{"cell_type":"code","metadata":{"id":"SO4uz45MoRgp","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1620197114270,"user_tz":-300,"elapsed":257414,"user":{"displayName":"ahmed lone","photoUrl":"","userId":"02458088882398909889"}},"outputId":"97c5da96-d94f-4d65-8f97-0074c3ba34fd"},"source":["hdd_pipe = nlu.load(path=stored_model_path)\n","\n","preds = hdd_pipe.predict('It worked perfectly.')\n","preds"],"execution_count":null,"outputs":[{"output_type":"execute_result","data":{"text/html":["<div>\n","<style scoped>\n","    .dataframe tbody tr th:only-of-type {\n","        vertical-align: middle;\n","    }\n","\n","    .dataframe tbody tr th {\n","        vertical-align: top;\n","    }\n","\n","    .dataframe thead th {\n","        text-align: right;\n","    }\n","</style>\n","<table border=\"1\" class=\"dataframe\">\n","  <thead>\n","    <tr style=\"text-align: right;\">\n","      <th></th>\n","      <th>origin_index</th>\n","      <th>from_disk_confidence_confidence</th>\n","      <th>text</th>\n","      <th>sentence_embedding_from_disk</th>\n","      <th>from_disk</th>\n","      <th>document</th>\n","      <th>sentence</th>\n","    </tr>\n","  </thead>\n","  <tbody>\n","    <tr>\n","      <th>0</th>\n","      <td>8589934592</td>\n","      <td>[0.8648654]</td>\n","      <td>It worked perfectly.</td>\n","      <td>[[0.27597182989120483, 0.4924651086330414, 0.2...</td>\n","      <td>[good]</td>\n","      <td>It worked perfectly.</td>\n","      <td>[It worked perfectly.]</td>\n","    </tr>\n","  </tbody>\n","</table>\n","</div>"],"text/plain":["   origin_index  ...                sentence\n","0    8589934592  ...  
[It worked perfectly.]\n","\n","[1 rows x 7 columns]"]},"metadata":{"tags":[]},"execution_count":11}]},{"cell_type":"code","metadata":{"id":"e0CVlkk9v6Qi","colab":{"base_uri":"https://localhost:8080/"},"executionInfo":{"status":"ok","timestamp":1620197114272,"user_tz":-300,"elapsed":257161,"user":{"displayName":"ahmed lone","photoUrl":"","userId":"02458088882398909889"}},"outputId":"db2a6ebb-dc50-4d5e-b583-5b6fcde5b3f7"},"source":["hdd_pipe.print_info()"],"execution_count":null,"outputs":[{"output_type":"stream","text":["The following parameters are configurable for this NLU pipeline (You can copy paste the examples) :\n",">>> pipe['document_assembler'] has settable params:\n","pipe['document_assembler'].setCleanupMode('shrink')                                     | Info: possible values: disabled, inplace, inplace_full, shrink, shrink_full, each, each_full, delete_full | Currently set to : shrink\n",">>> pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'] has settable params:\n","pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setExplodeSentences(False)  | Info: whether to explode each sentence into a different row, for better parallelization. Defaults to false. 
| Currently set to : False\n","pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setStorageRef('SentenceDetectorDLModel_c83c27f46b97')  | Info: storage unique identifier | Currently set to : SentenceDetectorDLModel_c83c27f46b97\n","pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setEncoder(com.johnsnowlabs.nlp.annotators.sentence_detector_dl.SentenceDetectorDLEncoder@12a61521)  | Info: Data encoder | Currently set to : com.johnsnowlabs.nlp.annotators.sentence_detector_dl.SentenceDetectorDLEncoder@12a61521\n","pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setImpossiblePenultimates(['Bros', 'No', 'al', 'vs', 'etc', 'Fig', 'Dr', 'Prof', 'PhD', 'MD', 'Co', 'Corp', 'Inc', 'bros', 'VS', 'Vs', 'ETC', 'fig', 'dr', 'prof', 'PHD', 'phd', 'md', 'co', 'corp', 'inc', 'Jan', 'Feb', 'Mar', 'Apr', 'Jul', 'Aug', 'Sep', 'Sept', 'Oct', 'Nov', 'Dec', 'St', 'st', 'AM', 'PM', 'am', 'pm', 'e.g', 'f.e', 'i.e'])  | Info: Impossible penultimates | Currently set to : ['Bros', 'No', 'al', 'vs', 'etc', 'Fig', 'Dr', 'Prof', 'PhD', 'MD', 'Co', 'Corp', 'Inc', 'bros', 'VS', 'Vs', 'ETC', 'fig', 'dr', 'prof', 'PHD', 'phd', 'md', 'co', 'corp', 'inc', 'Jan', 'Feb', 'Mar', 'Apr', 'Jul', 'Aug', 'Sep', 'Sept', 'Oct', 'Nov', 'Dec', 'St', 'st', 'AM', 'PM', 'am', 'pm', 'e.g', 'f.e', 'i.e']\n","pipe['sentence_detector@SentenceDetectorDLModel_c83c27f46b97'].setModelArchitecture('cnn')  | Info: Model architecture (CNN) | Currently set to : cnn\n",">>> pipe['bert_sentence@sent_small_bert_L12_768'] has settable params:\n","pipe['bert_sentence@sent_small_bert_L12_768'].setBatchSize(8)                           | Info: Size of every batch | Currently set to : 8\n","pipe['bert_sentence@sent_small_bert_L12_768'].setCaseSensitive(False)                   | Info: whether to ignore case in tokens for embeddings matching | Currently set to : False\n","pipe['bert_sentence@sent_small_bert_L12_768'].setDimension(768)                         | Info: Number of embedding dimensions 
| Currently set to : 768\n","pipe['bert_sentence@sent_small_bert_L12_768'].setMaxSentenceLength(128)                 | Info: Max sentence length to process | Currently set to : 128\n","pipe['bert_sentence@sent_small_bert_L12_768'].setIsLong(False)                          | Info: Use Long type instead of Int type for inputs buffer - Some Bert models require Long instead of Int. | Currently set to : False\n","pipe['bert_sentence@sent_small_bert_L12_768'].setStorageRef('sent_small_bert_L12_768')  | Info: unique reference name for identification | Currently set to : sent_small_bert_L12_768\n",">>> pipe['classifier_dl@sent_small_bert_L12_768'] has settable params:\n","pipe['classifier_dl@sent_small_bert_L12_768'].setClasses(['average', 'poor', 'good'])   | Info: get the tags used to trained this ClassifierDLModel | Currently set to : ['average', 'poor', 'good']\n","pipe['classifier_dl@sent_small_bert_L12_768'].setStorageRef('sent_small_bert_L12_768')  | Info: unique reference name for identification | Currently set to : sent_small_bert_L12_768\n"],"name":"stdout"}]},{"cell_type":"code","metadata":{"id":"SShSTNXA_AJ2"},"source":[""],"execution_count":null,"outputs":[]}]}