# EA Assignment 00 - Project Definition

__Authored by: Álvaro Bartolomé del Canto (alvarobartt @ GitHub)__

---

<img src="https://media-exp1.licdn.com/dms/image/C561BAQFjp6F5hjzDhg/company-background_10000/0?e=2159024400&v=beta&t=OfpXJFCHCqdhcTu7Ud-lediwihm0cANad1Kc_8JcMpA">

## Project Overview

__The goal of the test is to work with a multi-language dataset in order to demonstrate your Natural Language Processing and Machine Translation abilities.__ The Core Data Scientist and Storytelling attributes will also be evaluated during your resolution of the case.

`About the Data`: The dataset you will be using is a multilingual, multi-context set of documents, which is part of the one described in the following paper: _Ferrero, Jérémy & Agnès, Frédéric & Besacier, Laurent & Schwab, Didier. (2016). A Multilingual, Multi-Style and Multi-Granularity Dataset for Cross-Language Textual Similarity Detection._ Please note that the dataset is divided into contexts/categories (Conference_papers, Wikipedia, ...) and into languages, following the same structure as the folders.

`Objective 1`: Create a document categorization classifier for the different contexts of the documents. You will be addressing this objective at context level, regardless of the language the documents are written in.

Tasks/Requirements:

* EDA: Exploratory data analysis of the dataset.
* Reproducibility/Methodology: The analysis you provide must be reproducible and must follow the Data Science methodology.
* Classification model: The deliverable will include a model which receives a document as input and outputs its class, i.e. the context of that document.

`Objective 2`: Perform a topic model analysis on the provided documents. You will discover the hidden topics and describe them.

Tasks:

* Profile the different documents and topics.
* Provide a visualization of the profiles.

---

## Project Analysis

We need to create a text classification model to tackle the problem of classifying documents into their correct context, which in this case corresponds to the source they come from: Wikipedia, Conference Papers, APR (Amazon Product Reviews) and PAN11 (PAN-PC-11). The documents are written in three different languages: English (en), French (fr) and Spanish (es). This means that we will need to design a text classification model which, regardless of the language a document is written in, is able to classify it into the context it comes from. To tackle this NLP problem we will carry out a complete piece of research, detailing all the Data Science steps required, so as to finally produce a model which performs this classification.

Additionally, as the second objective we should also perform a topic modelling analysis using any topic modelling algorithm such as LDA (Latent Dirichlet Allocation), LSA/LSI (Latent Semantic Analysis/Indexing) or NMF (Non-Negative Matrix Factorization), which are unsupervised models used to automatically identify the different topics in a given collection of documents. A minimal illustration of this step is sketched at the end of this section.

__Note__: every task should be properly documented in Jupyter Notebooks, using clear formatting and presenting every task in a proper storytelling way.
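As a quick illustration of what the topic-modelling step mentioned above might look like, the following is a minimal sketch using scikit-learn's `LatentDirichletAllocation`; the toy corpus and the number of topics are assumptions for illustration only and are not part of the provided dataset.

```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy documents standing in for the (preprocessed) real corpus
docs = [
    "the model is trained on a corpus of scientific papers",
    "the camera takes sharp pictures even in low light",
    "the experiments were run on a conference benchmark dataset",
    "battery life is poor and the charger broke after a week",
]

# Bag-of-words counts (English stopwords removed for this toy example)
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Fit an LDA model with an assumed number of topics
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Show the top words of each discovered topic
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {idx}: {', '.join(top)}")
```

In the actual analysis, the number of topics and the choice between LDA, LSA/LSI and NMF would be decided during the research, for example via coherence scores and manual inspection of the topic-word profiles.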
---

## Project Considerations

* Since the texts are written in multiple languages, either a multi-lingual preprocessing pipeline needs to be defined or, if that does not work well, a different preprocessing pipeline needs to be applied depending on the language, which can easily be identified using any language detection Python library. In that case we will need to define at least 3 different NLP preprocessing interfaces (one per language).
* The sources/contexts the texts come from may introduce additional context-specific stopwords, which means that we will also need to clean the most frequent words, not just the default stopwords from the standard listings. For example, words such as Introduction, Abstract, Conclusions or Results tend to be present in every scientific publication, so those words should be removed.
* As the dataset (available at https://www.dropbox.com/s/le9j5whzv3zzgrw/documents_challenge.zip?dl=0) contains a lot of texts, we need to define a preprocessing pipeline which preprocesses each text while it is being loaded, so as to avoid multiple unnecessary FOR loops.
* For the text classification model we will try out scikit-learn models widely used for text classification, such as Multinomial Naive Bayes or LinearSVC (a minimal sketch of such a pipeline is shown after this list). Additionally, if the available resources support it, a Deep Learning framework such as TensorFlow is suggested, since we have a multi-context, multi-lingual dataset and therefore a large input space; if the scikit-learn models do not perform as well as expected, the Deep Learning approach will be taken.
* The main focus should be on the storytelling part more than on the text classification one, since we want to extract useful conclusions so as to later improve the model; a perfect model lacking proper storytelling is hard to reproduce and scale.
* Every Jupyter Notebook should be reproducible, so absolute paths need to be avoided and all the managed data should be available in the GitHub repository.
* More considerations will be added as they arise during the project's research!
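As referenced above, here is a minimal sketch of the kind of scikit-learn text classification pipeline we have in mind; the toy documents and labels are purely illustrative placeholders rather than the real dataset, and the final choice of vectorizer and classifier will be made during the research.

```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Placeholder documents and their context labels (in the real notebook these
# would be loaded and preprocessed from the documents_challenge dataset)
texts = [
    "the history of the city dates back to the roman empire",
    "this product arrived broken and the seller never replied",
    "the battery lasts two days and the screen is great",
    "the river flows through three countries before reaching the sea",
]
contexts = ["Wikipedia", "APR", "APR", "Wikipedia"]

model = Pipeline([
    ("tfidf", TfidfVectorizer()),  # language-agnostic bag-of-words weighting
    ("clf", LinearSVC()),          # one of the classifiers suggested above
])
model.fit(texts, contexts)

# Predict the context of an unseen document
print(model.predict(["the package was damaged on arrival"]))
```

A proper evaluation would of course use a held-out test split per language and per context, with metrics such as macro-averaged F1 reported in the notebooks.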
# File Input and Output

This brief tutorial focuses on the different pyGSTi objects that can be converted to & from text files. Currently, `Model`, `DataSet`, and `MultiDataSet` objects, as well as lists and dictionaries of `Circuit` objects, can be saved to and loaded from text files. All text-based input and output is done via the `pygsti.io` sub-package. When objects have `save` and `load` methods (as `DataSet` objects do), these save and load *binary* formats which are different from the text formats used by the `pygsti.io` routines.

Objects may also be serialized using Python's standard `pickle` package, or into the [JSON](https://www.json.org) and [MessagePack](https://msgpack.org/index.html) formats using `pygsti.io.json` and `pygsti.io.msgpack` respectively (see below for more details). Serialization using these formats has the advantage of preserving all of an object's details (the text formats do this for the most part, but not perfectly), but results in a less- or non-human-readable result.

## Text formats

Below we give examples of saving and loading each type of object to/from text format. Many of these examples appear in other tutorials, but it seemed useful to collect them in one place.

```
import pygsti

#Models ------------------------------------------------------------
model_txt = \
"""
# Example text file describing a model

# State prepared, specified as a state in the Pauli basis (I,X,Y,Z)
PREP: rho0
LiouvilleVec
1/sqrt(2) 0 0 1/sqrt(2)

POVM: Mdefault

# State measured as yes (zero) outcome, also specified as a state in the Pauli basis
EFFECT: 0
LiouvilleVec
1/sqrt(2) 0 0 1/sqrt(2)

EFFECT: 1
LiouvilleVec
1/sqrt(2) 0 0 -1/sqrt(2)

END POVM

GATE: Gi
LiouvilleMx
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1

GATE: Gx
LiouvilleMx
1 0 0 0
0 1 0 0
0 0 0 -1
0 0 1 0

GATE: Gy
LiouvilleMx
1 0 0 0
0 0 0 1
0 0 1 0
0 -1 0 0

BASIS: pp 4
"""
with open("../tutorial_files/TestModel.txt","w") as f:
    f.write(model_txt)

target_model = pygsti.io.load_model("../tutorial_files/TestModel.txt")
pygsti.io.write_model(target_model, "../tutorial_files/TestModel.txt")

#DataSets ------------------------------------------------------------
dataset_txt = \
"""## Columns = 0 count, count total
{} 0 100
Gx 10 90
GxGy 40 60
Gx^4 20 90
"""
with open("../tutorial_files/TestDataSet.txt","w") as f:
    f.write(dataset_txt)

ds = pygsti.io.load_dataset("../tutorial_files/TestDataSet.txt")
pygsti.io.write_dataset("../tutorial_files/TestDataSet.txt", ds)

#MultiDataSets ------------------------------------------------------------
multidataset_txt = \
"""## Columns = DS0 0 count, DS0 1 count, DS1 0 frequency, DS1 count total
{} 0 100 0 100
Gx 10 90 0.1 100
GxGy 40 60 0.4 100
Gx^4 20 80 0.2 100
"""
with open("../tutorial_files/TestMultiDataSet.txt","w") as f:
    f.write(multidataset_txt)

multiDS = pygsti.io.load_multidataset("../tutorial_files/TestMultiDataSet.txt", cache=True)
pygsti.io.write_multidataset("../tutorial_files/TestMultiDataSet.txt", multiDS)

#DataSets w/timestamped data --------------------------------------------
# Note: left of equals sign is letter, right is spam label
tddataset_txt = \
"""## 0 = 0
## 1 = 1
{} 011001
Gx 111000111
Gy 11001100
"""
with open("../tutorial_files/TestTDDataset.txt","w") as f:
    f.write(tddataset_txt)

tdds_fromfile = pygsti.io.load_tddataset("../tutorial_files/TestTDDataset.txt")

#NOTE: currently there's no way to *write* a DataSet w/timestamped data to a text file yet.
#Circuits ------------------------------------------------------------
from pygsti.modelpacks import smq1Q_XY

cList = pygsti.construction.make_lsgst_experiment_list(
    [('Gxpi2',0), ('Gypi2',0)], smq1Q_XY.prep_fiducials(), smq1Q_XY.meas_fiducials(),
    smq1Q_XY.germs(), [1,2,4,8])

pygsti.io.write_circuit_list("../tutorial_files/TestCircuitList.txt", cList, "#Test Circuit List")
pygsti.io.write_empty_dataset("../tutorial_files/TestEmptyDataset.txt", cList) #additionally creates columns of zeros where data should go...

cList2 = pygsti.io.load_circuit_list("../tutorial_files/TestCircuitList.txt")
```

## Serialization to JSON and MSGPACK formats

PyGSTi contains support for reading and writing most (if not all) of its objects from and to the JSON and MessagePack formats. The modules `pygsti.io.json` and `pygsti.io.msgpack` mimic the more general Python `json` and `msgpack` packages (`json` is a standard package; `msgpack` is a separate package which must be installed if you wish to use pyGSTi's MessagePack functionality). These, in turn, mimic the load/dump interface of the standard `pickle` module, so it's very easy to serialize data using any of these formats. Here's a brief summary of the main advantages and disadvantages of each format:

- pickle
  - **Advantages**: a standard Python package; very easy to use; can serialize almost anything.
  - **Disadvantages**: incompatibility between python2 and python3 pickles; can be large on disk (inefficient); not web safe.
- json
  - **Advantages**: a standard Python package; web-safe character set; *should* be the same on python2 or python3.
  - **Disadvantages**: large on disk (inefficient).
- msgpack
  - **Advantages**: *should* be the same on python2 or python3; efficient binary format (small on disk).
  - **Disadvantages**: needs the external `msgpack` package; binary, non-web-safe format.

We now demonstrate how to use the `io.json` and `io.msgpack` modules. Using `pickle` is essentially the same, as pyGSTi objects support being pickled too.
```
import pygsti.io.json as json
import pygsti.io.msgpack as msgpack

#Models
json.dump(target_model, open("../tutorial_files/TestModel.json",'w'))
target_model_from_json = json.load(open("../tutorial_files/TestModel.json"))

msgpack.dump(target_model, open("../tutorial_files/TestModel.mp",'wb'))
target_model_from_msgpack = msgpack.load(open("../tutorial_files/TestModel.mp", 'rb'))

#DataSets
json.dump(ds, open("../tutorial_files/TestDataSet.json",'w'))
ds_from_json = json.load(open("../tutorial_files/TestDataSet.json"))

msgpack.dump(ds, open("../tutorial_files/TestDataSet.mp",'wb'))
ds_from_msgpack = msgpack.load(open("../tutorial_files/TestDataSet.mp",'rb'))

#MultiDataSets
json.dump(multiDS, open("../tutorial_files/TestMultiDataSet.json",'w'))
multiDS_from_json = json.load(open("../tutorial_files/TestMultiDataSet.json"))

msgpack.dump(multiDS, open("../tutorial_files/TestMultiDataSet.mp",'wb'))
multiDS_from_msgpack = msgpack.load(open("../tutorial_files/TestMultiDataSet.mp",'rb'))

# Timestamped-data DataSets
json.dump(tdds_fromfile, open("../tutorial_files/TestTDDataset.json",'w'))
tdds_from_json = json.load(open("../tutorial_files/TestTDDataset.json"))

msgpack.dump(tdds_fromfile, open("../tutorial_files/TestTDDataset.mp",'wb'))
tdds_from_msgpack = msgpack.load(open("../tutorial_files/TestTDDataset.mp",'rb'))

#Circuit Lists
json.dump(cList, open("../tutorial_files/TestCircuitList.json",'w'))
cList_from_json = json.load(open("../tutorial_files/TestCircuitList.json"))

msgpack.dump(cList, open("../tutorial_files/TestCircuitList.mp",'wb'))
cList_from_msgpack = msgpack.load(open("../tutorial_files/TestCircuitList.mp",'rb'))
```
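Since, as noted above, pickling works the same way, here is a minimal sketch using the standard-library `pickle` module on the model created earlier; the `.pkl` file name is arbitrary and chosen only for illustration.

```
import pickle

# Serialize the model to a binary pickle file
with open("../tutorial_files/TestModel.pkl", "wb") as f:
    pickle.dump(target_model, f)

# Load it back again
with open("../tutorial_files/TestModel.pkl", "rb") as f:
    target_model_from_pickle = pickle.load(f)
```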
# Simple Demo - Reviewing WEC Laptime Data

For many forms of motorsport, timing data in the form of laptime data is often made available at the end of the race. This data can be used by fans and sports data journalists alike, as well as teams and drivers, for getting a *post hoc* insight into what actually went on in a race.

*Live laptime data is also typically provided via live timing screens, from which data feeds may be available to teams, although not, typically, to the public. (Data from timing screens can still be scraped unofficially, though...) Having access to laptime history during a race can be useful for reviewing how a race is evolving, plotting strategy, and even predicting likely future race events, such as overtaking possibilities or pit stops due to degrading laptimes.*

This notebook provides a basic demonstration of how to download and analyse [FIA World Endurance Championship (WEC) data from Al Kamel Systems](http://fiawec.alkamelsystems.com), getting a feel for what the data can tell us and how we might encourage it to reveal some of the stories it undoubtedly contains.

Timing info is available as PDF, e.g. [here](https://assets.lemans.org/explorer/pdf/courses/2019/24-heures-du-mans/classification/race/24-heures-du-mans-2019-classification-after-24h.pdf).

CSV data from Al Kamel uses links of the form: `http://fiawec.alkamelsystems.com/Results/08_2018-2019/07_SPA%20FRANCORCHAMPS/267_FIA%20WEC/201905041330_Race/Hour%206/23_Analysis_Race_Hour%206.CSV`

*Links don't seem to appear on e.g. the [classification data](http://fiawec.alkamelsystems.com/) pages? So where else might they be found?*

## Setup

Although Python code often "just runs", there are a couple of things we can do to help improve our workflow, such as configuring the notebook to work in a particular way, loading in Python programming packages that we can call on, and putting resources into specific locations.

```
#Enable inline plots
%matplotlib inline

# pandas is a python package for working with tabular datasets
import pandas as pd

# Add the parent dir to the import path
# This lets us load files in from child directories of the parent directory
# that this notebook is in.
import sys
sys.path.append("../py")

#Import contents of the utils.py package in the ../py directory
from utils import *
```

## Downloading Data

If we know the URL of an online hosted data file, we can download data directly from it. Let's create a variable, `url`, that takes the value of the URL for an Al Kamel Systems timing data CSV file:

```
#Download URL for timing data in CSV format
url = 'http://fiawec.alkamelsystems.com/Results/08_2018-2019/07_SPA%20FRANCORCHAMPS/267_FIA%20WEC/201905041330_Race/Hour%206/23_Analysis_Race_Hour%206.CSV'
```

Download the data into a *pandas* dataframe, dropping any empty columns (`.dropna(how='all', axis=1)`) and then previewing the first few rows (`.head()`):

```
# Create a pandas dataframe from data loaded in directly from a web address
# dropping any empty columns...
laptimes = pd.read_csv(url, sep=';').dropna(how='all', axis=1)

# ...and then previewing the first few lines of the dataset
laptimes.head()
```

We can save the raw data to a local CSV file so we can access it locally at a future time.
For convenience, we can create the filename that we save the file as from the URL by splitting the URL string on each '/' and selecting the last element:

```
# The .split() operator returns a list
# The [-1] indexer selects the last item in the list
outfile_name = url.split('/')[-1]
outfile_name
```

If we want to replace the `%20` encoded value for a *space* character with an *actual* space, we can "unquote" it:

```
from urllib.parse import unquote

outfile_name = unquote(outfile_name)
outfile_name
```

Now let's save the data into the sibling directory `data` as a file with that filename:

```
# The .. path says "use the parent directory"
# So ../data says: use the data directory in the parent directory
laptimes.to_csv('../data/{}'.format(outfile_name))
```

If we have saved the data into a file, we can also load the data back in from the file rather than having to download it again from its online location.

*Uncomment the following code to load the data back in from the local file.*

```
#Load data from local file
#laptimes = pd.read_csv('23_Analysis_Race_Hour 6.csv', sep=';').dropna(how='all', axis=1)
#laptimes.head()
```

It often makes sense to tidy a dataset to make it more usable. For example, we often find that column names may include leading or trailing whitespace in the original datafile, which can make them hard to refer to, so we can rename the columns with any such whitespace stripped out.

```
# Clean the column names of any leading / trailing whitespace
# The [X for Y in Z] construction is known as a "list comprehension"
# It creates a list of values ("[]")
# by iterating through ("for Y in") another list of values ("Z")
# and generates new list values, "X", from each original list value "Y"
laptimes.columns = [c.strip() for c in laptimes.columns]
```

The car and driver numbers are represented in the dataframe as numerical `int` (integer) values. However, it is safer to cast these to `str` (string) values so we don't accidentally perform numerical calculations on them, such as adding them together or finding the "average" value of them...

```
#Tidy the data a little... car and driver number are not numbers
laptimes[['NUMBER','DRIVER_NUMBER']] = laptimes[['NUMBER','DRIVER_NUMBER']].astype(str)

laptimes.head()

# Review the column headings
laptimes.columns
```

The `DRIVER_NUMBER` is relative to a car. It may be convenient to also have a unique driver identifier, `CAR_DRIVER`, which we can construct from the `NUMBER` and `DRIVER_NUMBER` columns:

```
laptimes['CAR_DRIVER'] = laptimes['NUMBER'] + '_' + laptimes['DRIVER_NUMBER']

laptimes[['NUMBER','DRIVER_NUMBER','CAR_DRIVER']].head()
```

## Quick demo laptime chart

The *pandas* package provides a handy `.plot()` method that allows us to plot charts directly from a dataframe. For example, we might want to plot lap time against lap number for each car.

In the original dataframe, lap times are given in the form `minute:second` (for example, `2:06.349`). To plot the laptimes, we need to convert this to a more convenient numeric form, such as the laptime given as a number of seconds. The `utils` module we loaded in earlier contains a `getTime()` function that can perform this conversion for us.
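The `utils` module itself is not shown in this notebook; a `getTime()` along the following lines would do the job. This is a sketch of the assumed behaviour, not the actual implementation in `../py/utils.py`.

```
def getTime(t):
    """Convert a laptime string such as '2:06.349' (or just '56.123') to seconds."""
    parts = str(t).strip().split(':')
    seconds = 0.0
    # Accumulate from the most significant unit down (handles h:m:s, m:s or s)
    for part in parts:
        seconds = seconds * 60 + float(part)
    return seconds

getTime('2:06.349')  # 126.349
```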
We can use the dataframe `.apply()` method to apply that function to each value in the original `LAP_TIME` column and assign it to a new `LAP_TIME_S` column, before previewing a selection of the dataframe where we select out those two columns, `LAP_TIME` and `LAP_TIME_S`:

```
laptimes['LAP_TIME_S'] = laptimes['LAP_TIME'].apply(getTime)

laptimes[['LAP_TIME','LAP_TIME_S']].head()
```

As well as selecting out *columns* of a dataframe, we can also select out rows. For example, we can select out the laptimes for car `1` using the following construction:

```
laptimes[laptimes['NUMBER']=='1'].head()
```

Let's start with a plot that allows us to select a particular car number and plot the laptimes associated with it:

```
laptimes[laptimes['NUMBER']=='1'].plot(x='LAP_NUMBER',y='LAP_TIME_S');
```

We can easily create a function for that, decorated with the *ipywidgets* `interact` function, to create a set of widgets that allows us to select a particular car and plot the laptimes associated with it:

```
from ipywidgets import interact

@interact(number=laptimes['NUMBER'].unique().tolist(),)
def plotLapByNumber(number):
    laptimes[laptimes['NUMBER']==number].plot(x='LAP_NUMBER',y='LAP_TIME_S')
```

We can also highlight which laps were driven by which driver by splitting the data for a particular car out over several columns, one for each driver, and then plotting each driver column using a separate colour.

The table is reshaped using the *pandas* `pivot()` method, setting the lap number as the *index* of the pivoted dataframe and splitting the `LAP_TIME_S` *values* out over several new *columns* that identify each `DRIVER_NUMBER`:

```
laptimes[laptimes['NUMBER']=='1'].pivot(index='LAP_NUMBER',
                                        columns='DRIVER_NUMBER',
                                        values='LAP_TIME_S').head()
```

The `.plot()` command will, by default, plot the values in each column of a dataframe as a separate line against the corresponding index values:

```
@interact(number=laptimes['NUMBER'].unique().tolist(),)
def plotLapByNumberDriver(number):
    # We can pivot long to wide on driver number,
    # then plot all cols against the lapnumber index
    laptimes[laptimes['NUMBER']==number].pivot(index='LAP_NUMBER',
                                               columns='DRIVER_NUMBER',
                                               values='LAP_TIME_S').plot()
```

We can also add annotations to the chart. For example, we might want to identify laps on which the car pitted, so that we can disambiguate slow laps caused by an on-track incident, for example, from laps where the driver went through the pit lane.

By inspection of the original table, we note that there are two columns that provide relevant information: the `CROSSING_FINISH_LINE_IN_PIT` column takes the value 'B' for laps where the car crosses the finish line in the pit, a null value otherwise; and the `PIT_TIME` column takes a value on laps where the car exits the pit lane at the start of a lap, null otherwise.
*(In general, depending on the timing marker, lap times for laps where the car crossed the finish line in the pit may or may not include the pit stop time.)*

```
@interact(number=laptimes['NUMBER'].unique().tolist(), pitentrylap=True)
def plotLapByNumberDriverWithPit(number, pitentrylap):
    # We can pivot long to wide on driver number,
    # then plot all cols against the lapnumber index
    #Grab the matplotlib axes so we can overplot onto them
    ax = laptimes[laptimes['NUMBER']==number].pivot(index='LAP_NUMBER',
                                                    columns='DRIVER_NUMBER',
                                                    values='LAP_TIME_S').plot()

    # Also add in pit laps
    # Filter rows that identify both the car
    # and the laps on which the car crossed the finish line in the pit
    pitcondition = (laptimes['CROSSING_FINISH_LINE_IN_PIT']=='B') if pitentrylap \
                    else ~(laptimes['PIT_TIME'].isnull())
    inpitlaps = laptimes[(laptimes['NUMBER']==number) & (pitcondition) ]

    # Plot a marker for each of those rows
    inpitlaps.plot.scatter(x='LAP_NUMBER', y='LAP_TIME_S', ax=ax)
```

### Inlaps and Outlaps

We can use the pit information to create a convenience column with Boolean values that indicate whether a lap was an in-lap or not (that is, whether the lap was completed in the pit lane). We can also shift this column to create a column that contains an outlap flag. We can decide whether to set the initial (first lap) value to be an outlap (`True`) or not.

*Alternatively, we could set the outlap as laps where there is a non-null value for the pit stop time. This would have a `False` value for the first lap.*

```
#Create a flag to identify when we enter the pit, aka an INLAP
laptimes['INLAP'] = (laptimes['CROSSING_FINISH_LINE_IN_PIT'] == 'B')

#Make no assumptions about table order - so sort by lap number
laptimes = laptimes.sort_values(['NUMBER','LAP_NUMBER'])

# Identify a new stint for each car by shifting the pitting / INLAP flag within car tables
laptimes['OUTLAP'] = laptimes.groupby('NUMBER')['INLAP'].shift(fill_value=True)

#Alternatively, we could define the outlap by laps where there is a non-null pit time
#laptimes['OUTLAP'] = ~laptimes['PIT_TIME'].isnull()

laptimes[['DRIVER_NUMBER', 'INLAP','OUTLAP']].head()
```

## Stint Detection

Looking at the chart of laptimes vs driver number, we see that each car is on track for several distinct contiguous lap periods, which we might describe as "stints". We can identify several simple heuristics for identifying different sorts of stint:

- *driver session*: a session equates to a continuous period in the car, irrespective of whether or not the car pits;
- *car stint*: the laps covered between each pit event; note that a drive-through penalty will be counted as a pit event because the car passed through the pits, even though it did not stop; the same is true of stop-and-go penalties, where the car does stop but no work may be carried out on it;
- *driver stint*: relative to pit stops; that is, a driver stint is a period between pit stops for a particular driver; this may be renumbered for each session?

#### Driver Session

We can identify laps where there was a driver change within a particular car by testing whether or not the driver is the same within a car across consecutive laps, setting an appropriate default value for the first lap of the race.

```
#Also set overall lap = 1 to be a driver change
laptimes['driverchange'] = (~laptimes['DRIVER_NUMBER'].eq(laptimes['DRIVER_NUMBER'].shift())) | (laptimes['LAP_NUMBER']==1)
```

A driver session is then a count, for each driver, of the number of driver-change laps they have been associated with.
The *pandas* `cumsum()` method provides a *cumulative sum* operator that can be applied to the values of a column. When the column is typed as a set of Boolean values, `True` values count `1` and `False` values count `0`:

```
pd.DataFrame({'booleans':[False, True, False, False, True, False]}).cumsum()
```

If we group the rows in the dataframe by driver — that is, generating separate groups of rows that contain the laptimes associated with a single driver — and then apply a `cumsum()` over the `driverchange` column within each group, we get a numeric count of the number of sessions each driver has had.

The *pandas* `groupby()` method can be used to access groups of rows based on one or more column values. For example, we can group rows by the car and driver and then pull out just the rows associated with one group using the `get_group()` method:

```
car_num = '1'
driver_num = '1'

laptimes.groupby(['NUMBER', 'DRIVER_NUMBER']).get_group( (car_num, driver_num) ).head()
```

When applying the `cumsum()` operator to a `groupby()` object, it will be automatically applied to the set of rows associated with each separate group:

```
laptimes['DRIVER_SESSION'] = laptimes.groupby(['NUMBER', 'DRIVER_NUMBER'])['driverchange'].cumsum().astype(int)

#Preview
laptimes[['DRIVER_NUMBER', 'driverchange','DRIVER_SESSION','LAP_NUMBER']][42:48]
```

#### Car Stint

If we define a *car stint* as a period in between pit events, irrespective of driver, we can calculate it simply by counting the number of pit event flags associated with the car.

```
#Create a counter for each pit stop - the pit flag is entering pit at end of stint
# so a new stint applies on the lap after a pit

#Find the car stint based on count of pit stops
laptimes['CAR_STINT'] = laptimes.groupby('NUMBER')['OUTLAP'].cumsum().astype(int)

laptimes[['CROSSING_FINISH_LINE_IN_PIT', 'INLAP', 'OUTLAP', 'CAR_STINT']].head()
```

#### Driver Stint

Defining a *driver stint* as a stint between pit events for a particular driver, we can generate a *driver stint* number for each driver as a cumulative count of outlap flags associated with the driver. To provide a unique stint identifier, we can derive another column that identifies the car, driver and driver stint number:

```
#Driver stint - a cumulative count for each driver of their stints
laptimes['DRIVER_STINT'] = laptimes.groupby('CAR_DRIVER')['OUTLAP'].cumsum().astype(int)

#Let's also derive another identifier - CAR_DRIVER_STINT
laptimes['CAR_DRIVER_STINT'] = laptimes['CAR_DRIVER'] + '_' + laptimes['DRIVER_STINT'].astype(str)

laptimes[['CAR_DRIVER', 'CROSSING_FINISH_LINE_IN_PIT', 'INLAP', 'CAR_STINT',
          'DRIVER_STINT', 'CAR_DRIVER_STINT']].tail(20).head(10)
```

#### Driver Session Stint

Where a driver pits within a driver session, we may want to identify the driver stints within a particular session. This can be calculated as a cumulative sum of outlap flags over driver sessions:

```
#Driver session stint - a count for each driver of their stints within a particular driving session
laptimes['DRIVER_SESSION_STINT'] = laptimes.groupby(['CAR_DRIVER','DRIVER_SESSION'])['OUTLAP'].cumsum().astype(int)

laptimes[['CAR_DRIVER', 'CROSSING_FINISH_LINE_IN_PIT', 'INLAP','CAR_STINT',
          'DRIVER_STINT', 'CAR_DRIVER_STINT', 'DRIVER_SESSION_STINT']].head()
```

## Lap Counts Within Stints

It may be convenient to keep track of lap counts within each of the stint types already identified. We can do this by running cumulative counts on rows within specified row groupings.
Lap counts we can easily tally include counts of:

- *lap count by car stint*: the number of laps between each pit stop;
- *lap count by driver*: the number of laps driven by each driver;
- *lap count by driver session*: the number of laps driven within each driver session;
- *lap count by driver stint*: the number of laps driven by a driver between consecutive pit stops.

```
# lap count by car stint - that is, between each pit stop
laptimes['LAPS_CAR_STINT'] = laptimes.groupby(['NUMBER','CAR_STINT']).cumcount()+1

#lap count by driver
laptimes['LAPS_DRIVER'] = laptimes.groupby('CAR_DRIVER').cumcount()+1

#lap count by driver session
laptimes['LAPS_DRIVER_SESSION'] = laptimes.groupby(['CAR_DRIVER','DRIVER_SESSION']).cumcount()+1

#lap count by driver stint
laptimes['LAPS_DRIVER_STINT'] = laptimes.groupby(['CAR_DRIVER','DRIVER_STINT']).cumcount()+1

laptimes[['LAPS_CAR_STINT', 'LAPS_DRIVER', 'LAPS_DRIVER_SESSION', 'LAPS_DRIVER_STINT']].tail()
```

## Basic Individual Driver Reports

Using the stint identifiers and stint lap counts, we should be able to start creating reports by driver by faceting on individual drivers.

One way of exploring the data is to use an interactive table widget, such as the `qgrid` widget, which allows us to filter the rows displayed directly from the interactive table UI.

*Note: it might also be interesting to do some datasette demos with particular facets, which make it easy to select teams, drivers, etc.*

```
#!pip3 install qgrid
#!jupyter nbextension enable --py --sys-prefix qgrid
import qgrid

qgrid.show_grid(laptimes[['LAP_NUMBER', 'NUMBER', 'CAR_DRIVER', 'INLAP', 'CAR_STINT',
                          'CAR_DRIVER_STINT', 'DRIVER_STINT', 'DRIVER_SESSION', 'DRIVER_SESSION_STINT']])
```

## Simple Stint Reports

Using the various stint details, we can pull together interactive dashboard style views that provide a simple set of widgets to allow us to explore times by car / driver.

```
import ipywidgets as widgets
from ipywidgets import interact
```

For example, we can build a set of linked dropdown widgets that let us pick a car, one of its drivers, a driver session and a stint within that session, and then display the corresponding laptimes as a table:

```
cars = widgets.Dropdown(
    options=laptimes['NUMBER'].unique(),
    # value='1',
    description='Car:',
    disabled=False
)

drivers = widgets.Dropdown(
    options=laptimes[laptimes['NUMBER']==cars.value]['CAR_DRIVER'].unique(),
    description='Driver:',
    disabled=False)

driversessions = widgets.Dropdown(
    options=laptimes[laptimes['CAR_DRIVER']==drivers.value]['DRIVER_SESSION'].unique(),
    description='Session:',
    disabled=False)

driverstints = widgets.Dropdown(
    options=laptimes[laptimes['DRIVER_SESSION']==driversessions.value]['DRIVER_SESSION_STINT'].unique(),
    description='Stint:',
    disabled=False)

def update_drivers(*args):
    driverlist = laptimes[laptimes['NUMBER']==cars.value]['CAR_DRIVER'].unique()
    drivers.options = driverlist

def update_driver_session(*args):
    driversessionlist = laptimes[(laptimes['CAR_DRIVER']==drivers.value)]['DRIVER_SESSION'].unique()
    driversessions.options = driversessionlist

def update_driver_stint(*args):
    driverstintlist = laptimes[(laptimes['CAR_DRIVER']==drivers.value) &
                               (laptimes['DRIVER_SESSION']==driversessions.value)]['DRIVER_SESSION_STINT'].unique()
    driverstints.options = driverstintlist

cars.observe(update_drivers, 'value')
drivers.observe(update_driver_session,'value')
driversessions.observe(update_driver_stint,'value')

def laptime_table(car, driver, driversession, driverstint):
    #just basic for now...
    display(laptimes[(laptimes['CAR_DRIVER']==driver) &
                     (laptimes['DRIVER_SESSION']==driversession) &
                     (laptimes['DRIVER_SESSION_STINT']==driverstint) ][['CAR_DRIVER', 'DRIVER_SESSION',
                                                                        'DRIVER_STINT', 'DRIVER_SESSION_STINT',
                                                                        'LAP_NUMBER','LAP_TIME', 'LAP_TIME_S']])

interact(laptime_table, car=cars, driver=drivers, driversession=driversessions, driverstint=driverstints);
```

We can also plot simple laptime charts over sets of laptimes, such as the laptimes associated with a particular driver's stint.

*In and of themselves, without comparison to the laptimes of other drivers at the same time within a race, these numbers are not necessarily very informative. However, these data manipulations may prove useful building blocks for generating rather more informative reports.*

```
def laptime_chart(car, driver, driversession, driverstint):
    tmp_df = laptimes[(laptimes['CAR_DRIVER']==driver) &
                      (laptimes['DRIVER_SESSION']==driversession) &
                      (laptimes['DRIVER_SESSION_STINT']==driverstint) ][['CAR_DRIVER', 'DRIVER_SESSION',
                                                                         'DRIVER_STINT', 'DRIVER_SESSION_STINT',
                                                                         'LAP_NUMBER','LAP_TIME', 'LAP_TIME_S']]['LAP_TIME_S'].reset_index(drop=True)
    if not tmp_df.empty:
        tmp_df.plot()

interact(laptime_chart, car=cars, driver=drivers, driversession=driversessions, driverstint=driverstints);
```

Slightly more usefully perhaps, for a particular driver we can compare the laptime evolution across all their driver sessions. We can optionally toggle the display of inlaps and outlaps, which are likely to have laptimes that differ from flying-lap laptimes.

```
#Also add check boxes to suppress inlap and outlap?
inlaps = widgets.Checkbox(
    value=True,
    description='Inlap',
    disabled=False
)

outlaps = widgets.Checkbox(
    value=True,
    description='Outlap',
    disabled=False
)

#Plot laptimes by stint for a specified driver
def laptime_charts(car, driver, driversession, inlap, outlap):
    tmp_df = laptimes
    if not inlap:
        tmp_df = tmp_df[~tmp_df['INLAP']]
    if not outlap:
        tmp_df = tmp_df[~tmp_df['OUTLAP']]
    tmp_df = tmp_df[(tmp_df['CAR_DRIVER']==driver) &
                    (tmp_df['DRIVER_SESSION']==driversession) ].pivot(index='LAPS_DRIVER_STINT',
                                                                      columns='DRIVER_SESSION_STINT',
                                                                      values='LAP_TIME_S').reset_index(drop=True)
    if not tmp_df.empty:
        tmp_df.plot()

interact(laptime_charts, car=cars, driver=drivers, driversession=driversessions, inlap=inlaps, outlap=outlaps);
```

## Simple Laptime Evolution Models

Observation of laptime charts might reveal trends in laptime evolution that we can recognise by eye, such as periods where the laptime appears consistent or where the laptime appears to drop off at a consistent rate (that is, the laptime increases by the same amount each lap). If we can spot these trends *by eye*, can we also detect them using statistical analyses, and use numbers to characterise the patterns we see?

Here is an example of creating a simple model using some explicitly pulled out data.

```
def sample_laptimes(df, driver, driversession, driversessionstint=None, inlap=False, outlap=False):
    df = df[(df['CAR_DRIVER']==driver) & (df['DRIVER_SESSION']== driversession)]
    if not inlap:
        df = df[~df['INLAP']]
    if not outlap:
        df = df[~df['OUTLAP']]
    if driversessionstint:
        return df[df['DRIVER_SESSION_STINT']==driversessionstint]['LAP_TIME_S']
    return df.pivot(index='LAPS_DRIVER_STINT',
                    columns='DRIVER_SESSION_STINT',
                    values='LAP_TIME_S').reset_index(drop=True)

sample_laptimes(laptimes,'56_1', 1, 2)
```

### Some Simple Linear Models

The `seaborn` statistical charts package can fit and plot models directly from a provided dataset.
For example, the `.lmplot()` ("linear model plot") function accepts two data columns from a dataset and then renders the fitted model, along with an indication of confidence limits.

```
import seaborn as sns

sns.lmplot(x='index', y='LAP_TIME_S', data=sample_laptimes(laptimes,'56_1', 1, 2).reset_index());
```

We can also increase the `order` of the fit line; the `ci` parameter toggles the confidence bounds display:

```
sns.lmplot(x='index', y='LAP_TIME_S', data=sample_laptimes(laptimes,'56_1', 1, 2).reset_index(),
           order = 2, ci=None);
```

### Obtaining Simple Linear Model Parameters

Being able to plot linear models directly over a dataset is graphically useful, but what if we want to get hold of the numerical model parameters?

```
#!pip3 install --upgrade statsmodels

#Simple model
import statsmodels.api as sm

Y = sample_laptimes(laptimes,'56_1', 1, 2).reset_index(drop=True)
X = Y.index.values
X = sm.add_constant(X)

model = sm.OLS(Y, X).fit()
predictions = model.predict(X)

print_model = model.summary()
print(print_model)

p = model.params
ax = pd.DataFrame(Y).reset_index().plot(kind='scatter', x='index', y='LAP_TIME_S')
ax.plot(X, p.const + p.x1 * X);

import matplotlib.pyplot as plt

# scatter-plot data
fig, ax = plt.subplots()
fig = sm.graphics.plot_fit(model, 0, ax=ax)
```

### Piecewise Linear Models

Sometimes we may be able to fit a dataset quite accurately using a simple first order linear model or second order model, but in other cases a more accurate fit may come from combining several first order linear models over different parts of the data, a so-called *piecewise linear model*.

There are a couple of Python packages out there that provide support for this, including [`DataDog/piecewise`](https://github.com/DataDog/piecewise) and the more actively maintained [*piecewise_linear_fit_py* (`pwlf`)](https://github.com/cjekel/piecewise_linear_fit_py) [[docs](https://jekel.me/piecewise_linear_fit_py/)].

Let's explore the `pwlf` model by pulling out a couple of data columns used in the above charts:

```
data = sample_laptimes(laptimes,'56_1', 1, 2).reset_index()
x = data['index']
y = data['LAP_TIME_S']
```

We can create a model using a specified number of linear segments, in this case, 2. Optionally, we could provide an indication of where we want the breaks to fall along the x-axis, although by default the `.fit()` method will try to find the "best fit" break point(s).

```
#!pip3 install --upgrade scipy
#!pip3 install pwlf
import pwlf
import numpy as np

pwlf_fit = pwlf.PiecewiseLinFit(x, y)

# fit the data using two line segments
pwlf_fit_2_segments = pwlf_fit.fit(2)
```

We can view the model by plotting points predicted using the fitted model for given x-values over the range in the original data:

```
import matplotlib.pyplot as plt

# From the docs, generate a prediction
xHat = np.linspace(min(x), max(x), num=10000)
yHat = pwlf_fit.predict(xHat)

# Plot the results
plt.figure()
plt.plot(x, y, 'o')
plt.plot(xHat, yHat, '-')
plt.show()
```

We can also review the model parameters:

```
pwlf_fit.slopes, pwlf_fit.intercepts, pwlf_fit.fit_breaks
```

## Simple Race Position Calculations

Some simple demonstrations of calculating track position data. Naively, we can calculate position based on lap number and accumulated time (there may be complications based on whether the lead car records a laptime from pit entry...).
```
#Find accumulated time in seconds
laptimes['ELAPSED_S'] = laptimes['ELAPSED'].apply(getTime)

#Check
laptimes['CHECK_ELAPSED_S'] = laptimes.groupby('NUMBER')['LAP_TIME_S'].cumsum()

laptimes[['ELAPSED','ELAPSED_S','CHECK_ELAPSED_S']].tail()
```

We can use the position to identify the leader on each lap, and from that a count of the lead lap number for each car:

```
#Find position based on accumulated laptime
laptimes = laptimes.sort_values('ELAPSED_S')
laptimes['POS'] = laptimes.groupby('LAP_NUMBER')['ELAPSED_S'].rank()

#Find leader naively
laptimes['leader'] = laptimes['POS']==1

#Find lead lap number
laptimes['LEAD_LAP_NUMBER'] = laptimes['leader'].cumsum()

laptimes[['LAP_NUMBER','LEAD_LAP_NUMBER']].tail()
```

## Simple Race Position Chart - Top 10 At End

Find the last lap number, then get the top 10 on that lap.

```
LAST_LAP = laptimes['LEAD_LAP_NUMBER'].max()
LAST_LAP

#Find top 10 at end
cols = ['NUMBER','TEAM', 'DRIVER_NAME', 'CLASS','LAP_NUMBER','ELAPSED']

top10 = laptimes[laptimes['LEAD_LAP_NUMBER']==LAST_LAP].sort_values(['LEAD_LAP_NUMBER', 'POS'])[cols].head(10).reset_index(drop=True)
top10.index += 1

top10
```
github_jupyter
#Enable inline plots %matplotlib inline # pandas is a python package for working with tabular datasets import pandas as pd # Add the parent dir to the import path # This lets us load files in from child directories of the parent directory # that this notebook is in. import sys sys.path.append("../py") #Import contents of the utils.py package in the ../py directory from utils import * #Download URL for timing data in CSV format url = 'http://fiawec.alkamelsystems.com/Results/08_2018-2019/07_SPA%20FRANCORCHAMPS/267_FIA%20WEC/201905041330_Race/Hour%206/23_Analysis_Race_Hour%206.CSV' # Create a pandas dataframe from data loaded in directly from a web address # dropping any empty columns... laptimes = pd.read_csv(url, sep=';').dropna(how='all', axis=1) # ...and then previewing the first few lines of the dataset laptimes.head() # The .split() operator returns a list # The [-1] indexer selects the last item in the list outfile_name = url.split('/')[-1] outfile_name from urllib.parse import unquote outfile_name = unquote(outfile_name) outfile_name # The .. path says "use the parent directory # So ../data says: use the data directory in the parent directory laptimes.to_csv('../data/{}'.format(outfile_name)) #Load data from local file #laptimes = pd.read_csv('23_Analysis_Race_Hour 6.csv', sep=';').dropna(how='all', axis=1) #laptimes.head() # Clean the column names of any leading / trailing whitespace # The [X for Y in Z] construction is know as a "list comprehension" # It creates a list of values ("[]") # by iterating through ("for Y in") another list of values ("Z") # and generates new list values, "X", from each original list value "Y" laptimes.columns = [c.strip() for c in laptimes.columns] #Tidy the data a little... car and driver number are not numbers laptimes[['NUMBER','DRIVER_NUMBER']] = laptimes[['NUMBER','DRIVER_NUMBER']].astype(str) laptimes.head() # Review the column headings laptimes.columns laptimes['CAR_DRIVER'] = laptimes['NUMBER'] + '_' + laptimes['DRIVER_NUMBER'] laptimes[['NUMBER','DRIVER_NUMBER','CAR_DRIVER']].head() laptimes['LAP_TIME_S'] = laptimes['LAP_TIME'].apply(getTime) laptimes[['LAP_TIME','LAP_TIME_S']].head() laptimes[laptimes['NUMBER']=='1'].head() laptimes[laptimes['NUMBER']=='1'].plot(x='LAP_NUMBER',y='LAP_TIME_S'); from ipywidgets import interact @interact(number=laptimes['NUMBER'].unique().tolist(),) def plotLapByNumber(number): laptimes[laptimes['NUMBER']==number].plot(x='LAP_NUMBER',y='LAP_TIME_S') laptimes[laptimes['NUMBER']=='1'].pivot(index='LAP_NUMBER', columns='DRIVER_NUMBER', values='LAP_TIME_S').head() @interact(number=laptimes['NUMBER'].unique().tolist(),) def plotLapByNumberDriver(number): # We can pivot long to wide on driver number, # then plot all cols against the lapnumber index laptimes[laptimes['NUMBER']==number].pivot(index='LAP_NUMBER', columns='DRIVER_NUMBER', values='LAP_TIME_S').plot() @interact(number=laptimes['NUMBER'].unique().tolist(), pitentrylap=True) def plotLapByNumberDriverWithPit(number, pitentrylap): # We can pivot long to wide on driver number, # then plot all cols against the lapnumber index #Grap the matplotli axes so we can overplot onto them ax = laptimes[laptimes['NUMBER']==number].pivot(index='LAP_NUMBER', columns='DRIVER_NUMBER', values='LAP_TIME_S').plot() # Also add in pit laps # Filter rows that identify both the car # and the laps on which the car crossed the finish line in the pit pitcondition = (laptimes['CROSSING_FINISH_LINE_IN_PIT']=='B') if pitentrylap \ else ~(laptimes['PIT_TIME'].isnull()) inpitlaps = 
laptimes[(laptimes['NUMBER']==number) & (pitcondition) ] # Plot a marker for each of those rows inpitlaps.plot.scatter(x='LAP_NUMBER',y='LAP_TIME_S', ax=ax) #Create a flag to identify when we enter the pit, aka an INLAP laptimes['INLAP'] = (laptimes['CROSSING_FINISH_LINE_IN_PIT'] == 'B') #Make no assumptions about table order - so sort by lap number laptimes = laptimes.sort_values(['NUMBER','LAP_NUMBER']) # Identify a new stint for each car by shifting the pitting / INLAP flag within car tables laptimes['OUTLAP'] = laptimes.groupby('NUMBER')['INLAP'].shift(fill_value=True) #Alternatively, we could define the outlap by laps where there is a non-null pit time #laptimes['OUTLAP'] = ~laptimes['PIT_TIME'].isnull() laptimes[['DRIVER_NUMBER', 'INLAP','OUTLAP']].head() #Also set overall lap = 1 to be a driver change laptimes['driverchange'] = (~laptimes['DRIVER_NUMBER'].eq(laptimes['DRIVER_NUMBER'].shift())) | (laptimes['LAP_NUMBER']==1) pd.DataFrame({'booleans':[False, True, False, False, True, False]}).cumsum() car_num = '1' driver_num = '1' laptimes.groupby(['NUMBER', 'DRIVER_NUMBER']).get_group( (car_num, driver_num) ).head() laptimes['DRIVER_SESSION'] = laptimes.groupby(['NUMBER', 'DRIVER_NUMBER'])['driverchange'].cumsum().astype(int) #Preview laptimes[['DRIVER_NUMBER', 'driverchange','DRIVER_SESSION','LAP_NUMBER']][42:48] #Create a counter for each pit stop - the pit flag is entering pit at end of stint # so a new stint applies on the lap after a pit #Find the car stint based on count of pit stops laptimes['CAR_STINT'] = laptimes.groupby('NUMBER')['OUTLAP'].cumsum().astype(int) laptimes[['CROSSING_FINISH_LINE_IN_PIT', 'INLAP', 'OUTLAP', 'CAR_STINT']].head() #Driver stint - a cumulative count for each driver of their stints laptimes['DRIVER_STINT'] = laptimes.groupby('CAR_DRIVER')['OUTLAP'].cumsum().astype(int) #Let's also derive another identifier - CAR_DRIVER_STINT laptimes['CAR_DRIVER_STINT'] = laptimes['CAR_DRIVER'] + '_' + laptimes['DRIVER_STINT'].astype(str) laptimes[['CAR_DRIVER', 'CROSSING_FINISH_LINE_IN_PIT', 'INLAP', 'CAR_STINT', 'DRIVER_STINT', 'CAR_DRIVER_STINT']].tail(20).head(10) #Driver session stint - a count for each driver of their stints within a particular driving session laptimes['DRIVER_SESSION_STINT'] = laptimes.groupby(['CAR_DRIVER','DRIVER_SESSION'])['OUTLAP'].cumsum().astype(int) laptimes[['CAR_DRIVER', 'CROSSING_FINISH_LINE_IN_PIT', 'INLAP','CAR_STINT', 'DRIVER_STINT', 'CAR_DRIVER_STINT', 'DRIVER_SESSION_STINT']].head() # lap count by car stint - that is, between each pit stop laptimes['LAPS_CAR_STINT'] = laptimes.groupby(['NUMBER','CAR_STINT']).cumcount()+1 #lap count by driver laptimes['LAPS_DRIVER'] = laptimes.groupby('CAR_DRIVER').cumcount()+1 #lap count by driver session laptimes['LAPS_DRIVER_SESSION'] = laptimes.groupby(['CAR_DRIVER','DRIVER_SESSION']).cumcount()+1 #lap count by driver stint laptimes['LAPS_DRIVER_STINT'] = laptimes.groupby(['CAR_DRIVER','DRIVER_STINT']).cumcount()+1 laptimes[['LAPS_CAR_STINT', 'LAPS_DRIVER', 'LAPS_DRIVER_SESSION', 'LAPS_DRIVER_STINT']].tail() #!pip3 install qgrid #!jupyter nbextension enable --py --sys-prefix qgrid import qgrid qgrid.show_grid(laptimes[['LAP_NUMBER', 'NUMBER', 'CAR_DRIVER', 'INLAP', 'CAR_STINT', 'CAR_DRIVER_STINT', 'DRIVER_STINT', 'DRIVER_SESSION', 'DRIVER_SESSION_STINT']]) import ipywidgets as widgets from ipywidgets import interact cars = widgets.Dropdown( options=laptimes['NUMBER'].unique(), # value='1', description='Car:', disabled=False ) drivers = widgets.Dropdown( 
options=laptimes[laptimes['NUMBER']==cars.value]['CAR_DRIVER'].unique(), description='Driver:', disabled=False) driversessions = widgets.Dropdown( options=laptimes[laptimes['CAR_DRIVER']==drivers.value]['DRIVER_SESSION'].unique(), description='Session:', disabled=False) driverstints = widgets.Dropdown( options=laptimes[laptimes['DRIVER_SESSION']==driversessions.value]['DRIVER_SESSION_STINT'].unique(), description='Stint:', disabled=False) def update_drivers(*args): driverlist = laptimes[laptimes['NUMBER']==cars.value]['CAR_DRIVER'].unique() drivers.options = driverlist def update_driver_session(*args): driversessionlist = laptimes[(laptimes['CAR_DRIVER']==drivers.value)]['DRIVER_SESSION'].unique() driversessions.options = driversessionlist def update_driver_stint(*args): driverstintlist = laptimes[(laptimes['CAR_DRIVER']==drivers.value) & (laptimes['DRIVER_SESSION']==driversessions.value)]['DRIVER_SESSION_STINT'].unique() driverstints.options = driverstintlist cars.observe(update_drivers, 'value') drivers.observe(update_driver_session,'value') driversessions.observe(update_driver_stint,'value') def laptime_table(car, driver, driversession, driverstint): #just basic for now... display(laptimes[(laptimes['CAR_DRIVER']==driver) & (laptimes['DRIVER_SESSION']==driversession) & (laptimes['DRIVER_SESSION_STINT']==driverstint) ][['CAR_DRIVER', 'DRIVER_SESSION', 'DRIVER_STINT', 'DRIVER_SESSION_STINT', 'LAP_NUMBER','LAP_TIME', 'LAP_TIME_S']]) interact(laptime_table, car=cars, driver=drivers, driversession=driversessions, driverstint=driverstints); def laptime_chart(car, driver, driversession, driverstint): tmp_df = laptimes[(laptimes['CAR_DRIVER']==driver) & (laptimes['DRIVER_SESSION']==driversession) & (laptimes['DRIVER_SESSION_STINT']==driverstint) ][['CAR_DRIVER', 'DRIVER_SESSION', 'DRIVER_STINT', 'DRIVER_SESSION_STINT', 'LAP_NUMBER','LAP_TIME', 'LAP_TIME_S']]['LAP_TIME_S'].reset_index(drop=True) if not tmp_df.empty: tmp_df.plot() interact(laptime_chart, car=cars, driver=drivers, driversession=driversessions, driverstint=driverstints); #Also add check boxes to suppress inlap and outlap? 
inlaps = widgets.Checkbox( value=True, description='Inlap', disabled=False ) outlaps = widgets.Checkbox( value=True, description='Outlap', disabled=False ) #Plot laptimes by stint for a specified driver def laptime_charts(car, driver, driversession, inlap, outlap): tmp_df = laptimes if not inlap: tmp_df = tmp_df[~tmp_df['INLAP']] if not outlap: tmp_df = tmp_df[~tmp_df['OUTLAP']] tmp_df = tmp_df[(tmp_df['CAR_DRIVER']==driver) & (tmp_df['DRIVER_SESSION']==driversession) ].pivot(index='LAPS_DRIVER_STINT', columns='DRIVER_SESSION_STINT', values='LAP_TIME_S').reset_index(drop=True) if not tmp_df.empty: tmp_df.plot() interact(laptime_charts, car=cars, driver=drivers, driversession=driversessions, inlap=inlaps, outlap=outlaps); def sample_laptimes(df, driver,driversession,driversessionstint=None, inlap=False, outlap=False): df = df[(df['CAR_DRIVER']==driver) & (df['DRIVER_SESSION']== driversession)] if not inlap: df = df[~df['INLAP']] if not outlap: df = df[~df['OUTLAP']] if driversessionstint: return df[df['DRIVER_SESSION_STINT']==driversessionstint]['LAP_TIME_S'] return df.pivot(index='LAPS_DRIVER_STINT', columns='DRIVER_SESSION_STINT', values='LAP_TIME_S').reset_index(drop=True) sample_laptimes(laptimes,'56_1', 1, 2) import seaborn as sns sns.lmplot(x='index', y='LAP_TIME_S', data=sample_laptimes(laptimes,'56_1', 1, 2).reset_index()); sns.lmplot(x='index', y='LAP_TIME_S', data=sample_laptimes(laptimes,'56_1', 1, 2).reset_index(), order = 2, ci=None); #!pip3 install --upgrade statsmodels #Simple model import statsmodels.api as sm Y = sample_laptimes(laptimes,'56_1', 1, 2).reset_index(drop=True) X = Y.index.values X = sm.add_constant(X) model = sm.OLS(Y, X).fit() predictions = model.predict(X) print_model = model.summary() print(print_model) p = model.params ax = pd.DataFrame(Y).reset_index().plot(kind='scatter', x='index', y='LAP_TIME_S') ax.plot(X, p.const + p.x1 * X); import matplotlib.pyplot as plt # scatter-plot data fig, ax = plt.subplots() fig = sm.graphics.plot_fit(model, 0, ax=ax) data = sample_laptimes(laptimes,'56_1', 1, 2).reset_index() x = data['index'] y = data['LAP_TIME_S'] #!pip3 install --upgrade scipy #!pip3 install pwlf import pwlf import numpy as np pwlf_fit = pwlf.PiecewiseLinFit(x, y) # fit the data using two line segments pwlf_fit_2_segments = pwlf_fit.fit(2) import matplotlib.pyplot as plt # From the docs, generate a prediction xHat = np.linspace(min(x), max(x), num=10000) yHat = pwlf_fit.predict(xHat) # Plot the results plt.figure() plt.plot(x, y, 'o') plt.plot(xHat, yHat, '-') plt.show() pwlf_fit.slopes, pwlf_fit.intercepts, pwlf_fit.fit_breaks #Find accumulated time in seconds laptimes['ELAPSED_S']=laptimes['ELAPSED'].apply(getTime) #Check laptimes['CHECK_ELAPSED_S'] = laptimes.groupby('NUMBER')['LAP_TIME_S'].cumsum() laptimes[['ELAPSED','ELAPSED_S','CHECK_ELAPSED_S']].tail() #Find position based on accumulated laptime laptimes = laptimes.sort_values('ELAPSED_S') laptimes['POS'] = laptimes.groupby('LAP_NUMBER')['ELAPSED_S'].rank() #Find leader naively laptimes['leader'] = laptimes['POS']==1 #Find lead lap number laptimes['LEAD_LAP_NUMBER'] = laptimes['leader'].cumsum() laptimes[['LAP_NUMBER','LEAD_LAP_NUMBER']].tail() LAST_LAP = laptimes['LEAD_LAP_NUMBER'].max() LAST_LAP #Find top 10 at end cols = ['NUMBER','TEAM', 'DRIVER_NAME', 'CLASS','LAP_NUMBER','ELAPSED'] top10 = laptimes[laptimes['LEAD_LAP_NUMBER']==LAST_LAP].sort_values(['LEAD_LAP_NUMBER', 'POS'])[cols].head(10).reset_index(drop=True) top10.index += 1 top10
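# Note: the analysis above relies on a getTime helper imported from the local ../py/utils.py,
# which is not included in this document. The sketch below is only an assumption of what such
# a helper presumably does -- convert a timing string like "1:23.456" or "2:01:23.456" into
# seconds -- and the real utils.py may differ.
import pandas as pd

def getTime(t):
    # Return missing values unchanged; otherwise fold "H:MM:SS.mmm"-style parts into seconds
    if pd.isnull(t):
        return t
    seconds = 0.0
    for part in str(t).split(':'):
        seconds = seconds * 60 + float(part)
    return seconds

print(getTime('1:23.456'))     # 83.456
print(getTime('2:01:23.456'))  # 7283.456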
# Name Deploying a trained model to Cloud Machine Learning Engine # Label Cloud Storage, Cloud ML Engine, Kubeflow, Pipeline # Summary A Kubeflow Pipeline component to deploy a trained model from a Cloud Storage location to Cloud ML Engine. # Details ## Intended use Use the component to deploy a trained model to Cloud ML Engine. The deployed model can serve online or batch predictions in a Kubeflow Pipeline. ## Runtime arguments | Argument | Description | Optional | Data type | Accepted values | Default | |--------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------|--------------|-----------------|---------| | model_uri | The URI of a Cloud Storage directory that contains a trained model file.<br/> Or <br/> An [Estimator export base directory](https://www.tensorflow.org/guide/saved_model#perform_the_export) that contains a list of subdirectories named by timestamp. The directory with the latest timestamp is used to load the trained model file. | No | GCSPath | | | | project_id | The ID of the Google Cloud Platform (GCP) project of the serving model. | No | GCPProjectID | | | | model_id | The name of the trained model. | Yes | String | | None | | version_id | The name of the version of the model. If it is not provided, the operation uses a random name. | Yes | String | | None | | runtime_version | The Cloud ML Engine runtime version to use for this deployment. If it is not provided, the default stable version, 1.0, is used. | Yes | String | | None | | python_version | The version of Python used in the prediction. If it is not provided, version 2.7 is used. You can use Python 3.5 if runtime_version is set to 1.4 or above. Python 2.7 works with all supported runtime versions. | Yes | String | | 2.7 | | model | The JSON payload of the new [model](https://cloud.google.com/ml-engine/reference/rest/v1/projects.models). | Yes | Dict | | None | | version | The new [version](https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions) of the trained model. | Yes | Dict | | None | | replace_existing_version | Indicates whether to replace the existing version in case of a conflict (if the same version number is found.) | Yes | Boolean | | FALSE | | set_default | Indicates whether to set the new version as the default version in the model. | Yes | Boolean | | FALSE | | wait_interval | The number of seconds to wait in case the operation has a long run time. | Yes | Integer | | 30 | ## Input data schema The component looks for a trained model in the location specified by the `model_uri` runtime argument. The accepted trained models are: * [Tensorflow SavedModel](https://cloud.google.com/ml-engine/docs/tensorflow/exporting-for-prediction) * [Scikit-learn & XGBoost model](https://cloud.google.com/ml-engine/docs/scikit/exporting-for-prediction) The accepted file formats are: * *.pb * *.pbtext * model.bst * model.joblib * model.pkl `model_uri` can also be an [Estimator export base directory, ](https://www.tensorflow.org/guide/saved_model#perform_the_export)which contains a list of subdirectories named by timestamp. The directory with the latest timestamp is used to load the trained model file. ## Output | Name | Description | Type | |:------- |:---- | :--- | | job_id | The ID of the created job. 
| String | | job_dir | The Cloud Storage path that contains the trained model output files. | GCSPath | ## Cautions & requirements To use the component, you must: * [Set up the cloud environment](https://cloud.google.com/ml-engine/docs/tensorflow/getting-started-training-prediction#setup). * The component can authenticate to GCP. Refer to [Authenticating Pipelines to GCP](https://www.kubeflow.org/docs/gke/authentication-pipelines/) for details. * Grant read access to the Cloud Storage bucket that contains the trained model to the Kubeflow user service account. ## Detailed description Use the component to: * Locate the trained model at the Cloud Storage location you specify. * Create a new model if a model provided by you doesn’t exist. * Delete the existing model version if `replace_existing_version` is enabled. * Create a new version of the model from the trained model. * Set the new version as the default version of the model if `set_default` is enabled. Follow these steps to use the component in a pipeline: 1. Install the Kubeflow Pipeline SDK: ``` %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade ``` 2. Load the component using KFP SDK ``` import kfp.components as comp mlengine_deploy_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/ff116b6f1a0f0cdaafb64fcd04214c169045e6fc/components/gcp/ml_engine/deploy/component.yaml') help(mlengine_deploy_op) ``` ### Sample Note: The following sample code works in IPython notebook or directly in Python code. In this sample, you deploy a pre-built trained model from `gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/` to Cloud ML Engine. The deployed model is `kfp_sample_model`. A new version is created every time the sample is run, and the latest version is set as the default version of the deployed model. 
#### Set sample parameters ``` # Required Parameters PROJECT_ID = '<Please put your project ID here>' # Optional Parameters EXPERIMENT_NAME = 'CLOUDML - Deploy' TRAINED_MODEL_PATH = 'gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/' ``` #### Example pipeline that uses the component ``` import kfp.dsl as dsl import json @dsl.pipeline( name='CloudML deploy pipeline', description='CloudML deploy pipeline' ) def pipeline( model_uri = 'gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/', project_id = PROJECT_ID, model_id = 'kfp_sample_model', version_id = '', runtime_version = '1.10', python_version = '', version = '', replace_existing_version = 'False', set_default = 'True', wait_interval = '30'): task = mlengine_deploy_op( model_uri=model_uri, project_id=project_id, model_id=model_id, version_id=version_id, runtime_version=runtime_version, python_version=python_version, version=version, replace_existing_version=replace_existing_version, set_default=set_default, wait_interval=wait_interval) ``` #### Compile the pipeline ``` pipeline_func = pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) ``` #### Submit the pipeline for execution ``` #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) ``` ## References * [Component python code](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/component_sdk/python/kfp_component/google/ml_engine/_deploy.py) * [Component docker file](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/Dockerfile) * [Sample notebook](https://github.com/kubeflow/pipelines/blob/master/components/gcp/ml_engine/deploy/sample.ipynb) * [Cloud Machine Learning Engine Model REST API](https://cloud.google.com/ml-engine/reference/rest/v1/projects.models) * [Cloud Machine Learning Engine Version REST API](https://cloud.google.com/ml-engine/reference/rest/v1/projects.versions) ## License By deploying or using this software you agree to comply with the [AI Hub Terms of Service](https://aihub.cloud.google.com/u/0/aihub-tos) and the [Google APIs Terms of Service](https://developers.google.com/terms/). To the extent of a direct conflict of terms, the AI Hub Terms of Service will control.
# RNN, GRU, LSTM reference : https://www.youtube.com/watch?v=Gl2WXLIMvKA&list=PLhhyoLH6IjfxeoooqP9rhU3HJIAVAJ3Vz&index=5 ``` import torch import torchvision import torch.nn as nn import torch.optim as optim import torch.nn.functional as F from torch.utils.data import DataLoader import torchvision.datasets as datasets import torchvision.transforms as transforms device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') ``` # BASE ``` input_size = 784 num_classes = 10 learning_rate = 0.001 batch_size = 64 num_epochs = 1 class NN(nn.Module): def __init__(self, input_size, num_classes): super(NN, self).__init__() self.fc1 = nn.Linear(input_size, 50) self.fc2 = nn.Linear(50, num_classes) def forward(self, x): x = F.relu(self.fc1(x)) x = self.fc2(x) return x train_dataset = datasets.MNIST(root = './content', train = True, transform = transforms.ToTensor(), download = False) test_dataset = datasets.MNIST(root = './content', train = False, transform = transforms.ToTensor(), download = False) train_loader = DataLoader(dataset = train_dataset, batch_size=batch_size, shuffle = True) test_loader = DataLoader(dataset = test_dataset, batch_size=batch_size, shuffle = True) model = NN(input_size=input_size, num_classes=num_classes).to(device) criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=learning_rate) for epoch in range(num_epochs): for batch_idx, (data, targets) in enumerate(train_loader): #Get data data = data.to(device=device) targets = targets.to(device=device) #Get reshape data data = data.reshape(data.shape[0], -1) scores = model(data) loss = criterion(scores, targets) optimizer.zero_grad() loss.backward() optimizer.step() def check_accuracy(loader, model): if loader.dataset.train: print('check train data') else: print('check test data') num_correct = 0 num_samples = 0 model.eval() with torch.no_grad(): for x, y in loader: x = x.to(device=device) y = y.to(device=device) x = x.reshape(x.shape[0],-1) scores = model(x) _, predictions = scores.max(1) num_correct += (predictions == y).sum() num_samples += predictions.size(0) print(f'Got {num_correct} / {num_samples} with accuracy \ {float(num_correct)/float(num_samples)*100:.2f}') model.train() check_accuracy(train_loader, model) check_accuracy(test_loader, model) ``` # BASE + Batchnorm ``` input_size = 784 num_classes = 10 learning_rate = 0.001 batch_size = 64 num_epochs = 1 class NN(nn.Module): def __init__(self, input_size, num_classes): super(NN, self).__init__() self.fc1 = nn.Linear(input_size, 50) self.fc2 = nn.Linear(50, num_classes) self.batch = nn.BatchNorm1d(50) def forward(self, x): x = F.relu(self.fc1(x)) x = self.batch(x) x = self.fc2(x) return x model = NN(input_size=input_size, num_classes=num_classes).to(device) criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=learning_rate) for epoch in range(num_epochs): for batch_idx, (data, targets) in enumerate(train_loader): #Get data data = data.to(device=device) targets = targets.to(device=device) #Get reshape data data = data.reshape(data.shape[0], -1) scores = model(data) loss = criterion(scores, targets) optimizer.zero_grad() loss.backward() optimizer.step() check_accuracy(train_loader, model) check_accuracy(test_loader, model) ``` # RNN ``` # Make RNN input_size = 28 sequence_length = 28 num_layers = 2 hidden_size = 256 num_classes = 10 learning_rate = 0.001 batch_size = 64 num_epochs = 2 class RNN(nn.Module): def __init__(self, input_size, hidden_size, num_layers, num_classes): super(RNN, self).__init__() 
self.hidden_size = hidden_size self.num_layers = num_layers self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True) self.fc = nn.Linear(hidden_size*sequence_length, num_classes) def forward(self, x): h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device) #Forward out, _ = self.rnn(x, h0) out = out.reshape(out.shape[0], -1) out = self.fc(out) return out model = RNN(input_size, hidden_size, num_layers, num_classes).to(device) criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=learning_rate) for epoch in range(num_epochs): for batch_idx, (data, targets) in enumerate(train_loader): #Get data data = data.to(device=device).squeeze(1) targets = targets.to(device=device) scores = model(data) loss = criterion(scores, targets) optimizer.zero_grad() loss.backward() optimizer.step() def check_accuracy(loader, model): if loader.dataset.train: print('check train data') else: print('check test data') num_correct = 0 num_samples = 0 model.eval() with torch.no_grad(): for x, y in loader: x = x.to(device=device).squeeze(1) y = y.to(device=device) scores = model(x) _, predictions = scores.max(1) num_correct += (predictions == y).sum() num_samples += predictions.size(0) print(f'Got {num_correct} / {num_samples} with accuracy \ {float(num_correct)/float(num_samples)*100:.2f}') model.train() check_accuracy(train_loader, model) check_accuracy(test_loader, model) ``` # GRU ``` # Make RNN input_size = 28 sequence_length = 28 num_layers = 2 hidden_size = 256 num_classes = 10 learning_rate = 0.001 batch_size = 64 num_epochs = 2 class GRU(nn.Module): def __init__(self, input_size, hidden_size, num_layers, num_classes): super(GRU, self).__init__() self.hidden_size = hidden_size self.num_layers = num_layers self.gru = nn.GRU(input_size, hidden_size, num_layers, batch_first=True) self.fc = nn.Linear(hidden_size*sequence_length, num_classes) def forward(self, x): h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device) #Forward out, _ = self.gru(x, h0) out = out.reshape(out.shape[0], -1) out = self.fc(out) return out model = GRU(input_size, hidden_size, num_layers, num_classes).to(device) criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=learning_rate) for epoch in range(num_epochs): for batch_idx, (data, targets) in enumerate(train_loader): #Get data data = data.to(device=device).squeeze(1) targets = targets.to(device=device) scores = model(data) loss = criterion(scores, targets) optimizer.zero_grad() loss.backward() optimizer.step() def check_accuracy(loader, model): if loader.dataset.train: print('check train data') else: print('check test data') num_correct = 0 num_samples = 0 model.eval() with torch.no_grad(): for x, y in loader: x = x.to(device=device).squeeze(1) y = y.to(device=device) scores = model(x) _, predictions = scores.max(1) num_correct += (predictions == y).sum() num_samples += predictions.size(0) print(f'Got {num_correct} / {num_samples} with accuracy \ {float(num_correct)/float(num_samples)*100:.2f}') model.train() check_accuracy(train_loader, model) check_accuracy(test_loader, model) ``` ### LSTM ``` # Make RNN input_size = 28 sequence_length = 28 num_layers = 2 hidden_size = 256 num_classes = 10 learning_rate = 0.001 batch_size = 64 num_epochs = 2 class LSTM(nn.Module): def __init__(self, input_size, hidden_size, num_layers, num_classes): super(LSTM, self).__init__() self.hidden_size = hidden_size self.num_layers = num_layers self.lstm = nn.LSTM(input_size, hidden_size, num_layers, 
batch_first=True) self.fc = nn.Linear(hidden_size*sequence_length, num_classes) def forward(self, x): h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device) c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device) #Forward out, _ = self.lstm(x, (h0, c0)) out = out.reshape(out.shape[0], -1) out = self.fc(out) return out model = LSTM(input_size, hidden_size, num_layers, num_classes).to(device) criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(model.parameters(), lr=learning_rate) for epoch in range(num_epochs): for batch_idx, (data, targets) in enumerate(train_loader): #Get data data = data.to(device=device).squeeze(1) targets = targets.to(device=device) scores = model(data) loss = criterion(scores, targets) optimizer.zero_grad() loss.backward() optimizer.step() def check_accuracy(loader, model): if loader.dataset.train: print('check train data') else: print('check test data') num_correct = 0 num_samples = 0 model.eval() with torch.no_grad(): for x, y in loader: x = x.to(device=device).squeeze(1) y = y.to(device=device) scores = model(x) _, predictions = scores.max(1) num_correct += (predictions == y).sum() num_samples += predictions.size(0) print(f'Got {num_correct} / {num_samples} with accuracy \ {float(num_correct)/float(num_samples)*100:.2f}') model.train() check_accuracy(train_loader, model) check_accuracy(test_loader, model) ```
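The RNN, GRU and LSTM classifiers above all flatten the hidden states of every one of the 28 time steps into the final linear layer (`hidden_size*sequence_length` inputs). A common variation, not used in the notebook above, is to classify from only the last time step's hidden state; here is a minimal sketch of that variant for the GRU case, under the same hyperparameters.

```
import torch
import torch.nn as nn

# Variation on the models above (not from the original notebook): keep only the final
# time step's hidden state, so the classifier head needs hidden_size inputs instead of
# hidden_size * sequence_length.
class GRULastStep(nn.Module):
    def __init__(self, input_size=28, hidden_size=256, num_layers=2, num_classes=10):
        super(GRULastStep, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.gru = nn.GRU(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)  # no *sequence_length here

    def forward(self, x):
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size, device=x.device)
        out, _ = self.gru(x, h0)        # out: (batch, sequence_length, hidden_size)
        return self.fc(out[:, -1, :])   # classify from the last time step only

# The training loop and check_accuracy function above work unchanged with this model.
```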
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import scipy.optimize as opt
%matplotlib inline

data = pd.read_csv('C:\\Users\\Owner\\Napa\\results_model_data_8.csv')

def result_assign(win_margin):
    # This function converts the win_margin column into a binary win/loss result
    if win_margin > 0:
        return 1
    else:
        return 0

def sigmoid(z):
    # Computes the sigmoid function for logistic regression
    return 1 / (1 + np.exp(-z))

def cost(theta, X, y):
    # Computes the cost function for logistic regression
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    first = np.multiply(-y, np.log(sigmoid(X * theta.T)))
    second = np.multiply((1 - y), np.log(1 - sigmoid(X * theta.T)))
    return np.sum(first - second) / (len(X))

def gradient(theta, X, y):
    # Computes the gradient of the cost function for logistic regression
    theta = np.matrix(theta)
    X = np.matrix(X)
    y = np.matrix(y)
    parameters = int(theta.ravel().shape[1])
    grad = np.zeros(parameters)
    error = sigmoid(X * theta.T) - y
    for i in range(parameters):
        term = np.multiply(error, X[:, i])
        grad[i] = np.sum(term) / len(X)
    return grad

def predict(theta, X):
    # Uses the minimized theta parameter to generate predictions based on the model
    probability = sigmoid(X * theta.T)
    return [1 if x >= 0.5 else 0 for x in probability]

def get_accuracy(predictions, y):
    # Compares the model predictions to the real data and returns accuracy
    correct = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0 for (a, b) in zip(predictions, y)]
    return sum(map(float, correct)) / float(len(correct)) * 100

# Add a new binary column to the data, which has value 1 where the result is positive, and 0 if negative
data['Result'] = data.apply(lambda x: result_assign(x['Win Margin']), axis=1)

# Select only quantitative parameters to be used in the model
model_data = data[['Race Margin', 'Win % Margin', 'Skill Margin', 'Game Margin', 'AvgPPM Margin', 'Result']]
model_data.head()

# add a ones column - this makes the matrix multiplication work out easier
model_data.insert(0, 'Ones', 1)

# set X (training data) and y (target variable)
cols = model_data.shape[1]
X = model_data.iloc[:, 0:cols-1]
y = model_data.iloc[:, cols-1:cols]

# Split the data into training and validation sets with an 80/20 ratio
train_X, val_X, train_y, val_y = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state=0)

# convert to numpy arrays and initialize the parameter array theta
X_train = np.array(train_X.values)
y_train = np.array(train_y.values)
X_val = np.array(val_X.values)
y_val = np.array(val_y.values)
theta = np.zeros(cols-1)

# Use a TNC optimization algorithm to minimize the cost function
result = opt.fmin_tnc(func=cost, x0=theta, fprime=gradient, args=(X_train, y_train))
cost(result[0], X_train, y_train)

# Convert theta_min to a matrix and retrieve the training and validation accuracies
theta_min = np.matrix(result[0])
train_predictions = predict(theta_min, X_train)
val_predictions = predict(theta_min, X_val)
train_accuracy = get_accuracy(train_predictions, y_train)
val_accuracy = get_accuracy(val_predictions, y_val)

print('Train accuracy = {0}%'.format(train_accuracy))
print('Validation accuracy = {0}%'.format(val_accuracy))

theta_min
```
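As an optional sanity check on the hand-rolled cost/gradient/TNC fit above (not part of the original notebook), one could fit scikit-learn's `LogisticRegression` on the same split and compare accuracies. This sketch assumes the `X_train`, `y_train`, `X_val`, `y_val` arrays defined above, including the 'Ones' intercept column.

```
# Optional cross-check, assuming X_train, y_train, X_val, y_val from the cells above.
from sklearn.linear_model import LogisticRegression

# fit_intercept=False because the 'Ones' column already supplies the intercept term
clf = LogisticRegression(fit_intercept=False, max_iter=1000)
clf.fit(X_train, y_train.ravel())

print('sklearn train accuracy = {0:.1f}%'.format(100 * clf.score(X_train, y_train.ravel())))
print('sklearn validation accuracy = {0:.1f}%'.format(100 * clf.score(X_val, y_val.ravel())))
```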
``` %reset import numpy as np import pandas as pd import matplotlib.pyplot as plt from scipy import stats # These are some parameters to make figures nice (and big) %matplotlib inline %config InlineBackend.figure_format = 'retina' plt.rcParams['figure.figsize'] = 16,8 params = {'legend.fontsize': 'x-large', 'figure.figsize': (15, 5), 'axes.labelsize': 'x-large', 'axes.titlesize':'x-large', 'xtick.labelsize':'x-large', 'ytick.labelsize':'x-large'} plt.rcParams.update(params) ``` # Exercise 1: Unfair dice Consider a pair of unfair dice. The probabilities for the two dice are as follows: |Roll|Probability Dice 1|Probability Dice 2 |---|---|---| |1|1/8|1/10| |2|1/8|1/10| |3|1/8|1/10| |4|1/8|1/10| |5|1/8|3/10| |6|3/8|3/10| ## Question: Use the law of total probability. to compute the probability of rolling a total of 11. ### Answer We denote by $S$ the sum of the dice and by $D_1$ the value of the roll of dice 1 $$P(S=11)=\sum_{n=1}^{6}P(S=11|D_{1}=n)$$ $$P(S=11)=P(S=11|D_{1}=5)\cdot P(D_{1}=5)+P(S=11|D_{1}=6)\cdot P(D_{1}=6)$$ $$P(S=11)=P(D_{2}=6)\cdot P(D_{1}=5)+P(D_{2}=6)\cdot P(D_{1}=5)$$ $$P(S=11)=3/10\cdot1/8+3/10\cdot3/8=10/80=1/8$$ <hr style="border:2px solid black"> </hr> # Exercise 2: Covariance vs independence Consider two random variables, $X$ and $Y$. $X$ is uniformly distributed over the interval $\left[-1,1\right]$: $$X\sim U[-1,1],$$ while $Y$ is normally distributed (Gaussian), with a variance equal to $X^{2}$. We would denote this as: $$Y|X\sim\mathcal{N}\left(0,X^{2}\right),$$ to imply that $$P(Y=y|X=x)=p(y|x)=\left(2\pi x^2\right)^{-1/2}\exp\left[-\frac{1}{2}\left(\frac{y}{x}\right)^2\right]$$ The two random variables are obviously not independent. Indepencene requires $p(y|x)=p(y)$, which in turn would imply $p(y)=p(y|x_1)p(y|x_2)$ for $x_1\neq x_2$. ## Question 1 (Theory): Prove analyitically that $Cov(X,Y)=0$.<br> *Hint:* Use the relation $p(x,y)=p(y|x)p(x)$ to compute $E(XY)$. Alternatively, you can use the same relation to first prove $E(E(Y|X))$. ### Answer: $$Cov(X,Y)=E(XY)-E(X)E(Y)=E(XY)$$ $$=\int_{-1}^{1}\int_{-\infty}^{\infty}x\cdot y\cdot p(x,y)\cdot dx\cdot dy=\int_{-1}^{1}\int_{-\infty}^{\infty}y\cdot x\cdot p(y|x)p(x)\cdot dx\cdot dy$$ $$=\int_{-1}^{1}\left[\int_{-\infty}^{\infty}y\cdot p(y|x)\cdot dy\right]x\cdot dx$$ $$=\int_{-1}^{1}\left[\int_{-\infty}^{\infty}y\cdot\frac{1}{\sqrt{2\pi x^{2}}}e^{-\frac{1}{2}\left(\frac{y}{x}\right)^{2}}\right]x\cdot dx$$ The inner integral is just the expected value of $y$ for a constant $x$, $E(Y|X)$ and it is zero, since $Y|X\sim\mathcal{N}\left(0,X^{2}\right)$. Thus, since the integrand is zero, the whole intergral is zero. ## Question 2 (Numerical): Show, numerically, that expected covariance is zero. 1. Draw $n$ samples $(x_j,y_j)$ of $(X,Y)$ and plot $y_j$ vs $x_j$ for $n=100$: 2. Compute the sample covariance $s_{n-1}=\frac{1}{n-1}\sum_{j=1}^{n}(y_j-\overline y)$ of $X,Y$ for $n=100$. Repeat the experiment a large number of times (e.g. $M=10,000$) and plot the sampling distribution of $s_{100-1}$. What is the mean of the sampling distribution. 3. Now increase the sample size up to $n=100,000$ and plot the value of the sample covariance as a function of $n$. 
By the Law of Large Numbers you should see it asymptote to zero ### Answer ``` #2.1 Ndraws=100 X=stats.uniform.rvs(loc=-1,scale=2,size=Ndraws); Y=np.zeros([Ndraws]) for i in range(Ndraws): Y[i]=stats.norm.rvs(loc=0,scale=np.abs(X[i]),size=1) plt.plot(X,Y,'.') scov=1/(Ndraws-1)*np.sum((X-np.mean(X))*(Y-np.mean(Y))) print(scov) #2.2 M=1000 Ndraws=100 scov=np.zeros(M); for j in range(M): X=stats.uniform.rvs(loc=-1,scale=2,size=Ndraws); Y=np.zeros([Ndraws]); for i in range(Ndraws): Y[i]=stats.norm.rvs(loc=0,scale=np.abs(X[i]),size=1); scov[j]=1/(Ndraws-1)*np.sum((X-np.mean(X))*(Y-np.mean(Y))); plt.hist(scov,rwidth=0.98); print(np.mean(scov)) #2.3 Ndraws=100000 scov=np.zeros(Ndraws) X=stats.uniform.rvs(loc=-1,scale=2,size=Ndraws) Y=np.zeros([Ndraws]) for i in range(Ndraws): Y[i]=stats.norm.rvs(loc=0,scale=np.abs(X[i]),size=1) if i>1: scov[i]=1/(i-1)*np.sum((X[0:i]-np.mean(X[0:i]))*(Y[0:i]-np.mean(Y[0:i]))) plt.plot(scov) plt.grid() ``` <hr style="border:2px solid black"> </hr> # Exercise 3: Central Limit Theorem The central limit theorem says that the distribution of the sample mean of **any** random variable approaches a normal distribution. **Theorem** Let $ X_1, \cdots , X_n $ be $n$ independent and identically distributed (i.i.d) random variables with expectation $\mu$ and variance $\sigma^2$. The distribution of the sample mean $\overline X_n=\frac{1}{n}\sum_{i=1}^n X_i$ approaches the distribution of a gaussian $$\overline X_n \sim \mathcal N (\mu,\sigma^2/n),$$ for large $n$. In this exercise, you will convince yourself of this theorem numerically. Here is a recipe for how to do it: - Pick your probability distribution. The CLT even works for discrete random variables! - Generate a random $n \times m$ matrix ($n$ rows, $m$ columns) of realizations from that distribution. - For each column, find the sample mean $\overline X_n$ of the $n$ samples, by taking the mean along the first (0-th) dimension. You now have $m$ independent realizations of the sample mean $\overline X_n$. - You can think of each column as an experiment where you take $n$ samples and average over them. We want to know the distribution of the sample-mean. The $m$ columns represent $m$ experiments, and thus provide us with $m$ realizations of the sample mean random variable. From these we can approximate a distribution of the sample mean (via, e.g. a histogram). - On top of the histogram of the sample mean distribution, plot the pdf of a normal distribution with the same process mean and process variance as the sample mean of the distribution of $\overline X_n$. ## Question 1: Continuous random variables: Demonstrate, numerically, that the sample mean of a number of Gamma-distributed random variables is approximately normal. https://en.wikipedia.org/wiki/Gamma_distribution Plot the distribution of the sample mean for $n=[1,5,25,100]$,using $m=10,000$, and overlay it with a normal pdf. For best visualization,use values of $\alpha=1$ loc$=0$, scale=$1$ for the gamma distribution; 30 bins for the histogram; and set the x-limits of [3,6] for all four values of $n$. 
### Answer: ``` m=10000 n=[1,5,20,100] Nbins=30 fig,ax=plt.subplots(4,1,figsize=[8,8]) alpha=1; loc=0; scale=1; for j in range(4): x=stats.gamma.rvs(alpha,loc=loc,scale=scale,size=[n[j],m]) sample_mean=np.mean(x,axis=0); z=np.linspace(0,5,100); norm_pdf=stats.norm.pdf(z,loc=np.mean(sample_mean),scale=np.std(sample_mean)); ax[j].hist(sample_mean,Nbins,rwidth=1,density=True) ax[j].plot(z,norm_pdf); ax[j].set_xlim(left=0,right=4) ``` ## Question 2: Discrete random variables: Demonstrate, numerically, that the sample mean of a large number of random dice throws is approximately normal. Simulate the dice using a discrete uniform random variables <code>stats.randint.rvs</code>, taking values from 1 to 6 (remember Python is right exclusive). The sample mean $\overline X_n$ is thus equivalnt to the average value of the dice throw $n$ throws. Plot the normalized (density=True) histogram for $n=[1,2,25,200]$, using $m=100,000$, and overlay it with a normal pdf. For best visualization use 50 bins for the histogram, and set the x-limits of [1,6] for all four values of $n$. ### Answer ``` m=100000 n=[1,2,25,200] Nbins=50 fig,ax=plt.subplots(4,1,figsize=[16,8]) alpha=1; loc=0; scale=1; for j in range(4): x=stats.randint.rvs(1,7,size=[n[j],m]) sample_mean=np.mean(x,axis=0); z=np.linspace(0,7,1000); norm_pdf=stats.norm.pdf(z,loc=np.mean(sample_mean),scale=np.std(sample_mean)); ax[j].hist(sample_mean,Nbins,rwidth=1,density=True) ax[j].plot(z,norm_pdf); ax[j].set_xlim(left=1,right=6) ``` ## Question 3: Precip in Urbana Plot the histograms of precipitation in urbana on hourly, daily, monthly, and annual time scales. What do you observe? For convenience, I've downloaded 4-times daily hourly data from ERA5 for the gridcell representing Urbana. We'll use xarray since it makes it very easy to compute daily-, monthly-, and annual-total precipitation. The cell below computes hourly, daily, monthly, and annual values of precipitation. All you have to do is plot their histograms ``` import xarray as xr #convert from m/hr to inches/hr, taking into account we only sample 4hrs of the day ds=xr.open_dataset('/data/keeling/a/cristi/SIMLES/data/ERA5precip_urbana_1950-2021.nc'); unit_conv=1000/24.5*6 pr_hr =ds.tp*unit_conv; pr_day =pr_hr.resample(time='1D').sum('time') pr_mon=pr_hr.resample(time='1M').sum('time') pr_yr =pr_hr.resample(time='1Y').sum('time') Nbins=15; ``` ### Answer ``` Nbins=15 fig,ax=plt.subplots(2,2,figsize=[12,12]); ax[0,0].hist(pr_hr,Nbins,rwidth=0.9); ax[0,1].hist(pr_day,Nbins,rwidth=0.9); ax[1,0].hist(pr_mon,Nbins,rwidth=0.9);4 ax[1,1].hist(pr_yr,Nbins,rwidth=0.9); ``` <hr style="border:2px solid black"> </hr> # Exercise 4: Houston precipitation return times via MLE In the wake of Hurricane Harvey, many have described the assocaited flooding as a "500-year event". How can this be, given that in most places there are only a few decades of data available? In this exercise we apply a simple (and most likely wrong) methodology to estimate _return periods_, and comment on the wisdom of that concept. Let's load and get to know the data. We are looking at daily precip data (in cm) at Beaumont Research Center and Port Arthur, two of the weather stations in the Houston area that reported very high daily precip totals. 
The data comes from NOAA GHCN:<br> https://www.ncdc.noaa.gov/cdo-web/datasets/GHCND/stations/GHCND:USC00410613/detail<br> https://www.ncdc.noaa.gov/cdo-web/datasets/GHCND/stations/GHCND:USW00012917/detail ``` # read data and take a cursory look #df=pd.read_csv('/data/keeling/a/cristi/SIMLES/data/Beaumont_precip.csv') df=pd.read_csv('/data/keeling/a/cristi/SIMLES/data/PortArthur_precip.csv') df.head() # plot raw precipitation precip_raw=df['PRCP'].values precip_raw=precip_raw[np.isnan(precip_raw)==False] # take out nans fig,ax=plt.subplots(1,1) ax.plot(precip_raw) ax.set_xlabel('day since beginning of record') ax.set_ylabel('Daily Precip (cm)') # Plot the histogram of the data. # For distributions such as a gamma distribution it makes sense to use a logarithmic axis. #define bin edges and bin widths. # we'll use the maximum value in the data to define the upper limit bin_edge_low=0 bin_edge_high=np.round(np.max(precip_raw)+1); bin_width=0.25 bin_edges=np.arange(bin_edge_low,bin_edge_high,bin_width) fig,ax=plt.subplots(1,2) ax[0].hist(precip_raw,bin_edges,rwidth=0.9); ax[0].set_xlabel('daily precip (cm)') ax[0].set_ylabel('count (number of days)') ax[0].grid() ax[1].hist(precip_raw,bin_edges,rwidth=0.9) ax[1].set_yscale('log') ax[1].grid() ax[1].set_xlabel('daily precip (cm)') ax[1].set_ylabel('count (number of days)') # the jump in the first bin indicates a probability mass at 0 ( a large number of days do not see any precipitation). # Let's only look at days when it rains. While we're at it, let's clean NaNs as well. precip=precip_raw[precip_raw>0.01] # Plot the histogram of the data fig,ax=plt.subplots(1,2) ax[0].hist(precip,bin_edges,rwidth=0.9); ax[0].set_xlabel('daily precip (cm)') ax[0].set_ylabel('count (number of days)') ax[0].grid() ax[0].set_xlabel('daily precip (cm)') ax[0].set_ylabel('count (number of days)') ax[1].hist(precip,bin_edges,rwidth=0.9) ax[1].set_yscale('log') ax[1].grid() ax[1].set_xlabel('daily precip (cm)') ax[1].set_ylabel('count (number of days)') ``` ## Question 1: Fit an gamma distribution to the data, using the <code>stats.gamma.fit</code> method to obtain maximum likelihood estimates for the parameters. Show the fit by overlaying the pdf of the gamma distribution with mle parameters on top of the histogram of daily precipitation at Beaumont Research Center. Hints: - you'll need to show a *density* estimate of the histogram, unlike the count i.e. ensure <code>density=True</code>. - The method will output the thre parameters of the gamma random variable: <code>a,loc,scale</code> (see documentation <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gamma.html"> here</a>). So you'll need to call it as <code>alpha_mle,loc_mle,scale_mle=stats.gama.fit( .... )</code> ### Answer: ``` alpha_mle,loc_mle,scale_mle=stats.gamma.fit(precip) x_plot=np.linspace(0,np.max(precip),200) gamma_pdf=stats.gamma.pdf(x_plot,alpha_mle,loc_mle,scale_mle) # Plot the histogram of the data fig,ax=plt.subplots(1,2) ax[0].hist(precip,bin_edges,rwidth=0.9,density=True); ax[0].set_xlabel('daily precip (cm)') ax[0].set_ylabel('count (number of days)') ax[1].hist(precip,bin_edges,rwidth=0.9,density=True) ax[1].set_yscale('log') ax[0].plot(x_plot,gamma_pdf) ax[1].plot(x_plot,gamma_pdf) np.max(precip) ``` ## Question 2: Compute the return time of the rainiest day recorded at Beaumont Research Center (in years). What does this mean? The rainiest day at Beaumont brought $x$ cm. The return time represents how often we would expect to get $x$ cm or more of rain at Beaumont. 
To compute the return time we need to compute the probability of daily rain >$x$ cm. The inverse of this probability is the frequency of daily rain >$x$ cm. For example, if the probability of daily rain > 3 cm =1/30, it means we would expect that it rains 3 cm or more once about every 30 day, and we would say 3 cm is a 10 day event. For the largest precip event the probability will be significantly smaller, and thus the return time significantly larger *Hint*: Remember that the probability of daily rain being *less* than $x$ cm is given by the CDF: $$F(x)=P(\text{daily rain}<x\text{ cm})$$. *Hint*: The answer should only take a very small number of lines of code ### Answer ``` gamma_F=stats.gamma.cdf(x_plot,alpha_mle,loc_mle,scale_mle) prob=1-stats.gamma.cdf(np.max(precip),alpha_mle,loc_mle,scale_mle) 1/prob/365 ``` ## Question 3: Repeat the analysis for the Port Arthur data. If you fit a Gamma ditribution and compute the return time of the largest daily rain event, what is the return time? Does that seem reasonable? Why do you think the statistical model fails here? Think of the type of precipitation events that make up the precipitation data at Port Arthur { "tags": [ "margin", ] } ### Answer ``` # read data and take a cursory look df=pd.read_csv('/data/keeling/a/cristi/SIMLES/data/PortArthur_precip.csv') df.head() # plot raw precipitation precip_raw=df['PRCP'].values precip_raw=precip_raw[np.isnan(precip_raw)==False] # take out nans precip=precip_raw[precip_raw>0.01] alpha_mle,loc_mle,scale_mle=stats.gamma.fit(precip) x_plot=np.linspace(0,np.max(precip),200) gamma_pdf=stats.gamma.pdf(x_plot,alpha_mle,loc_mle,scale_mle) # Plot the histogram of the data fig,ax=plt.subplots(1,2) ax[0].hist(precip,bin_edges,rwidth=0.9,density=True); ax[0].set_xlabel('daily precip (cm)') ax[0].set_ylabel('count (number of days)') ax[1].hist(precip,bin_edges,rwidth=0.9,density=True) ax[1].set_yscale('log') ax[0].plot(x_plot,gamma_pdf) ax[1].plot(x_plot,gamma_pdf) gamma_F=stats.gamma.cdf(x_plot,alpha_mle,loc_mle,scale_mle) prob=1-stats.gamma.cdf(np.max(precip),alpha_mle,loc_mle,scale_mle) 1/prob/365 ```
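As a rough cross-check on the gamma-based return times above (not part of the original exercise), one can compute the purely empirical exceedance frequency of the record daily total directly from the sample. The sketch assumes the `precip_raw` (all days) and `precip` (rainy days only) arrays for whichever station is currently loaded.

```
import numpy as np

# Empirical exceedance frequency of the record daily total, computed from the data
# itself rather than the fitted gamma distribution.
x_max = np.max(precip)
p_exceed = np.mean(precip_raw >= x_max)   # fraction of recorded days at or above the record
print('empirical return period ~ {:.1f} years'.format(1 / p_exceed / 365))
```

By construction this empirical estimate can never exceed the length of the record itself, which is exactly why a fitted distribution is needed to talk about events rarer than anything observed, and also why such extrapolations should be treated with caution.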
# Class Notes

## Helping with Assignment 1

```
import app

person_instance = app.Person('Metin', 'Senturk', 1989)
person_instance.birth_year
person_instance.first_name
person_instance.last_name
app.find_age(person_instance.birth_year)
```

### Decorator Syntax

```
def print_my_name(name):
    print('Your name is', name)

print_my_name('Metin')
```

``` py
# this is your decorator function definition
def print_before_after(input_is_function):
    # todo: something
    pass
```

With sugar syntax

``` py
# this is the definition
@print_before_after
def print_my_name(name):
    print('Your name is', name)

# this is you calling the function
print_my_name('Metin')
```

Without sugar syntax

```
# the equivalent without the @ syntax: wrap the function first, then call it
print_my_name = print_before_after(print_my_name)
print_my_name('Metin')
```

### Documentation and Jupyter Helpers

In Jupyter, if you press shift-tab, that will open the signature of the function/module/package/etc.

```
# '?' will open a signature of what that function does.
app.find_age?

# '??' will open a signature of what that function does, and the code.
app.find_age??
```

### Task 6, 7 - map and reduce

#### Map

```
# lower is a function
'Metin Senturk'.lower()
type('Metin Senturk')
str
letters = ['A', 'B', 'C']
# map(func, *iterables) --> map object
list(map(str.lower, letters))
```

The `map` function goes over each item of the list and applies the given function to it. Above, `map` applied the `lower` function to each item in the `letters` list.

#### Reduce

```
from functools import reduce

# reduce(function, sequence[, initial]) -> value
reduce??
numbers = [1,2,3,4,5,6,7,8]
numbers
# range(stop) -> range object
# range(start, stop[, step]) -> range object
numbers = range(2, 10)
list(numbers)
sum(numbers)
def add(item1, item2):
    return item1 + item2
reduce(add, numbers)
```

```
# running pair => result from each reduce step (for numbers = range(2, 10))
[2, 3] => 5
[5, 4] => 9
[9, 5] => 14
..
```

```
list(numbers[:3])
# (2 ** 3) ** 4
reduce(lambda x, y: x ** y, numbers[:3])
```

## Class Content from Here

Functions like %%time, %%timeit, etc. [Jupyter magic functions](https://ipython.readthedocs.io/en/stable/interactive/magics.html)

```
%%time
# CPU bound
result = sum(range(1000000000))
result

%%time
# CPU bound
sum(range(10000000000))

sum(map(lambda x: x * result, [1,2,3,4,5]))

# standard library vs site-packages
import multiprocessing
import pandas as pd

# when you do pip install, this is where the package goes:
# pandas is under site-packages, while multiprocessing ships with the standard library
pd.__file__
multiprocessing.__file__

import subprocess
```
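The decorator above is left as a todo in the notes; the following is a minimal sketch of how `print_before_after` could be completed so that the sugar-syntax and manual versions behave the same (the printed messages are made up):

```
def print_before_after(input_is_function):
    # wrap the function so something is printed before and after each call
    def wrapper(*args, **kwargs):
        print('before the call')
        result = input_is_function(*args, **kwargs)
        print('after the call')
        return result
    return wrapper

@print_before_after
def print_my_name(name):
    print('Your name is', name)

print_my_name('Metin')  # before the call / Your name is Metin / after the call
```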
github_jupyter
import app person_instance = app.Person('Metin', 'Senturk', 1989) person_instance.birth_year person_instance.first_name person_instance.last_name app.find_age(person_instance.birth_year) def print_my_name(name): print('Your name is', name) print_my_name('Metin') Sugar syntax version Without sugar syntax ### Documentation and Jupyter Helpers In Jupyter, if you do shift-tab, that will open the signature of the function/ module/ package/ etc. ### Task 6, 7 - map and reduce #### Map `map` function goes into each one of the list item, and applies the given function to the item. Above, map function applied `lower` function to each item in `letters` list. #### Reduce ## Class Content from Here Functions like %%time, %%timeit, etc. [Jupyter magic functions](https://ipython.readthedocs.io/en/stable/interactive/magics.html)
0.422743
0.704351
``` import numpy as np a = np.arange(15).reshape(3,5) print(a) a.shape a.size a.dtype.itemsize a.dtype ``` a.itemsize ###### ndarray.itemsize the size in bytes of each element of the array. For example, an array of elements of type float64 has itemsize 8 (=64/8), while one of type complex32 has itemsize 4 (=32/8). It is equivalent to ndarray.dtype.itemsize. ``` a.ndim a.data ``` ###### ndarray.data the buffer containing the actual elements of the array. Normally, we won’t need to use this attribute because we will access the elements in an array using indexing facilities. ``` type(a) ``` #### array creation ``` import numpy as np s = np.array([2,3,4,3,22,34,56]) print(s) type(s) st = np.array((1,2,3,5,66,75,44)) st type(st) st.dtype ss = np.arange(20, dtype=np.float32) ss ss.dtype #by default the numpy float is float64 ss.reshape(2,2,5) ss.dtype d = np.array([[3.4,44.5],[55.66,7.7]], dtype = complex) d d.imag d.real type(d) d.dtype # by default the numpy complex is complex 128 d.shape d.itemsize d.data d d.T d.shape d.T.shape t = np.array(((2,3,4,5),(44,56,77,88)), dtype = complex) t tt = np.array(((2,3,4,5),(44,56,77,88)), dtype = float) tt tt.dtype import numpy as np np.zeros((3,4), dtype = int) np.eye(5,5,dtype=int) np.ones((3,3),dtype=float) np.empty((3,3), dtype = int) np.arange(20) f= np.arange(30,40,.2, dtype=float).reshape((10,5)) f.size f np.linspace(2,10,25, dtype= float).reshape((5,5)) import numpy as np import matplotlib.pyplot as plt a = np.linspace(0,20,200) b = np.sin(a) bb = np.exp(a) plt.title("sine and exponential plot") plt.plot(b,bb) np.random.rand(3,3) np.random.random((3,4)) np.random.randn(5,3) np.random.randint(44,54) np.random.randint((44,54)) np.random.randint(44) f = np.random.normal() f np.random.normal(22) np.random.normal((22,30)) np.random.normal(22,30) type(f) np.arange(2999) import sys np.set_printoptions(threshold=sys.maxsize) ``` #### Basic operations ``` import numpy as np a = np.arange(4) b= np.array([33,44,55,66]) c= b-a c b**3 10*np.sin(b) a<33 a = np.array( [[1,1], [0,1]] ) b = np.array( [[2,0], [3,4]] ) a*b a**b a.dot(b) a@b a.dtype.name ddd = np.random.rand(3,3) ddd ddd.dtype ddd.dtype.name ddd.sum() ddd.min() ddd.max() ddd.mean() ddd.std() ddd.var() cs = ddd.cumsum() cs plt.plot(cs,ddd.ravel(),c="r") plt.title('Cumsum and original flatten data plot') plt.xlabel("Cumulative sum") plt.ylabel("Flattened array") ml = np.array([[[2,22,33,43,3],[44,54,5,6,77]], [[4,33,22,11,123],[6,77,56,4,37]] ]) ml ml.ndim ml.shape type(ml) ml.dtype ml.sum(axis=0) ml.sum(axis=2) ml.sum(axis=1) ml.min(axis=2) ml.min(axis=1) ml.max(axis=2) ml.max(axis=1) ml.cumsum(axis=2) ml.cumsum(axis=1) ml.mean(axis=2) ml.mean(axis=1) a= np.arange(3) a np.exp(a) np.sqrt(a) np.add(a,np.exp(a)) np.subtract(a,np.sqrt(a)) np.multiply(a,np.sum(a)) np.divide(a,np.exp(a)) w = np.arange(10)*2 w w[:5] w[::2] w[:7:2]=-100 w w w[::-1] for i in w: print(i*(2/3), end ="\n") def f(x,y): return 10*x+y b= np.fromfunction(f,(5,5),dtype=np.int) b b[2,4] b[:3] b[3:4] b[:5,2] b[:,2] b[-1] b[3] b for i in b.flat: print(i) ``` column stack == hstack (only for 2D arrays) \n On the other hand, the function row_stack is equivalent to vstack for any input arrays. 
In fact, row_stack is an alias for vstack:

```
np.column_stack is np.hstack
np.row_stack is np.vstack

import numpy as np
import matplotlib.pyplot as plt

# Build a vector of 2000 normal deviates with variance 0.5^2 and mean 2
mu, sigma = 2, 0.5
v = np.random.normal(mu,sigma,2000)
#print(v)

# Plot a histogram with 50 bins (density=0, so raw counts rather than a normalized density)
plt.hist(v, bins=50, density=0)       # matplotlib version (plot)
plt.show()

b = np.random.random((2,3))
a *= 3
print(b)
a
b += a
b
a
b
a += b  # b is not automatically converted to integer type, so this line raises an error
d=[]
for i in b:
    for j in i:
        d.append(j)
d
dd=[]
for i in d:
    dd.append(np.floor(i))
dd
a+=dd  # np.floor still returns floats (and the lengths differ), so this also fails
a
p = np.exp(a*1j)
p
p.dtype.name
```
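As a quick illustration of the note above, `column_stack` and `hstack` (and likewise `row_stack` and `vstack`) only differ for 1-D inputs; a small sketch with made-up arrays:

```
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# for 1-D inputs, column_stack treats each array as a column...
print(np.column_stack((a, b)))   # shape (3, 2)
# ...while hstack simply concatenates them
print(np.hstack((a, b)))         # shape (6,)

# row_stack and vstack give the same result for any input
print(np.row_stack((a, b)))      # shape (2, 3), identical to np.vstack((a, b))
```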
github_jupyter
import numpy as np a = np.arange(15).reshape(3,5) print(a) a.shape a.size a.dtype.itemsize a.dtype a.ndim a.data type(a) import numpy as np s = np.array([2,3,4,3,22,34,56]) print(s) type(s) st = np.array((1,2,3,5,66,75,44)) st type(st) st.dtype ss = np.arange(20, dtype=np.float32) ss ss.dtype #by default the numpy float is float64 ss.reshape(2,2,5) ss.dtype d = np.array([[3.4,44.5],[55.66,7.7]], dtype = complex) d d.imag d.real type(d) d.dtype # by default the numpy complex is complex 128 d.shape d.itemsize d.data d d.T d.shape d.T.shape t = np.array(((2,3,4,5),(44,56,77,88)), dtype = complex) t tt = np.array(((2,3,4,5),(44,56,77,88)), dtype = float) tt tt.dtype import numpy as np np.zeros((3,4), dtype = int) np.eye(5,5,dtype=int) np.ones((3,3),dtype=float) np.empty((3,3), dtype = int) np.arange(20) f= np.arange(30,40,.2, dtype=float).reshape((10,5)) f.size f np.linspace(2,10,25, dtype= float).reshape((5,5)) import numpy as np import matplotlib.pyplot as plt a = np.linspace(0,20,200) b = np.sin(a) bb = np.exp(a) plt.title("sine and exponential plot") plt.plot(b,bb) np.random.rand(3,3) np.random.random((3,4)) np.random.randn(5,3) np.random.randint(44,54) np.random.randint((44,54)) np.random.randint(44) f = np.random.normal() f np.random.normal(22) np.random.normal((22,30)) np.random.normal(22,30) type(f) np.arange(2999) import sys np.set_printoptions(threshold=sys.maxsize) import numpy as np a = np.arange(4) b= np.array([33,44,55,66]) c= b-a c b**3 10*np.sin(b) a<33 a = np.array( [[1,1], [0,1]] ) b = np.array( [[2,0], [3,4]] ) a*b a**b a.dot(b) a@b a.dtype.name ddd = np.random.rand(3,3) ddd ddd.dtype ddd.dtype.name ddd.sum() ddd.min() ddd.max() ddd.mean() ddd.std() ddd.var() cs = ddd.cumsum() cs plt.plot(cs,ddd.ravel(),c="r") plt.title('Cumsum and original flatten data plot') plt.xlabel("Cumulative sum") plt.ylabel("Flattened array") ml = np.array([[[2,22,33,43,3],[44,54,5,6,77]], [[4,33,22,11,123],[6,77,56,4,37]] ]) ml ml.ndim ml.shape type(ml) ml.dtype ml.sum(axis=0) ml.sum(axis=2) ml.sum(axis=1) ml.min(axis=2) ml.min(axis=1) ml.max(axis=2) ml.max(axis=1) ml.cumsum(axis=2) ml.cumsum(axis=1) ml.mean(axis=2) ml.mean(axis=1) a= np.arange(3) a np.exp(a) np.sqrt(a) np.add(a,np.exp(a)) np.subtract(a,np.sqrt(a)) np.multiply(a,np.sum(a)) np.divide(a,np.exp(a)) w = np.arange(10)*2 w w[:5] w[::2] w[:7:2]=-100 w w w[::-1] for i in w: print(i*(2/3), end ="\n") def f(x,y): return 10*x+y b= np.fromfunction(f,(5,5),dtype=np.int) b b[2,4] b[:3] b[3:4] b[:5,2] b[:,2] b[-1] b[3] b for i in b.flat: print(i) np.column_stack is np.hstack np.row_stack is np.vstack import numpy as np import matplotlib.pyplot as plt # Build a vector of 10000 normal deviates with variance 0.5^2 and mean 2 mu, sigma = 2, 0.5 v = np.random.normal(mu,sigma,2000) #print(v) # Plot a normalized histogram with 50 bins plt.hist(v, bins=50, density=0) # matplotlib version (plot) plt.show() b = np.random.random((2,3)) a *= 3 print(b) a b += a b a b a += b # b is not automatically converted to integer type d=[] for i in b: for j in i: d.append(j) d dd=[] for i in d: dd.append(np.floor(i)) dd a+=dd a p = np.exp(a*1j) p p.dtype.name
0.256832
0.938124
<a href="https://colab.research.google.com/github/Norod/my-colab-experiments/blob/master/EvgenyKashin_Animal_conditional_generation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` %cd /content/ !mkdir pretrained !wget 'https://github.com/EvgenyKashin/stylegan2/releases/download/v1.0.0/network-snapshot-005532.pkl' -O ./pretrained/network.pkl !ls -latr ./pretrained !mkdir animations !apt-get install imagemagick %cd /content/ %tensorflow_version 1.x import tensorflow as tf # Download the code !git clone https://github.com/EvgenyKashin/stylegan2 %cd /content/stylegan2 !nvcc test_nvcc.cu -o test_nvcc -run print('Tensorflow version: {}'.format(tf.__version__) ) !nvidia-smi -L print('GPU Identified at: {}'.format(tf.test.gpu_device_name())) %cd /content/stylegan2/ import ipywidgets as widgets import pretrained_networks import PIL.Image import numpy as np import dnnlib import dnnlib.tflib as tflib network_pkl = '/content/pretrained/network.pkl' _G, _D, Gs = pretrained_networks.load_networks(network_pkl) Gs_syn_kwargs = dnnlib.EasyDict() batch_size = 1 Gs_syn_kwargs.output_transform = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True) Gs_syn_kwargs.randomize_noise = True Gs_syn_kwargs.minibatch_size = batch_size def display_sample_conditional(cat, dog, wild, seed, truncation, return_img=False): batch_size = 1 l1 = np.zeros((1,3)) l1[0][0] = cat l1[0][1] = dog l1[0][2] = wild all_seeds = [seed] * batch_size all_z = np.stack([np.random.RandomState(seed).randn(*Gs.input_shape[1:]) for seed in all_seeds]) all_w = Gs.components.mapping.run(all_z, np.tile(l1, (batch_size, 1))) # [minibatch, layer, component] if truncation != 1: w_avg = Gs.get_var('dlatent_avg') all_w = w_avg + (all_w - w_avg) * truncation # [minibatch, layer, component] all_images = Gs.components.synthesis.run(all_w, **Gs_syn_kwargs) if return_img: return PIL.Image.fromarray(np.median(all_images, axis=0).astype(np.uint8)) else: display(PIL.Image.fromarray(np.median(all_images, axis=0).astype(np.uint8))) ``` ## Conditional generation of animals ``` animal = widgets.Dropdown( options=[('Cat', 0), ('Dog', 1), ('Wild', 2)], value=0, description='Animal: ' ) seed = widgets.IntSlider(min=0, max=100000, step=1, value=0, description='Seed: ') truncation = widgets.FloatSlider(min=0, max=1, step=0.1, value=1, description='Truncation: ') top_box = widgets.HBox([animal]) bot_box = widgets.HBox([seed, truncation]) ui = widgets.VBox([top_box, bot_box]) def display_animal(animal, seed, truncation): cat = (animal == 0) dog = (animal == 1) wild = (animal == 2) display_sample_conditional(cat, dog, wild, seed, truncation) out = widgets.interactive_output(display_animal, {'animal': animal, 'seed': seed, 'truncation': truncation}) display(ui, out) ``` ## Mixed generation of animal ``` cat = widgets.FloatSlider(min=0, max=1, step=0.05, value=1, description='Cat: ') dog = widgets.FloatSlider(min=0, max=1, step=0.05, value=0, description='Dog: ') wild = widgets.FloatSlider(min=0, max=1, step=0.05, value=0, description='Wild: ') top_box = widgets.HBox([cat, dog, wild]) bot_box = widgets.HBox([seed, truncation]) ui = widgets.VBox([top_box, bot_box]) out = widgets.interactive_output(display_sample_conditional, {'cat': cat, 'dog': dog, 'wild': wild, 'seed': seed, 'truncation': truncation}) display(ui, out) ``` ## Transition between labels ``` direction = widgets.Dropdown( options=['cat2wild', 'cat2dog', 'dog2wild'], value='cat2wild', description='Animal: ' ) value = 
widgets.FloatSlider(min=0, max=1, step=0.05, value=1, description='Value: ') top_box = widgets.HBox([direction, value]) bot_box = widgets.HBox([seed, truncation]) ui = widgets.VBox([top_box, bot_box]) def display_transition(direction, value, truncation, seed, return_img=False): if direction == 'cat2wild': wild = value cat = 1 - value dog = 0 elif direction == 'cat2dog': dog = value cat = 1 - value wild = 0 elif direction == 'dog2wild': wild = value dog = 1 - value cat = 0 else: raise ValueError('Wrong direction value') if return_img: return display_sample_conditional(cat, dog, wild, seed, truncation, return_img) else: display_sample_conditional(cat, dog, wild, seed, truncation, return_img) out = widgets.interactive_output(display_transition, {'direction': direction, 'value': value, 'seed': seed, 'truncation': truncation}) display(ui, out) ``` ## Save images for animation with imagemagick and ffmpeg ``` output_img_dir = '/content/animations/' + str(seed.value) + '-' + str(truncation.value) !mkdir "$output_img_dir" imgs = [display_transition(direction.value, i, truncation.value, seed.value, return_img=True) for i in np.linspace(0, 1, 31)] for i, im in enumerate(imgs): im.save(f"{output_img_dir}/{i:03}.jpg") ``` Generate gif animation ``` output_gif_file = '/content/animations/' + str(seed.value) + '-' + str(truncation.value) + '-animation.gif' !convert -delay 10 -layers optimize "$output_img_dir/*.jpg" "$output_gif_file" ``` Generate video animation ``` output_vid_file = '/content/animations/' + str(seed.value) + '-' + str(truncation.value) + '-video.mp4' !ffmpeg -f image2 -framerate 8 -i "$output_img_dir/%03d.jpg" -b:v 8192k -r 30 -y -c:v libx264 "$output_vid_file" print("Your gif is available at " + output_gif_file) print("Your video is available at " + output_vid_file) ```
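If imagemagick's `convert` is not available in the runtime, the same GIF could likely be written directly with Pillow from the `imgs` list created above; this is only an alternative sketch, and the output path is illustrative:

```
# assumes `imgs` is the list of PIL images generated in the cell above
gif_path = '/content/animations/animation_pillow.gif'
imgs[0].save(gif_path, save_all=True, append_images=imgs[1:], duration=100, loop=0)
print("Your gif is available at " + gif_path)
```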
github_jupyter
%cd /content/ !mkdir pretrained !wget 'https://github.com/EvgenyKashin/stylegan2/releases/download/v1.0.0/network-snapshot-005532.pkl' -O ./pretrained/network.pkl !ls -latr ./pretrained !mkdir animations !apt-get install imagemagick %cd /content/ %tensorflow_version 1.x import tensorflow as tf # Download the code !git clone https://github.com/EvgenyKashin/stylegan2 %cd /content/stylegan2 !nvcc test_nvcc.cu -o test_nvcc -run print('Tensorflow version: {}'.format(tf.__version__) ) !nvidia-smi -L print('GPU Identified at: {}'.format(tf.test.gpu_device_name())) %cd /content/stylegan2/ import ipywidgets as widgets import pretrained_networks import PIL.Image import numpy as np import dnnlib import dnnlib.tflib as tflib network_pkl = '/content/pretrained/network.pkl' _G, _D, Gs = pretrained_networks.load_networks(network_pkl) Gs_syn_kwargs = dnnlib.EasyDict() batch_size = 1 Gs_syn_kwargs.output_transform = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True) Gs_syn_kwargs.randomize_noise = True Gs_syn_kwargs.minibatch_size = batch_size def display_sample_conditional(cat, dog, wild, seed, truncation, return_img=False): batch_size = 1 l1 = np.zeros((1,3)) l1[0][0] = cat l1[0][1] = dog l1[0][2] = wild all_seeds = [seed] * batch_size all_z = np.stack([np.random.RandomState(seed).randn(*Gs.input_shape[1:]) for seed in all_seeds]) all_w = Gs.components.mapping.run(all_z, np.tile(l1, (batch_size, 1))) # [minibatch, layer, component] if truncation != 1: w_avg = Gs.get_var('dlatent_avg') all_w = w_avg + (all_w - w_avg) * truncation # [minibatch, layer, component] all_images = Gs.components.synthesis.run(all_w, **Gs_syn_kwargs) if return_img: return PIL.Image.fromarray(np.median(all_images, axis=0).astype(np.uint8)) else: display(PIL.Image.fromarray(np.median(all_images, axis=0).astype(np.uint8))) animal = widgets.Dropdown( options=[('Cat', 0), ('Dog', 1), ('Wild', 2)], value=0, description='Animal: ' ) seed = widgets.IntSlider(min=0, max=100000, step=1, value=0, description='Seed: ') truncation = widgets.FloatSlider(min=0, max=1, step=0.1, value=1, description='Truncation: ') top_box = widgets.HBox([animal]) bot_box = widgets.HBox([seed, truncation]) ui = widgets.VBox([top_box, bot_box]) def display_animal(animal, seed, truncation): cat = (animal == 0) dog = (animal == 1) wild = (animal == 2) display_sample_conditional(cat, dog, wild, seed, truncation) out = widgets.interactive_output(display_animal, {'animal': animal, 'seed': seed, 'truncation': truncation}) display(ui, out) cat = widgets.FloatSlider(min=0, max=1, step=0.05, value=1, description='Cat: ') dog = widgets.FloatSlider(min=0, max=1, step=0.05, value=0, description='Dog: ') wild = widgets.FloatSlider(min=0, max=1, step=0.05, value=0, description='Wild: ') top_box = widgets.HBox([cat, dog, wild]) bot_box = widgets.HBox([seed, truncation]) ui = widgets.VBox([top_box, bot_box]) out = widgets.interactive_output(display_sample_conditional, {'cat': cat, 'dog': dog, 'wild': wild, 'seed': seed, 'truncation': truncation}) display(ui, out) direction = widgets.Dropdown( options=['cat2wild', 'cat2dog', 'dog2wild'], value='cat2wild', description='Animal: ' ) value = widgets.FloatSlider(min=0, max=1, step=0.05, value=1, description='Value: ') top_box = widgets.HBox([direction, value]) bot_box = widgets.HBox([seed, truncation]) ui = widgets.VBox([top_box, bot_box]) def display_transition(direction, value, truncation, seed, return_img=False): if direction == 'cat2wild': wild = value cat = 1 - value dog = 0 elif direction == 'cat2dog': dog = value cat = 
1 - value wild = 0 elif direction == 'dog2wild': wild = value dog = 1 - value cat = 0 else: raise ValueError('Wrong direction value') if return_img: return display_sample_conditional(cat, dog, wild, seed, truncation, return_img) else: display_sample_conditional(cat, dog, wild, seed, truncation, return_img) out = widgets.interactive_output(display_transition, {'direction': direction, 'value': value, 'seed': seed, 'truncation': truncation}) display(ui, out) output_img_dir = '/content/animations/' + str(seed.value) + '-' + str(truncation.value) !mkdir "$output_img_dir" imgs = [display_transition(direction.value, i, truncation.value, seed.value, return_img=True) for i in np.linspace(0, 1, 31)] for i, im in enumerate(imgs): im.save(f"{output_img_dir}/{i:03}.jpg") output_gif_file = '/content/animations/' + str(seed.value) + '-' + str(truncation.value) + '-animation.gif' !convert -delay 10 -layers optimize "$output_img_dir/*.jpg" "$output_gif_file" output_vid_file = '/content/animations/' + str(seed.value) + '-' + str(truncation.value) + '-video.mp4' !ffmpeg -f image2 -framerate 8 -i "$output_img_dir/%03d.jpg" -b:v 8192k -r 30 -y -c:v libx264 "$output_vid_file" print("Your gif is available at " + output_gif_file) print("Your video is available at " + output_vid_file)
0.333395
0.800458
# Sampling RDDs

So far we have introduced RDD creation together with some basic transformations such as `map` and `filter` and some actions such as `count`, `take`, and `collect`. This notebook will show how to sample RDDs. Regarding transformations, `sample` will be introduced since it will be useful in many statistical learning scenarios. Then we will compare results with the `takeSample` action.

```
from __future__ import print_function
import sys

if sys.version_info[0] == 3:
    xrange = range
```

## Getting the data and creating the RDD

```
data_file = "/KDD/kddcup.data_10_percent.gz"
raw_data = sc.textFile(data_file)
```

## Sampling RDDs

In Spark, there are two sampling operations, the transformation `sample` and the action `takeSample`. By using a transformation we can tell Spark to apply successive transformations on a sample of a given RDD. By using an action we retrieve a given sample and we can have it in local memory to be used by any other standard library (e.g. Scikit-learn).

### The `sample` transformation

The `sample` transformation takes up to three parameters. First is whether the sampling is done with replacement or not. Second is the sample size as a fraction. Finally, we can optionally provide a *random seed*.

```
raw_data_sample = raw_data.sample(False, 0.1, 1234)
sample_size = raw_data_sample.count()
total_size = raw_data.count()
print("Sample size is {} of {}".format(sample_size, total_size))
```

But the power of sampling as a transformation comes from doing it as part of a sequence of additional transformations. This will prove more powerful once we start doing aggregations and key-value pair operations, and will be especially useful when using Spark's machine learning library MLlib. In the meantime, imagine we want an approximation of the proportion of `normal.` interactions in our dataset. We could do this by counting the total number of tags as we did in previous notebooks. However, we want a quicker response and we don't need the exact answer, just an approximation. We can do it as follows.

```
from time import time

# transformations to be applied
raw_data_sample_items = raw_data_sample.map(lambda x: x.split(","))
sample_normal_tags = raw_data_sample_items.filter(lambda x: "normal." in x)

# actions + time
t0 = time()
sample_normal_tags_count = sample_normal_tags.count()
tt = time() - t0

sample_normal_ratio = sample_normal_tags_count / float(sample_size)
print("The ratio of 'normal' interactions is {}".format(round(sample_normal_ratio,3)))
print("Count done in {} seconds".format(round(tt,3)))
```

Let's compare this with calculating the ratio without sampling.

```
# transformations to be applied
raw_data_items = raw_data.map(lambda x: x.split(","))
normal_tags = raw_data_items.filter(lambda x: "normal." in x)

# actions + time
t0 = time()
normal_tags_count = normal_tags.count()
tt = time() - t0

normal_ratio = normal_tags_count / float(total_size)
print("The ratio of 'normal' interactions is {}".format(round(normal_ratio,3)))
print("Count done in {} seconds".format(round(tt,3)))
```

We can see a gain in time. The more transformations we apply after the sampling, the bigger this gain. This is because without sampling all the transformations are applied to the complete set of data.

### The `takeSample` action

If what we need is to grab a sample of raw data from our RDD into local memory in order to be used by other non-Spark libraries, `takeSample` can be used. The syntax is very similar, but in this case we specify the number of items instead of the sample size as a fraction of the complete data size.

```
t0 = time()
raw_data_sample = raw_data.takeSample(False, 400000, 1234)
normal_data_sample = [x.split(",") for x in raw_data_sample if "normal." in x]
tt = time() - t0

normal_sample_size = len(normal_data_sample)
normal_ratio = normal_sample_size / 400000.0
print("The ratio of 'normal' interactions is {}".format(normal_ratio))
print("Count done in {} seconds".format(round(tt,3)))
```

The process was very similar to before. We obtained a sample of 400,000 records and then filtered and split it. However, it took longer. The reason is that Spark only distributed the execution of the sampling process; the filtering and splitting of the results were done locally on a single node.
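The same distinction can be seen on a toy RDD; a minimal sketch (assuming an active `SparkContext` named `sc`, as in the cells above):

```
toy_rdd = sc.parallelize(range(100))

# transformation: the result is still a distributed RDD, so later steps stay in Spark
toy_sample_rdd = toy_rdd.sample(False, 0.1, 1234)
print(toy_sample_rdd.count())

# action: the result is a plain Python list pulled into local memory
toy_sample_list = toy_rdd.takeSample(False, 10, 1234)
print(len(toy_sample_list), type(toy_sample_list))
```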
github_jupyter
from __future__ import print_function import sys if sys.version[0] == 3: xrange = range data_file = "/KDD/kddcup.data_10_percent.gz" raw_data = sc.textFile(data_file) raw_data_sample = raw_data.sample(False, 0.1, 1234) sample_size = raw_data_sample.count() total_size = raw_data.count() print("Sample size is {} of {}".format(sample_size, total_size)) from time import time # transformations to be applied raw_data_sample_items = raw_data_sample.map(lambda x: x.split(",")) sample_normal_tags = raw_data_sample_items.filter(lambda x: "normal." in x) # actions + time t0 = time() sample_normal_tags_count = sample_normal_tags.count() tt = time() - t0 sample_normal_ratio = sample_normal_tags_count / float(sample_size) print("The ratio of 'normal' interactions is {}".format(round(sample_normal_ratio,3))) print("Count done in {} seconds".format(round(tt,3))) # transformations to be applied raw_data_items = raw_data.map(lambda x: x.split(",")) normal_tags = raw_data_items.filter(lambda x: "normal." in x) # actions + time t0 = time() normal_tags_count = normal_tags.count() tt = time() - t0 normal_ratio = normal_tags_count / float(total_size) print("The ratio of 'normal' interactions is {}".format(round(normal_ratio,3))) print("Count done in {} seconds".format(round(tt,3))) t0 = time() raw_data_sample = raw_data.takeSample(False, 400000, 1234) normal_data_sample = [x.split(",") for x in raw_data_sample if "normal." in x] tt = time() - t0 normal_sample_size = len(normal_data_sample) normal_ratio = normal_sample_size / 400000.0 print("The ratio of 'normal' interactions is {}".format(normal_ratio)) print("Count done in {} seconds".format(round(tt,3)))
0.358016
0.991456
<div> <img src="attachment:qgssqml2021wordmark.png"/> </div> In this lab, you will see how noise affects a typical parameterized quantum circuit used in machine learning using quantum process tomography. <div class="alert alert-danger" role="alert"> For grading purposes, please specify all simulator arguments (<i>noise_model=noise_thermal, seed_simulator=3145, seed_transpiler=3145, shots=8192</i>) in the <b><i>execute</i></b> function. </div> ``` # General tools import numpy as np import matplotlib.pyplot as plt # Qiskit Circuit Functions from qiskit import execute,QuantumCircuit, QuantumRegister, ClassicalRegister, Aer, transpile import qiskit.quantum_info as qi # Tomography functions from qiskit.ignis.verification.tomography import process_tomography_circuits, ProcessTomographyFitter from qiskit.ignis.mitigation.measurement import complete_meas_cal, CompleteMeasFitter import warnings warnings.filterwarnings('ignore') ``` ### Question 1 - Make this Quantum Circuit <div> <img src="attachment:lab5ex1.png"/> </div> ``` target = QuantumCircuit(2) target.h(0) target.h(1) target.rx(np.pi/2., 0) target.rx(np.pi/2., 1) target.cx(0,1) target.p(np.pi, 1) target.cx(0,1) # target.draw() target_unitary = qi.Operator(target) from qc_grader import grade_lab5_ex1 # Note that the grading function is expecting a quantum circuit with no measurements grade_lab5_ex1(target) ``` # Quantum Process Tomography with Only Shot Noise Here we will now use the `qasm_simulator` to simulate a Quantum Process Tomography Circuit ### Question 2a - Using the Process Tomography Circuits function built into qiskit, create the set of circuits to do quantum process tomography and simulation with a qasm simulator (with shot noise only). For this please use the execute function of the QPT Circuits with `seed_simulator=3145`, `seed_transpiler=3145` and `shots=8192`. - _Hint: The appropriate function, <a href="https://qiskit.org/documentation/stubs/qiskit.ignis.verification.process_tomography_circuits.html">process_tomography_circuits</a>, has been imported above. When complete you should have a total of 144 circuits that are given to the `qasm_simulator` via the `execute` function. You can find out the number of circuits created using `len(qpt_circs)`._ ``` simulator = Aer.get_backend('qasm_simulator') qpt_circs = process_tomography_circuits(target, measured_qubits=[0,1]) qpt_job = execute(qpt_circs,simulator,seed_simulator=3145,seed_transpiler=3145,shots=8192) qpt_result = qpt_job.result() qpt_result.get_counts() len(qpt_circs) ``` ### Question 2b - Using a least squares fitting method for the Process Tomography Fitter, determine the fidelity of your target unitary - _Hint: First use the <a href="https://qiskit.org/documentation/stubs/qiskit.ignis.verification.ProcessTomographyFitter.html">ProcessTomographyFitter</a> function above to process the results from question 2a and use ProcessTomographyFitter.fit(method='....') to extract the "Choi Matrix", which effectively describes the measured unitary operation. 
From here you will use the <a href="https://qiskit.org/documentation/stubs/qiskit.quantum_info.average_gate_fidelity.html#qiskit.quantum_info.average_gate_fidelity">average_gate_fidelity</a> function from the quantum information module to extract the achieved fidelity of your results_ ``` qpt_tomo = ProcessTomographyFitter(qpt_result, qpt_circs) qpt_lstsq = qpt_tomo.fit(method="lstsq") fidelity = qi.average_gate_fidelity(qpt_lstsq, target_unitary) print(fidelity) from qc_grader import grade_lab5_ex2 # Note that the grading function is expecting a floating point number grade_lab5_ex2(fidelity) ``` # Quantum Process Tomography with a T1/T2 Noise Model For the sake of consistency, let's set some values to characterize the duration of our gates and T1/T2 times: ``` # T1 and T2 values for qubits 0-3 T1s = [15000, 19000, 22000, 14000] T2s = [30000, 25000, 18000, 28000] # Instruction times (in nanoseconds) time_u1 = 0 # virtual gate time_u2 = 50 # (single X90 pulse) time_u3 = 100 # (two X90 pulses) time_cx = 300 time_reset = 1000 # 1 microsecond time_measure = 1000 # 1 microsecond from qiskit.providers.aer.noise import thermal_relaxation_error from qiskit.providers.aer.noise import NoiseModel ``` ### Question 3 - Using the Thermal Relaxation Error model built into qiskit, define `u1`,`u2`,`u3`, `cx`, `measurement` and `reset` errors using the values for qubits 0-3 defined above, and build a thermal noise model. - _Hint: The Qiskit tutorial on <a href="https://github.com/Qiskit/qiskit-tutorials/blob/master/tutorials/simulators/3_building_noise_models.ipynb">building noise models</a> will prove to be useful, particularly where they add quantum errors for `u1`,`u2`,`u3`,`cx`, `reset`, and `measurement` errors (please include all of these)._ ``` # QuantumError objects errors_reset = [thermal_relaxation_error(t1, t2, time_reset) for t1, t2 in zip(T1s, T2s)] errors_measure = [thermal_relaxation_error(t1, t2, time_measure) for t1, t2 in zip(T1s, T2s)] errors_u1 = [thermal_relaxation_error(t1, t2, time_u1) for t1, t2 in zip(T1s, T2s)] errors_u2 = [thermal_relaxation_error(t1, t2, time_u2) for t1, t2 in zip(T1s, T2s)] errors_u3 = [thermal_relaxation_error(t1, t2, time_u3) for t1, t2 in zip(T1s, T2s)] errors_cx = [[thermal_relaxation_error(t1a, t2a, time_cx).expand( thermal_relaxation_error(t1b, t2b, time_cx)) for t1a, t2a in zip(T1s, T2s)] for t1b, t2b in zip(T1s, T2s)] # Add errors to noise model noise_thermal = NoiseModel() for j in range(4): noise_thermal.add_quantum_error(errors_u1[j], "u1", [j]) noise_thermal.add_quantum_error(errors_u2[j], "u2", [j]) noise_thermal.add_quantum_error(errors_u3[j], "u3", [j]) noise_thermal.add_quantum_error(errors_reset[j], "reset", [j]) noise_thermal.add_quantum_error(errors_measure[j], "measure", [j]) for k in range(4): noise_thermal.add_quantum_error(errors_cx[j][k], "cx", [j,k]) print(noise_thermal) from qc_grader import grade_lab5_ex3 # Note that the grading function is expecting a NoiseModel grade_lab5_ex3(noise_thermal) ``` ### Question 4. - Get a QPT fidelity using the noise model,but without using any error mitigation techniques. 
Again, use `seed_simulator=3145`, `seed_transpiler=3145` and `shots=8192` for the `execute` function - _Hint: The process here should be very similar to that in question 2a/b, except you will need to ensure you include the noise model from question 3 in the `execute` function_ ``` np.random.seed(0) noisy_qpt_job = execute(qpt_circs,simulator,seed_simulator=3145,seed_transpiler=3145,shots=8192,noise_model=noise_thermal) noisy_qpt_result = noisy_qpt_job.result() noisy_qpt_result.get_counts() noisy_qpt_tomo = ProcessTomographyFitter(noisy_qpt_result, qpt_circs) noisy_qpt_lstsq = noisy_qpt_tomo.fit(method="lstsq") fidelity = qi.average_gate_fidelity(noisy_qpt_lstsq, target_unitary) print(fidelity) from qc_grader import grade_lab5_ex4 # Note that the grading function is expecting a floating point number grade_lab5_ex4(fidelity) ``` ### Question 5. - Use the `complete_meas_cal` function built into qiskit and apply to the QPT results in the previous question. For both, use the `execute` function and `seed_simulator=3145`, `seed_transpiler=3145` and `shots=8192`. Also include the noise model from question 3 in the `execute` function. - *Hint: The Qiskit textbook has a very good chapter on <a href="https://qiskit.org/textbook/ch-quantum-hardware/measurement-error-mitigation.html">`readout error mitigation`</a>. Specifically, you will want to use the <a href="https://qiskit.org/documentation/stubs/qiskit.ignis.mitigation.complete_meas_cal.html">`complete_meas_cal`</a> function to generate the desired set of circuits to create the calibration matrix with <a href="https://qiskit.org/documentation/stubs/qiskit.ignis.mitigation.CompleteMeasFitter.html">`CompleteMeasureFitter`</a> function. This can then be used to generate a correction matrix <a href="https://qiskit.org/documentation/stubs/qiskit.ignis.mitigation.CompleteMeasFitter.html#qiskit.ignis.mitigation.CompleteMeasFitter.filter">`meas_filter`</a>. Apply this function to the results from question 4.* ``` np.random.seed(0) meas_cal_circs,state_labels = complete_meas_cal(qubit_list=[0,1]) meas_cal_job = execute(meas_cal_circs,simulator,seed_simulator=3145,seed_transpiler=3145,shots=8192,noise_model=noise_thermal) meas_cal_result = meas_cal_job.result() meas_fitter = CompleteMeasFitter(meas_cal_result,state_labels) meas_filter = meas_fitter.filter mit_noisy_qpt_result = meas_filter.apply(noisy_qpt_result) mit_noisy_qpt_tomo = ProcessTomographyFitter(mit_noisy_qpt_result, qpt_circs) mit_noisy_qpt_lstsq = mit_noisy_qpt_tomo.fit(method="lstsq") fidelity = qi.average_gate_fidelity(mit_noisy_qpt_lstsq, target_unitary) print(fidelity) from qc_grader import grade_lab5_ex5 # Note that the grading function is expecting a floating point number grade_lab5_ex5(fidelity) ``` ### Exploratory Question 6. - Test how the gate fidelity depends on the CX duration by running noise models with varying cx durations (but leaving everything else fixed). (Note: this would ideally be done using the scaling technique discussed in the previous lecture, but due to backend availability limitations we are instead demonstrating the effect by adjusting duration of the CX itself. This is not exactly how this is implemented on the hardware itself as the gates are not full CX gates.)
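Question 6 is left open above. One possible approach, sketched below under the assumption that `target_unitary`, `qpt_circs`, `simulator`, and the single-qubit/measure/reset error lists from Question 3 are still in scope, is to rebuild only the CX errors for a range of hypothetical durations and re-run the noisy QPT each time:

```
cx_durations = [100, 300, 600, 1200]  # hypothetical CX durations in ns
fidelities = []
for t_cx in cx_durations:
    # rebuild only the two-qubit thermal relaxation errors for this duration
    errors_cx = [[thermal_relaxation_error(t1a, t2a, t_cx).expand(
                      thermal_relaxation_error(t1b, t2b, t_cx))
                  for t1a, t2a in zip(T1s, T2s)] for t1b, t2b in zip(T1s, T2s)]
    noise_cx = NoiseModel()
    for j in range(4):
        noise_cx.add_quantum_error(errors_u1[j], "u1", [j])
        noise_cx.add_quantum_error(errors_u2[j], "u2", [j])
        noise_cx.add_quantum_error(errors_u3[j], "u3", [j])
        noise_cx.add_quantum_error(errors_reset[j], "reset", [j])
        noise_cx.add_quantum_error(errors_measure[j], "measure", [j])
        for k in range(4):
            noise_cx.add_quantum_error(errors_cx[j][k], "cx", [j, k])
    result = execute(qpt_circs, simulator, noise_model=noise_cx,
                     seed_simulator=3145, seed_transpiler=3145, shots=8192).result()
    choi = ProcessTomographyFitter(result, qpt_circs).fit(method="lstsq")
    fidelities.append(qi.average_gate_fidelity(choi, target_unitary))

plt.plot(cx_durations, fidelities, 'o-')
plt.xlabel('CX duration (ns)')
plt.ylabel('average gate fidelity')
```

Plotting fidelity against duration should show the fidelity decreasing as the CX is stretched relative to the T1/T2 times.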
github_jupyter
# General tools import numpy as np import matplotlib.pyplot as plt # Qiskit Circuit Functions from qiskit import execute,QuantumCircuit, QuantumRegister, ClassicalRegister, Aer, transpile import qiskit.quantum_info as qi # Tomography functions from qiskit.ignis.verification.tomography import process_tomography_circuits, ProcessTomographyFitter from qiskit.ignis.mitigation.measurement import complete_meas_cal, CompleteMeasFitter import warnings warnings.filterwarnings('ignore') target = QuantumCircuit(2) target.h(0) target.h(1) target.rx(np.pi/2., 0) target.rx(np.pi/2., 1) target.cx(0,1) target.p(np.pi, 1) target.cx(0,1) # target.draw() target_unitary = qi.Operator(target) from qc_grader import grade_lab5_ex1 # Note that the grading function is expecting a quantum circuit with no measurements grade_lab5_ex1(target) simulator = Aer.get_backend('qasm_simulator') qpt_circs = process_tomography_circuits(target, measured_qubits=[0,1]) qpt_job = execute(qpt_circs,simulator,seed_simulator=3145,seed_transpiler=3145,shots=8192) qpt_result = qpt_job.result() qpt_result.get_counts() len(qpt_circs) qpt_tomo = ProcessTomographyFitter(qpt_result, qpt_circs) qpt_lstsq = qpt_tomo.fit(method="lstsq") fidelity = qi.average_gate_fidelity(qpt_lstsq, target_unitary) print(fidelity) from qc_grader import grade_lab5_ex2 # Note that the grading function is expecting a floating point number grade_lab5_ex2(fidelity) # T1 and T2 values for qubits 0-3 T1s = [15000, 19000, 22000, 14000] T2s = [30000, 25000, 18000, 28000] # Instruction times (in nanoseconds) time_u1 = 0 # virtual gate time_u2 = 50 # (single X90 pulse) time_u3 = 100 # (two X90 pulses) time_cx = 300 time_reset = 1000 # 1 microsecond time_measure = 1000 # 1 microsecond from qiskit.providers.aer.noise import thermal_relaxation_error from qiskit.providers.aer.noise import NoiseModel # QuantumError objects errors_reset = [thermal_relaxation_error(t1, t2, time_reset) for t1, t2 in zip(T1s, T2s)] errors_measure = [thermal_relaxation_error(t1, t2, time_measure) for t1, t2 in zip(T1s, T2s)] errors_u1 = [thermal_relaxation_error(t1, t2, time_u1) for t1, t2 in zip(T1s, T2s)] errors_u2 = [thermal_relaxation_error(t1, t2, time_u2) for t1, t2 in zip(T1s, T2s)] errors_u3 = [thermal_relaxation_error(t1, t2, time_u3) for t1, t2 in zip(T1s, T2s)] errors_cx = [[thermal_relaxation_error(t1a, t2a, time_cx).expand( thermal_relaxation_error(t1b, t2b, time_cx)) for t1a, t2a in zip(T1s, T2s)] for t1b, t2b in zip(T1s, T2s)] # Add errors to noise model noise_thermal = NoiseModel() for j in range(4): noise_thermal.add_quantum_error(errors_u1[j], "u1", [j]) noise_thermal.add_quantum_error(errors_u2[j], "u2", [j]) noise_thermal.add_quantum_error(errors_u3[j], "u3", [j]) noise_thermal.add_quantum_error(errors_reset[j], "reset", [j]) noise_thermal.add_quantum_error(errors_measure[j], "measure", [j]) for k in range(4): noise_thermal.add_quantum_error(errors_cx[j][k], "cx", [j,k]) print(noise_thermal) from qc_grader import grade_lab5_ex3 # Note that the grading function is expecting a NoiseModel grade_lab5_ex3(noise_thermal) np.random.seed(0) noisy_qpt_job = execute(qpt_circs,simulator,seed_simulator=3145,seed_transpiler=3145,shots=8192,noise_model=noise_thermal) noisy_qpt_result = noisy_qpt_job.result() noisy_qpt_result.get_counts() noisy_qpt_tomo = ProcessTomographyFitter(noisy_qpt_result, qpt_circs) noisy_qpt_lstsq = noisy_qpt_tomo.fit(method="lstsq") fidelity = qi.average_gate_fidelity(noisy_qpt_lstsq, target_unitary) print(fidelity) from qc_grader import grade_lab5_ex4 # Note that 
the grading function is expecting a floating point number grade_lab5_ex4(fidelity) np.random.seed(0) meas_cal_circs,state_labels = complete_meas_cal(qubit_list=[0,1]) meas_cal_job = execute(meas_cal_circs,simulator,seed_simulator=3145,seed_transpiler=3145,shots=8192,noise_model=noise_thermal) meas_cal_result = meas_cal_job.result() meas_fitter = CompleteMeasFitter(meas_cal_result,state_labels) meas_filter = meas_fitter.filter mit_noisy_qpt_result = meas_filter.apply(noisy_qpt_result) mit_noisy_qpt_tomo = ProcessTomographyFitter(mit_noisy_qpt_result, qpt_circs) mit_noisy_qpt_lstsq = mit_noisy_qpt_tomo.fit(method="lstsq") fidelity = qi.average_gate_fidelity(mit_noisy_qpt_lstsq, target_unitary) print(fidelity) from qc_grader import grade_lab5_ex5 # Note that the grading function is expecting a floating point number grade_lab5_ex5(fidelity)
0.581422
0.990006
# The CtiOSDb Library

First we import the libraries:

```
import os
import sys

# The modules live in a different directory, so that directory has to be added to sys.path
module_path = os.path.abspath(os.path.join('../../'))
if module_path not in sys.path:
    sys.path.append(module_path)

from pywsdp.modules import CtiOS
from pywsdp import OutputFormat
```

### Processing the whole OPSUB table from the VFK database

Let's look at the data in the VFK database. So that we do not lose the original, unfilled data, we make a copy of the database on the side and edit that copy.

First we are interested in processing the whole OPSUB table, i.e. filling in the details for every row of this table. As we can see, the personal data of the entitled subjects are not filled in yet:

```
import sqlite3
from shutil import copyfile

db_path = os.path.join('../', module_path, 'data', 'input', 'ctios_template.db')
output_path = copyfile(db_path, os.path.join('../', module_path, 'data', 'output', 'ctios_template.db'))

con = sqlite3.connect(output_path)
cur = con.cursor()
for row in cur.execute("SELECT * FROM OPSUB"):
    print(row)
con.close()
```

We initialize the library and load the data from the database into it:

```
ctiOS = CtiOS()
ctiOS.nacti_identifikatory_z_databaze(output_path)

print(ctiOS.ctios)
print(ctiOS.ctios.parameters)
print(ctiOS.ctios.xml_attrs)
print(ctiOS.ctios.credentials)
print(ctiOS.ctios.services_dir)
print(ctiOS.ctios.service_dir)
print(ctiOS.ctios.service_name)
print(ctiOS.ctios.service_group)
print(ctiOS.ctios.config_path)
print(ctiOS.ctios.template_path)
print(ctiOS.ctios.service_headers)
```

Now we can process the pseudonymized identifiers, which updates the database:

```
vysledek = ctiOS.zpracuj_identifikatory()
print(vysledek)
```

We can save the output to a database that has the same structure as the input database:

```
ctiOS.uloz_vystup(vysledek, output_path , OutputFormat.GdalDb)
```

When the request is sent to the service, the WSDP service may detect errors in the input file: an invalid identifier, an expired identifier, or an entitled subject that does not exist. As we can see, at the end of the processing a simple summary of the processed subjects is printed, including the error statistics. This summary should be checked after every run.

Now we can look at the filled-in database:

```
con = sqlite3.connect(output_path)
cur = con.cursor()
for row in cur.execute("SELECT * FROM OPSUB"):
    print(row)
con.close()
ctiOS.uloz_vystup(vysledek, output_path , OutputFormat.GdalDb)
```

Great! The database was successfully updated. :-)

We can also export the personal data to a CSV file:

```
vystupni_soubor = os.path.join('../', module_path, 'data', 'output', 'ctios.csv')
ctiOS.uloz_vystup(vysledek, vystupni_soubor , OutputFormat.Csv)
```

Or we can export to a JSON file instead. It depends on your preference.

```
vystupni_soubor = os.path.join('../', module_path, 'data', 'output', 'ctios.json')
ctiOS.uloz_vystup(vysledek, vystupni_soubor, OutputFormat.Json)
```

### Processing selected pseudo-identifiers from the OPSUB table

If we do not need to process the whole table, we can select only some of the pseudonymized identifiers. When reading the identifiers from the database, we use the "sql" parameter. In this example we are interested only in the first ten identifiers:

```
ctiOS = CtiOS()

output_path = copyfile(db_path, os.path.join('../', module_path, 'data', 'output', 'ctios_template2.db'))
sql = "SELECT * FROM OPSUB order by ID LIMIT 10"
ctiOS.nacti_identifikatory_z_databaze(output_path, sql)

print(ctiOS.ctios)
print(ctiOS.ctios.parameters)
print(ctiOS.ctios.xml_attrs)
print(ctiOS.ctios.credentials)
print(ctiOS.ctios.services_dir)
print(ctiOS.ctios.service_dir)
print(ctiOS.ctios.service_name)
print(ctiOS.ctios.service_group)
print(ctiOS.ctios.config_path)
print(ctiOS.ctios.template_path)
print(ctiOS.ctios.service_headers)

vysledek = ctiOS.zpracuj_identifikatory()
print(vysledek)
```

We save the result to the database:

```
ctiOS.uloz_vystup(vysledek, output_path , OutputFormat.GdalDb)
```

As in the previous example, the resulting data dictionary can also be exported to a JSON or CSV file.
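For completeness, a small follow-up sketch using only the calls already shown above, exporting the filtered result to CSV and JSON as well (the output file names are illustrative):

```
# export the ten-identifier result the same way as in the first example
vystupni_csv = os.path.join('../', module_path, 'data', 'output', 'ctios_subset.csv')
ctiOS.uloz_vystup(vysledek, vystupni_csv, OutputFormat.Csv)

vystupni_json = os.path.join('../', module_path, 'data', 'output', 'ctios_subset.json')
ctiOS.uloz_vystup(vysledek, vystupni_json, OutputFormat.Json)
```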
github_jupyter
import os import sys # Moduly jsou v jinem adresari, je tedy nutne tento adresar pridat do sys.path module_path = os.path.abspath(os.path.join('../../')) if module_path not in sys.path: sys.path.append(module_path) from pywsdp.modules import CtiOS from pywsdp import OutputFormat import sqlite3 from shutil import copyfile db_path = os.path.join('../', module_path, 'data', 'input', 'ctios_template.db') output_path = copyfile(db_path, os.path.join('../', module_path, 'data', 'output', 'ctios_template.db')) con = sqlite3.connect(output_path) cur = con.cursor() for row in cur.execute("SELECT * FROM OPSUB"): print(row) con.close() ctiOS = CtiOS() ctiOS.nacti_identifikatory_z_databaze(output_path) print(ctiOS.ctios) print(ctiOS.ctios.parameters) print(ctiOS.ctios.xml_attrs) print(ctiOS.ctios.credentials) print(ctiOS.ctios.services_dir) print(ctiOS.ctios.service_dir) print(ctiOS.ctios.service_name) print(ctiOS.ctios.service_group) print(ctiOS.ctios.config_path) print(ctiOS.ctios.template_path) print(ctiOS.ctios.service_headers) vysledek = ctiOS.zpracuj_identifikatory() print(vysledek) ctiOS.uloz_vystup(vysledek, output_path , OutputFormat.GdalDb) con = sqlite3.connect(output_path) cur = con.cursor() for row in cur.execute("SELECT * FROM OPSUB"): print(row) con.close() ctiOS.uloz_vystup(vysledek, output_path , OutputFormat.GdalDb) vystupni_soubor = os.path.join('../', module_path, 'data', 'output', 'ctios.csv') ctiOS.uloz_vystup(vysledek, vystupni_soubor , OutputFormat.Csv) vystupni_soubor = os.path.join('../', module_path, 'data', 'output', 'ctios.json') ctiOS.uloz_vystup(vysledek, vystupni_soubor, OutputFormat.Json) ctiOS = CtiOS() output_path = copyfile(db_path, os.path.join('../', module_path, 'data', 'output', 'ctios_template2.db')) sql = "SELECT * FROM OPSUB order by ID LIMIT 10" ctiOS.nacti_identifikatory_z_databaze(output_path, sql) print(ctiOS.ctios) print(ctiOS.ctios.parameters) print(ctiOS.ctios.xml_attrs) print(ctiOS.ctios.credentials) print(ctiOS.ctios.services_dir) print(ctiOS.ctios.service_dir) print(ctiOS.ctios.service_name) print(ctiOS.ctios.service_group) print(ctiOS.ctios.config_path) print(ctiOS.ctios.template_path) print(ctiOS.ctios.service_headers) vysledek = ctiOS.zpracuj_identifikatory() print(vysledek) ctiOS.uloz_vystup(vysledek, output_path , OutputFormat.GdalDb)
0.084731
0.590455
``` !wget https://raw.githubusercontent.com/UniversalDependencies/UD_English-EWT/master/en_ewt-ud-dev.conllu !wget https://raw.githubusercontent.com/UniversalDependencies/UD_English-EWT/master/en_ewt-ud-train.conllu !wget https://raw.githubusercontent.com/UniversalDependencies/UD_English-EWT/master/en_ewt-ud-test.conllu !pip install malaya -U import malaya import re from malaya.texts._text_functions import split_into_sentences from malaya.texts import _regex import numpy as np import itertools import tensorflow as tf from tensorflow.keras.preprocessing.sequence import pad_sequences tokenizer = malaya.preprocessing._tokenizer splitter = split_into_sentences def is_number_regex(s): if re.match("^\d+?\.\d+?$", s) is None: return s.isdigit() return True def preprocessing(w): if is_number_regex(w): return '<NUM>' elif re.match(_regex._money, w): return '<MONEY>' elif re.match(_regex._date, w): return '<DATE>' elif re.match(_regex._expressions['email'], w): return '<EMAIL>' elif re.match(_regex._expressions['url'], w): return '<URL>' else: w = ''.join(''.join(s)[:2] for _, s in itertools.groupby(w)) return w word2idx = {'PAD': 0,'UNK':1, '_ROOT': 2} tag2idx = {'PAD': 0, '_<ROOT>': 1} char2idx = {'PAD': 0,'UNK':1, '_ROOT': 2} word_idx = 3 tag_idx = 2 char_idx = 3 special_tokens = ['<NUM>', '<MONEY>', '<DATE>', '<URL>', '<EMAIL>'] for t in special_tokens: word2idx[t] = word_idx word_idx += 1 char2idx[t] = char_idx char_idx += 1 word2idx, char2idx PAD = "_PAD" PAD_POS = "_PAD_POS" PAD_TYPE = "_<PAD>" PAD_CHAR = "_PAD_CHAR" ROOT = "_ROOT" ROOT_POS = "_ROOT_POS" ROOT_TYPE = "_<ROOT>" ROOT_CHAR = "_ROOT_CHAR" END = "_END" END_POS = "_END_POS" END_TYPE = "_<END>" END_CHAR = "_END_CHAR" def process_corpus(corpus, until = None): global word2idx, tag2idx, char2idx, word_idx, tag_idx, char_idx sentences, words, depends, labels, pos, chars = [], [], [], [], [], [] temp_sentence, temp_word, temp_depend, temp_label, temp_pos = [], [], [], [], [] first_time = True for sentence in corpus: try: if len(sentence): if sentence[0] == '#': continue if first_time: print(sentence) first_time = False sentence = sentence.split('\t') for c in sentence[1]: if c not in char2idx: char2idx[c] = char_idx char_idx += 1 if sentence[7] not in tag2idx: tag2idx[sentence[7]] = tag_idx tag_idx += 1 sentence[1] = preprocessing(sentence[1]) if sentence[1] not in word2idx: word2idx[sentence[1]] = word_idx word_idx += 1 temp_word.append(word2idx[sentence[1]]) temp_depend.append(int(sentence[6])) temp_label.append(tag2idx[sentence[7]]) temp_sentence.append(sentence[1]) temp_pos.append(sentence[3]) else: if len(temp_sentence) < 2 or len(temp_word) != len(temp_label): temp_word = [] temp_depend = [] temp_label = [] temp_sentence = [] temp_pos = [] continue words.append(temp_word) depends.append(temp_depend) labels.append(temp_label) sentences.append( temp_sentence) pos.append(temp_pos) char_ = [[char2idx['_ROOT']]] for w in temp_sentence: if w in char2idx: char_.append([char2idx[w]]) else: char_.append([char2idx[c] for c in w]) chars.append(char_) temp_word = [] temp_depend = [] temp_label = [] temp_sentence = [] temp_pos = [] except Exception as e: print(e, sentence) return sentences[:-1], words[:-1], depends[:-1], labels[:-1], pos[:-1], chars[:-1] with open('en_ewt-ud-dev.conllu') as fopen: dev = fopen.read().split('\n') sentences_dev, words_dev, depends_dev, labels_dev, _, _ = process_corpus(dev) with open('en_ewt-ud-test.conllu') as fopen: test = fopen.read().split('\n') sentences_test, words_test, depends_test, labels_test, _, _ = 
process_corpus(test) sentences_test.extend(sentences_dev) words_test.extend(words_dev) depends_test.extend(depends_dev) labels_test.extend(labels_dev) with open('en_ewt-ud-train.conllu') as fopen: train = fopen.read().split('\n') sentences_train, words_train, depends_train, labels_train, _, _ = process_corpus(train) len(sentences_train), len(sentences_test) idx2word = {v:k for k, v in word2idx.items()} idx2tag = {v:k for k, v in tag2idx.items()} len(idx2word) def generate_char_seq(batch, UNK = 2): maxlen_c = max([len(k) for k in batch]) x = [[len(i) for i in k] for k in batch] maxlen = max([j for i in x for j in i]) temp = np.zeros((len(batch),maxlen_c,maxlen),dtype=np.int32) for i in range(len(batch)): for k in range(len(batch[i])): for no, c in enumerate(batch[i][k]): temp[i,k,-1-no] = char2idx.get(c, UNK) return temp generate_char_seq(sentences_train[:5]).shape pad_sequences(words_train[:5],padding='post').shape train_X = words_train train_Y = labels_train train_depends = depends_train train_char = sentences_train test_X = words_test test_Y = labels_test test_depends = depends_test test_char = sentences_test class BiAAttention: def __init__(self, input_size_encoder, input_size_decoder, num_labels): self.input_size_encoder = input_size_encoder self.input_size_decoder = input_size_decoder self.num_labels = num_labels self.W_d = tf.get_variable("W_d", shape=[self.num_labels, self.input_size_decoder], initializer=tf.contrib.layers.xavier_initializer()) self.W_e = tf.get_variable("W_e", shape=[self.num_labels, self.input_size_encoder], initializer=tf.contrib.layers.xavier_initializer()) self.U = tf.get_variable("U", shape=[self.num_labels, self.input_size_decoder, self.input_size_encoder], initializer=tf.contrib.layers.xavier_initializer()) def forward(self, input_d, input_e, mask_d=None, mask_e=None): batch = tf.shape(input_d)[0] length_decoder = tf.shape(input_d)[1] length_encoder = tf.shape(input_e)[1] out_d = tf.expand_dims(tf.matmul(self.W_d, tf.transpose(input_d, [0, 2, 1])), 3) out_e = tf.expand_dims(tf.matmul(self.W_e, tf.transpose(input_e, [0, 2, 1])), 2) output = tf.matmul(tf.expand_dims(input_d, 1), self.U) output = tf.matmul(output, tf.transpose(tf.expand_dims(input_e, 1), [0, 1, 3, 2])) output = output + out_d + out_e if mask_d is not None: d = tf.expand_dims(tf.expand_dims(mask_d, 1), 3) e = tf.expand_dims(tf.expand_dims(mask_e, 1), 2) output = output * d * e return output class Model: def __init__( self, dim_word, dim_char, dropout, learning_rate, hidden_size_char, hidden_size_word, num_layers ): def cells(size, reuse = False): return tf.contrib.rnn.DropoutWrapper( tf.nn.rnn_cell.LSTMCell( size, initializer = tf.orthogonal_initializer(), reuse = reuse, ), output_keep_prob = dropout, ) def bahdanau(embedded, size): attention_mechanism = tf.contrib.seq2seq.BahdanauAttention( num_units = hidden_size_word, memory = embedded ) return tf.contrib.seq2seq.AttentionWrapper( cell = cells(hidden_size_word), attention_mechanism = attention_mechanism, attention_layer_size = hidden_size_word, ) self.word_ids = tf.placeholder(tf.int32, shape = [None, None]) self.char_ids = tf.placeholder(tf.int32, shape = [None, None, None]) self.labels = tf.placeholder(tf.int32, shape = [None, None]) self.depends = tf.placeholder(tf.int32, shape = [None, None]) self.maxlen = tf.shape(self.word_ids)[1] self.lengths = tf.count_nonzero(self.word_ids, 1) self.mask = tf.math.not_equal(self.word_ids, 0) float_mask = tf.cast(self.mask, tf.float32) self.arc_h = tf.layers.Dense(hidden_size_word) self.arc_c = 
tf.layers.Dense(hidden_size_word) self.attention = BiAAttention(hidden_size_word, hidden_size_word, 1) self.word_embeddings = tf.Variable( tf.truncated_normal( [len(word2idx), dim_word], stddev = 1.0 / np.sqrt(dim_word) ) ) self.char_embeddings = tf.Variable( tf.truncated_normal( [len(char2idx), dim_char], stddev = 1.0 / np.sqrt(dim_char) ) ) word_embedded = tf.nn.embedding_lookup( self.word_embeddings, self.word_ids ) char_embedded = tf.nn.embedding_lookup( self.char_embeddings, self.char_ids ) s = tf.shape(char_embedded) char_embedded = tf.reshape( char_embedded, shape = [s[0] * s[1], s[-2], dim_char] ) for n in range(num_layers): (out_fw, out_bw), ( state_fw, state_bw, ) = tf.nn.bidirectional_dynamic_rnn( cell_fw = cells(hidden_size_char), cell_bw = cells(hidden_size_char), inputs = char_embedded, dtype = tf.float32, scope = 'bidirectional_rnn_char_%d' % (n), ) char_embedded = tf.concat((out_fw, out_bw), 2) output = tf.reshape( char_embedded[:, -1], shape = [s[0], s[1], 2 * hidden_size_char] ) word_embedded = tf.concat([word_embedded, output], axis = -1) for n in range(num_layers): (out_fw, out_bw), ( state_fw, state_bw, ) = tf.nn.bidirectional_dynamic_rnn( cell_fw = bahdanau(word_embedded, hidden_size_word), cell_bw = bahdanau(word_embedded, hidden_size_word), inputs = word_embedded, dtype = tf.float32, scope = 'bidirectional_rnn_word_%d' % (n), ) word_embedded = tf.concat((out_fw, out_bw), 2) logits = tf.layers.dense(word_embedded, len(idx2tag)) log_likelihood, transition_params = tf.contrib.crf.crf_log_likelihood( logits, self.labels, self.lengths ) arc_h = tf.nn.elu(self.arc_h(word_embedded)) arc_c = tf.nn.elu(self.arc_c(word_embedded)) out_arc = tf.squeeze(self.attention.forward(arc_h, arc_h, mask_d=float_mask, mask_e=float_mask), axis = 1) batch = tf.shape(out_arc)[0] batch_index = tf.range(0, batch) max_len = tf.shape(out_arc)[1] sec_max_len = tf.shape(out_arc)[2] minus_inf = -1e8 minus_mask = (1 - float_mask) * minus_inf out_arc = out_arc + tf.expand_dims(minus_mask, axis = 2) + tf.expand_dims(minus_mask, axis = 1) loss_arc = tf.nn.log_softmax(out_arc, dim=1) loss_arc = loss_arc * tf.expand_dims(float_mask, axis = 2) * tf.expand_dims(float_mask, axis = 1) num = tf.reduce_sum(float_mask) - tf.cast(batch, tf.float32) child_index = tf.tile(tf.expand_dims(tf.range(0, max_len), 1), [1, batch]) t = tf.transpose(self.depends) broadcasted = tf.broadcast_to(batch_index, tf.shape(t)) concatenated = tf.transpose(tf.concat([tf.expand_dims(broadcasted, axis = 0), tf.expand_dims(t, axis = 0), tf.expand_dims(child_index, axis = 0)], axis = 0)) loss_arc = tf.gather_nd(loss_arc, concatenated) loss_arc = tf.transpose(loss_arc, [1, 0])[1:] loss_arc = tf.reduce_sum(-loss_arc) / num self.cost = tf.reduce_mean(-log_likelihood) + loss_arc self.optimizer = tf.train.AdamOptimizer( learning_rate = learning_rate ).minimize(self.cost) mask = tf.sequence_mask(self.lengths, maxlen = self.maxlen) self.tags_seq, _ = tf.contrib.crf.crf_decode( logits, transition_params, self.lengths ) out_arc = out_arc + tf.linalg.diag(tf.fill([max_len], -np.inf)) minus_mask = tf.expand_dims(tf.cast(1.0 - float_mask, tf.bool), axis = 2) minus_mask = tf.tile(minus_mask, [1, 1, sec_max_len]) out_arc = tf.where(minus_mask, tf.fill(tf.shape(out_arc), -np.inf), out_arc) self.heads = tf.argmax(out_arc, axis = 1) self.prediction = tf.boolean_mask(self.tags_seq, mask) mask_label = tf.boolean_mask(self.labels, mask) correct_pred = tf.equal(self.prediction, mask_label) correct_index = tf.cast(correct_pred, tf.float32) self.accuracy = 
tf.reduce_mean(tf.cast(correct_pred, tf.float32)) self.prediction = tf.cast(tf.boolean_mask(self.heads, mask), tf.int32) mask_label = tf.boolean_mask(self.depends, mask) correct_pred = tf.equal(self.prediction, mask_label) correct_index = tf.cast(correct_pred, tf.float32) self.accuracy_depends = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) tf.reset_default_graph() sess = tf.InteractiveSession() dim_word = 128 dim_char = 256 dropout = 1.0 learning_rate = 1e-3 hidden_size_char = 128 hidden_size_word = 128 num_layers = 2 model = Model(dim_word,dim_char,dropout,learning_rate,hidden_size_char,hidden_size_word,num_layers) sess.run(tf.global_variables_initializer()) batch_x = train_X[:5] batch_x = pad_sequences(batch_x,padding='post') batch_char = train_char[:5] batch_char = generate_char_seq(batch_char) batch_y = train_Y[:5] batch_y = pad_sequences(batch_y,padding='post') batch_depends = train_depends[:5] batch_depends = pad_sequences(batch_depends,padding='post') sess.run([model.accuracy, model.accuracy_depends, model.cost], feed_dict = {model.word_ids: batch_x, model.char_ids: batch_char, model.labels: batch_y, model.depends: batch_depends}) from tqdm import tqdm batch_size = 32 epoch = 15 for e in range(epoch): train_acc, train_loss = [], [] test_acc, test_loss = [], [] train_acc_depends, test_acc_depends = [], [] pbar = tqdm( range(0, len(train_X), batch_size), desc = 'train minibatch loop' ) for i in pbar: index = min(i + batch_size, len(train_X)) batch_x = train_X[i: index] batch_x = pad_sequences(batch_x,padding='post') batch_char = train_char[i: index] batch_char = generate_char_seq(batch_char) batch_y = train_Y[i: index] batch_y = pad_sequences(batch_y,padding='post') batch_depends = train_depends[i: index] batch_depends = pad_sequences(batch_depends,padding='post') acc_depends, acc, cost, _ = sess.run( [model.accuracy_depends, model.accuracy, model.cost, model.optimizer], feed_dict = { model.word_ids: batch_x, model.char_ids: batch_char, model.labels: batch_y, model.depends: batch_depends }, ) train_loss.append(cost) train_acc.append(acc) train_acc_depends.append(acc_depends) pbar.set_postfix(cost = cost, accuracy = acc, accuracy_depends = acc_depends) pbar = tqdm( range(0, len(test_X), batch_size), desc = 'test minibatch loop' ) for i in pbar: index = min(i + batch_size, len(test_X)) batch_x = test_X[i: index] batch_x = pad_sequences(batch_x,padding='post') batch_char = test_char[i: index] batch_char = generate_char_seq(batch_char) batch_y = test_Y[i: index] batch_y = pad_sequences(batch_y,padding='post') batch_depends = test_depends[i: index] batch_depends = pad_sequences(batch_depends,padding='post') acc_depends, acc, cost = sess.run( [model.accuracy_depends, model.accuracy, model.cost], feed_dict = { model.word_ids: batch_x, model.char_ids: batch_char, model.labels: batch_y, model.depends: batch_depends }, ) test_loss.append(cost) test_acc.append(acc) test_acc_depends.append(acc_depends) pbar.set_postfix(cost = cost, accuracy = acc, accuracy_depends = acc_depends) print( 'epoch: %d, training loss: %f, training acc: %f, training depends: %f, valid loss: %f, valid acc: %f, valid depends: %f\n' % (e, np.mean(train_loss), np.mean(train_acc), np.mean(train_acc_depends), np.mean(test_loss), np.mean(test_acc), np.mean(test_acc_depends) )) def evaluate(heads_pred, types_pred, heads, types, lengths, symbolic_root=False, symbolic_end=False): batch_size, _ = words.shape ucorr = 0. lcorr = 0. total = 0. ucomplete_match = 0. lcomplete_match = 0. corr_root = 0. total_root = 0. 
start = 1 if symbolic_root else 0 end = 1 if symbolic_end else 0 for i in range(batch_size): ucm = 1. lcm = 1. for j in range(start, lengths[i] - end): total += 1 if heads[i, j] == heads_pred[i, j]: ucorr += 1 if types[i, j] == types_pred[i, j]: lcorr += 1 else: lcm = 0 else: ucm = 0 lcm = 0 if heads[i, j] == 0: total_root += 1 corr_root += 1 if heads_pred[i, j] == 0 else 0 ucomplete_match += ucm lcomplete_match += lcm return (ucorr, lcorr, total, ucomplete_match, lcomplete_match), \ (corr_root, total_root), batch_size tags_seq, heads = sess.run( [model.tags_seq, model.heads], feed_dict = { model.word_ids: batch_x, model.char_ids: batch_char }, ) tags_seq[0], heads[0], batch_depends[0] def evaluate(heads_pred, types_pred, heads, types, lengths, symbolic_root=False, symbolic_end=False): batch_size, _ = heads_pred.shape ucorr = 0. lcorr = 0. total = 0. ucomplete_match = 0. lcomplete_match = 0. corr_root = 0. total_root = 0. start = 1 if symbolic_root else 0 end = 1 if symbolic_end else 0 for i in range(batch_size): ucm = 1. lcm = 1. for j in range(start, lengths[i] - end): total += 1 if heads[i, j] == heads_pred[i, j]: ucorr += 1 if types[i, j] == types_pred[i, j]: lcorr += 1 else: lcm = 0 else: ucm = 0 lcm = 0 if heads[i, j] == 0: total_root += 1 corr_root += 1 if heads_pred[i, j] == 0 else 0 ucomplete_match += ucm lcomplete_match += lcm return ucorr / total, lcorr / total, corr_root / total_root arc_accuracy, type_accuracy, root_accuracy = evaluate(heads, tags_seq, batch_depends, batch_y, np.count_nonzero(batch_x, axis = 1)) arc_accuracy, type_accuracy, root_accuracy arcs, types, roots = [], [], [] pbar = tqdm( range(0, len(test_X), batch_size), desc = 'test minibatch loop' ) for i in pbar: index = min(i + batch_size, len(test_X)) batch_x = test_X[i: index] batch_x = pad_sequences(batch_x,padding='post') batch_char = test_char[i: index] batch_char = generate_char_seq(batch_char) batch_y = test_Y[i: index] batch_y = pad_sequences(batch_y,padding='post') batch_depends = test_depends[i: index] batch_depends = pad_sequences(batch_depends,padding='post') tags_seq, heads = sess.run( [model.tags_seq, model.heads], feed_dict = { model.word_ids: batch_x, model.char_ids: batch_char }, ) arc_accuracy, type_accuracy, root_accuracy = evaluate(heads, tags_seq, batch_depends, batch_y, np.count_nonzero(batch_x, axis = 1)) pbar.set_postfix(arc_accuracy = arc_accuracy, type_accuracy = type_accuracy, root_accuracy = root_accuracy) arcs.append(arc_accuracy) types.append(type_accuracy) roots.append(root_accuracy) print('arc accuracy:', np.mean(arcs)) print('types accuracy:', np.mean(types)) print('root accuracy:', np.mean(roots)) ```
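The second `evaluate` definition above (the first one mistakenly references an undefined `words` variable; the second corrects this by using `heads_pred.shape`) reports three scores per batch: unlabeled attachment score (predicted head equals gold head), labeled attachment score (head and dependency type both correct) and root accuracy. The following standalone NumPy sketch is not part of the notebook; it uses made-up toy values purely to illustrate the same bookkeeping in vectorized form.

```
import numpy as np

# Toy batch of one sentence with four tokens (illustrative values only).
heads_pred = np.array([[2, 0, 2, 3]])   # predicted head index per token (0 = root)
types_pred = np.array([[3, 1, 5, 4]])   # predicted dependency label ids
heads      = np.array([[2, 0, 2, 2]])   # gold heads
types      = np.array([[3, 1, 4, 5]])   # gold labels
lengths    = np.array([4])              # true sentence lengths (no padding here)

head_match  = heads_pred == heads
label_match = head_match & (types_pred == types)

uas      = head_match[0, :lengths[0]].mean()    # 0.75: 3 of 4 heads correct
las      = label_match[0, :lengths[0]].mean()   # 0.50: 2 of 4 heads + labels correct
is_root  = heads[0, :lengths[0]] == 0
root_acc = (heads_pred[0, :lengths[0]][is_root] == 0).mean()   # 1.0

print(uas, las, root_acc)
```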
# Hierarchical Partial Pooling

Suppose you are tasked with estimating baseball batting skills for several players. One such performance metric is batting average. Since players play a different number of games and bat in different positions in the order, each player has a different number of at-bats. However, you want to estimate the skill of all players, including those with a relatively small number of batting opportunities.

So, suppose a player came to bat only 4 times, and never hit the ball. Are they a bad player?

As a disclaimer, the author of this notebook assumes little to non-existent knowledge about baseball and its rules. The number of times at bat in his entire life is around "4".

## Data

We will use the [baseball data for 18 players from Efron and Morris](http://www.swarthmore.edu/NatSci/peverso1/Sports%20Data/JamesSteinData/Efron-Morris%20Baseball/EfronMorrisBB.txt) (1975).

## Approach

We will use PyMC3 to estimate the batting average for each player. Having estimated the averages across all players in the dataset, we can use this information to inform an estimate of an additional player, for which there is little data (*i.e.* 4 at-bats).

In the absence of a Bayesian hierarchical model, there are two approaches for this problem:

1. independently compute the batting average for each player (no pooling)
2. compute an overall average, under the assumption that everyone has the same underlying average (complete pooling)

Of course, neither approach is realistic. Clearly, all players aren't equally skilled hitters, so the global average is implausible. At the same time, professional baseball players are similar in many ways, so their averages aren't entirely independent either. It may be possible to cluster groups of "similar" players and estimate group averages, but using a hierarchical modeling approach is a natural way of sharing information that does not involve identifying *ad hoc* clusters.

The idea of hierarchical partial pooling is to model the global performance, and use that estimate to parameterize a population of players that accounts for differences among the players' performances. This tradeoff between global and individual performance will be automatically tuned by the model. Also, uncertainty due to the different number of at-bats for each player (*i.e.* the amount of information each contributes) will be automatically accounted for, by shrinking those estimates closer to the global mean.

For a far more in-depth discussion, please refer to the Stan [tutorial](http://mc-stan.org/documentation/case-studies/pool-binary-trials.html) on the subject. The model and parameter values were taken from that example.

```
%matplotlib inline
import pymc3 as pm
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import theano.tensor as tt
```

Now we can load the dataset using pandas:

```
data = pd.read_table(pm.get_data('efron-morris-75-data.tsv'), sep="\t")
at_bats, hits = data[['At-Bats', 'Hits']].values.T
```

Now let's develop a generative model for these data. We will assume that there exists a hidden factor (`phi`) related to the expected performance for all players (not limited to our 18). Since the population mean is an unknown value between 0 and 1, it must be bounded from below and above. Also, we assume that nothing is known about the global average, so a natural choice for a prior distribution is the uniform distribution.

Next, we introduce a hyperparameter `kappa` to account for the variance in the population batting averages, for which we will use a bounded Pareto distribution. This will ensure that the estimated value falls within reasonable bounds. These hyperparameters will be, in turn, used to parameterize a beta distribution, which is ideal for modeling quantities on the unit interval. The beta distribution is typically parameterized via a scale and a shape parameter, but it may also be parametrized in terms of its mean $\mu \in [0,1]$ and sample size (a proxy for variance) $\nu = \alpha + \beta$ ($\nu > 0$).

The final step is to specify a sampling distribution for the data (hit or miss) for every player, using a binomial distribution. This is where the data are brought to bear on the model.

We could use `pm.Pareto('kappa', m=1.5)` to define our prior on `kappa`, but the Pareto distribution has very long tails. Exploring these properly is difficult for the sampler, so we use an equivalent but faster parametrization based on the exponential distribution, using the fact that the log of a Pareto-distributed random variable follows an exponential distribution.

```
N = len(hits)

with pm.Model() as baseball_model:
    phi = pm.Uniform('phi', lower=0.0, upper=1.0)
    kappa_log = pm.Exponential('kappa_log', lam=1.5)
    kappa = pm.Deterministic('kappa', tt.exp(kappa_log))
    thetas = pm.Beta('thetas', alpha=phi*kappa, beta=(1.0-phi)*kappa, shape=N)
    y = pm.Binomial('y', n=at_bats, p=thetas, observed=hits)
```

Recall that our original question was with regard to the true batting average for a player with only 4 at-bats and no hits. We can add this as an additional variable in the model:

```
with baseball_model:
    theta_new = pm.Beta('theta_new', alpha=phi*kappa, beta=(1.0-phi)*kappa)
    y_new = pm.Binomial('y_new', n=4, p=theta_new, observed=0)
```

We can now fit the model using MCMC:

```
with baseball_model:
    trace = pm.sample(2000, tune=1000, chains=2, target_accept=0.95)
```

Now we can plot the posterior distributions of the parameters. First, the population hyperparameters:

```
pm.traceplot(trace, varnames=['phi', 'kappa']);
```

Hence, the population mean batting average is in the 0.22-0.31 range, with an expected value of around 0.26.

Next, the estimates for all 18 players in the dataset:

```
player_names = data.apply(lambda x: x.FirstName + ' ' + x.LastName, axis=1)
pm.forestplot(trace, varnames=['thetas'], ylabels=player_names)
```

Finally, let's get the estimate for our 0-for-4 player:

```
pm.traceplot(trace, varnames=['theta_new']);
```

Notice that, despite the fact that our additional player did not get any hits, the estimate of his average is not zero; zero is not even a highly probable value. This is because we are assuming that the player is drawn from a *population* of players with a distribution specified by our estimated hyperparameters. However, the estimated mean for this player is toward the low end of the means for the players in our dataset, indicating that the 4 at-bats contributed some information toward the estimate.
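The no-pooling and complete-pooling baselines mentioned in the Approach section can be computed directly from the raw counts. The short sketch below is not part of the original notebook; it assumes the `at_bats`, `hits` and `trace` objects created above (run after sampling) and simply makes the contrast with the partially pooled estimates explicit.

```
# No pooling: each player's average comes only from his own at-bats.
no_pooling = hits / at_bats

# Complete pooling: a single global average shared by everyone.
complete_pooling = hits.sum() / at_bats.sum()

# Partial pooling: the posterior means of `thetas` lie between the two extremes,
# shrunk toward the global mean (more strongly for players with few at-bats).
partial_pooling = trace['thetas'].mean(axis=0)

print(no_pooling[:3])
print(complete_pooling)
print(partial_pooling[:3])
```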
# ITK in Python

### Learning Objectives

* Learn how to write simple Python code with ITK
* Become familiar with the functional and object-oriented interfaces to ITK in Python
* Understand how to bridge ITK with machine learning libraries via [NumPy](https://numpy.org/)

# Working with NumPy and friends

* ITK is great at reading and processing images
* Some algorithms are not available in ITK
* NumPy is great at processing arrays in simple ways
* NumPy arrays can be read by many other Python packages
  * [matplotlib](https://matplotlib.org)
  * [scikit-learn](https://scikit-learn.org)
  * [PyTorch](https://pytorch.org)
  * [TensorFlow](https://www.tensorflow.org)
  * [scikit-image](https://scikit-image.org)
  * [OpenCV](https://opencv.org)

```
import itk
from itkwidgets import view
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

image = itk.imread("data/KitwareITK.jpg")
view(image, ui_collapsed=True)

array = itk.array_from_image(image)
print(array[1,1])
```

Let's go the other way around: NumPy array to an ITK image. First, we create an array with some values.

```
def make_gaussian(size, fwhm=3, center=None):
    """ Make a square gaussian kernel.

    size is the length of a side of the square
    fwhm is full-width-half-maximum, which can be considered an effective radius.
    """
    x = np.arange(0, size, 1, np.float32)
    y = x[:,np.newaxis]

    if center is None:
        x0 = y0 = size // 2
    else:
        x0 = center[0]
        y0 = center[1]

    return np.exp(-4*np.log(2) * ((x-x0)**2 + (y-y0)**2) / fwhm**2)

array = make_gaussian(11)
```

Let's look at the array. We use `matplotlib` or `itkwidgets.view` to do this.

```
plt.gray()
plt.imshow(array)
plt.axis('off')

image = itk.image_from_array(array)
view(image, cmap='Grayscale', interpolation=False)
```

## Exercises

### Exercise 1: Visualize an image

* Read an image with ITK
* Apply a filter
* Show both the original and the filtered image with matplotlib

```
image = itk.imread('data/CBCT-TextureInput.png', itk.F)
# %load solutions/2_ITK_in_Python_answers_Exercise1.py
```

## Views vs Copies

So far we have used

- `itk.array_from_image()`
- `itk.image_from_array()`

Also available:

- `itk.array_view_from_image()`
- `itk.image_view_from_array()`

You can see the keyword **view** in both of these function names. How do they differ in their behavior?

Let's compare the result of `itk.array_view_from_image()` and `itk.array_from_image()`.

```
gaussian = itk.gaussian_image_source(size=11, sigma=3, scale=100, mean=[5,5])

arr_view_gaussian = itk.array_view_from_image(gaussian)
arr_gaussian = itk.array_from_image(gaussian)

gaussian.SetPixel([5,5], 0)

plt.subplot(1, 2, 1)
plt.imshow(arr_view_gaussian)
plt.title("View")

plt.subplot(1, 2, 2)
plt.imshow(arr_gaussian)
plt.title("Copy")
```

### Exercise 2: ITK image to NumPy array

* Read an image with ITK
* Convert the image to a NumPy array as a view
  * Modify a pixel in the image
  * Has the array been modified?
* Convert the image to a NumPy array as a copy
  * Modify a pixel in the image
  * Has the array been modified?

```
# %load solutions/2_ITK_and_NumPy_answers_Exercise2.py
```

## Templated Types

* Is my ITK type templated? (hint: usually, yes)

```
help(itk.Image)
```

* If so, you need to specify the data type
* This is similar to `dtype` in NumPy

```
import numpy
numpy.array([0,0], dtype=numpy.float)
```

* Define and create a simple object

```
# A pixel Index is templated over the image dimension
IndexType = itk.Index[3]
index = IndexType()
print(index)
```

* Define and use a smart pointer object

```
ImageType = itk.Image[itk.ctype('float'), 2]
my_image = ImageType.New()
print(my_image)
```

* Print the list of available types

```
itk.Image.GetTypes()
```

## Non-templated Types

* `MetaDataDictionary`

```
d = itk.MetaDataDictionary()
d['new_key'] = 5
print("'new_key' value: %d" % d['new_key'])
```

## Python Sequences for ITK Objects

* Some ITK objects expect inputs of a certain ITK type. However, it is often more convenient to directly provide Python sequences, i.e. a `list` or `tuple`.

```
image = ImageType.New()
help(image.SetOrigin)
```

* Use a Python `list` where an `itk.Index`, `itk.Point`, or `itk.Vector` is requested.

```
image.SetOrigin([2, 10])

print("Image origin: %s" % str(image.GetOrigin()))
print("Image origin: %s" % str(list(image.GetOrigin())))
print("Image origin: %s" % str(tuple(image.GetOrigin())))
```

# scikit-learn

* *scikit-learn* is a machine learning package in Python.
* scikit-learn is used to illustrate solving a problem using ITK and *NumPy* arrays.

```
import sklearn
```

First, we load 10 2D images of circles with different radii and center positions, to which noise has been added, along with their corresponding ground-truth segmentations.

```
l_label = []
l_image = []
for i in range(0,10):
    image_name = 'data/sklearn/im%d.nrrd' % i
    image = itk.imread(image_name, itk.F)
    array = itk.array_from_image(image)
    l_image.append(array)
    label_name = 'data/sklearn/im%d_label.nrrd' % i
    image = itk.imread(label_name, itk.UC)
    array = itk.array_from_image(image)
    l_label.append(array)

size = itk.size(image)
print(size)

plt.subplot(1, 2, 1)
plt.imshow(l_image[0])
plt.title("Image")

plt.subplot(1, 2, 2)
plt.imshow(l_label[0])
plt.title("Segmentation")
```

The goal is to find the segmentation based on the input image. We create arrays of data:

* X - the input samples
* Y - the target values

```
X0 = l_image[0].flatten()
X = X0
Y = l_label[0].flatten()
for i in range(1,10):
    X = np.concatenate((X, l_image[i].flatten()))
    Y = np.concatenate((Y, l_label[i].flatten()))
```

* We use a supervised learning method based on Bayes’ theorem.
* The only information provided to the algorithm is the image intensity value.

```
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
clf.fit(X.reshape(-1,1), Y)
result = clf.predict(X0.reshape(-1,1)).reshape(size[0],size[1])
plt.imshow(result)
```

To improve our segmentation, we filter the input image with a median image filter and add this information as a second sample vector. ITK is often used to read, reformat, denoise, and augment medical imaging data to improve the effectiveness of medical imaging models.

```
l_median = []
for i in range(0,10):
    image_name = 'data/sklearn/im%d.nrrd' % i
    image = itk.imread(image_name, itk.F)
    median = itk.median_image_filter(image, radius=3)
    array = itk.array_from_image(median)
    l_median.append(array)

plt.gray()
plt.imshow(l_median[0])
plt.title("Median Filtered Image")

M0 = l_median[0].flatten()
M = M0
X0 = np.concatenate((X0.reshape(-1,1),M0.reshape(-1,1)), axis=1)
for i in range(1,10):
    M = np.concatenate((M, l_median[i].flatten()))
X = np.concatenate((X.reshape(-1,1),M.reshape(-1,1)), axis=1)

clf.fit(X, Y)
result = clf.predict(X0).reshape(50,50)
plt.imshow(result)
```

## Typical processing

* Resampling for consistent pixel grid size and spacing
* Image preprocessing
  * Bias field correction, e.g. `n4_bias_field_correction_image_filter`
  * Noise reduction, e.g. `smoothing_recursive_gaussian_image_filter`
* Feature computation, e.g. texture, wavelet, or edge detector
* Converting ITK data to NumPy and organizing the data as needed
* Train classifier
* Use classifier on new data
* Convert classifier result to ITK data
* Apply post-processing filters
  * Fill holes, e.g. `binary_fillhole_image_filter`
  * Smoothing, e.g. `median_image_filter`

## Two ways of using ITK in Python

* Functional programming API
  * *Pythonic*
  * Eager execution
  * More concise
  * A few functions and filters are not available
* Object-oriented way
  * Set up processing pipelines
  * Delayed execution
  * Full access to ITK
  * Conserve memory
    * Optimally re-use and release pixel buffer memory
    * Stream process pipelines in chunks

## Let's start with the Pythonic way

```
image = itk.imread("data/CBCT-TextureInput.png", itk.ctype('float'))
filtered_image = itk.median_image_filter(image, radius = 3)
view(filtered_image, ui_collapsed=True)
```

### Pythonic exercises

* In the example above, change the radius of the filter and observe the result.
* Replace the filter with `mean_image_filter`
* Replace the filter with `otsu_threshold_image_filter`
* Visualize the results

Uncomment and change the radius of the filter and observe the result.

```
# median_filtered_image = itk.median_image_filter(image, radius = XX)
# view(median_filtered_image)
# %load solutions/2_Using_ITK_in_Python_real_world_filters_median.py
```

Uncomment and replace the filter with `mean_image_filter`

```
# mean_filtered_image = itk.filter(image, radius = 5)
# view(mean_filtered_image)
# %load solutions/2_Using_ITK_in_Python_real_world_filters_mean.py
```

Uncomment and replace the filter with `otsu_threshold_image_filter`

```
# otsu_filtered_image = itk.filter(image)
# view(otsu_filtered_image)
# %load solutions/2_Using_ITK_in_Python_real_world_filters_otsu.py
```

## Object-oriented

* Two types of C++ ITK objects
  * Smart pointers (hint: most ITK objects are smart pointers)
  * "Simple" objects
* This translates into two ways of creating objects in Python
  * `obj = itk.SmartPointerObjectType.New()` (use auto-completion to see if the `New()` method exists)
  * `obj = itk.SimpleObjectType()`

## Examples of objects

* With `New()` method:
  * `itk.Image`
  * `itk.MedianImageFilter`
* Without `New()` method:
  * `itk.Index`
  * `itk.RGBPixel`

## Filters with object-oriented syntax

```
PixelType = itk.ctype('float')
image = itk.imread("data/CBCT-TextureInput.png", PixelType)

ImageType = itk.Image[PixelType, 2]
median_filter = itk.MedianImageFilter[ImageType, ImageType].New()
median_filter.SetInput(image)
median_filter.SetRadius(4)
median_filter.Update()
view(median_filter.GetOutput(), ui_collapsed=True)
```

### Object-oriented exercises

* In the example above, change the radius of the filter and observe the result.
* Replace the filter with `MeanImageFilter`
* Replace the filter with `OtsuThresholdImageFilter`
* Visualize the results

Uncomment and change the radius of the filter and observe the result.

```
# median_filter = itk.MedianImageFilter[ImageType, ImageType].New()
# median_filter.SetInput(image)
# median_filter.SetRadius(XX)
# median_filter.Update()
# median_filtered_image = median_filter.GetOutput()
# view(median_filtered_image)
# %load solutions/2_Using_ITK_in_Python_real_world_filters_MedianFilter.py
```

Uncomment and edit to use `MeanImageFilter`

```
# mean_filter = itk.XX[ImageType, ImageType].New()
# mean_filter.SetInput(XX)
# mean_filter.SetRadius(XX)
# mean_filter.Update()
# mean_filtered_image = mean_filter.GetOutput()
# view(mean_filtered_image)
# %load solutions/2_Using_ITK_in_Python_real_world_filters_MeanFilter.py
```

Uncomment and replace the filter with `OtsuThresholdImageFilter`

```
# InputImageType = itk.Image[itk.ctype('float'), 2]
# OutputImageType = itk.Image[itk.ctype('short'), 2]
# otsu_filter = itk.OtsuThresholdImageFilter[XX]
# XX
# %load solutions/2_Using_ITK_in_Python_real_world_filters_OtsuFilter.py
```

## ITK Object-oriented Summary

* Has a `New()` method?
* Call `Update()` with filters!

## Supported Image Types

### Unsupported (image) types

ITK filters have compile-time performance optimized for specific image types and dimensions. When an attempt is made to use a filter with an image type that is not supported, an error like the following will occur:

`KeyError: "itkTemplate : No template [<class 'itkImagePython.itkImageD2'>] for the itk::ImageFileReader class"`

```
# image = itk.imread("data/BrainProtonDensitySlice.png", itk.D)
# print(image)
```

To find the supported types for a filter, call `.GetTypes()` on the filter. `itk.imread` wraps the `itk.ImageFileReader` filter.

```
itk.ImageFileReader.GetTypes()
```

One approach to handle this type of error is to read the image into a supported pixel type:

```
image = itk.imread("data/KitwareITK.jpg", itk.F)
```

Another approach is to cast the image to a supported image type:

```
InputImageType = itk.Image[itk.F, 2]
OutputImageType = itk.Image[itk.UC, 2]
cast_filter = itk.CastImageFilter[InputImageType, OutputImageType].New(image)
cast_filter.Update()
```

## Appendix

### Functions to know

* `itk.imread(file_name [, pixel_type])`
* `itk.imwrite(image, file_name [, compression])`
* `itk.array_from_image(image)` and `itk.array_view_from_image(image)`
* `itk.image_from_array(arr)` and `itk.image_view_from_array(arr)`

### Pixel types - Two options

* itk.ctype('float'), itk.ctype('unsigned char')
* itk.F, itk.UC

### Convenience functions

* `itk.size(image)`
* `itk.spacing(image)`
* `itk.origin(image)`
* `itk.index(image)`
* `itk.physical_size(image)`

### Type names in C++, ITK Python and NumPy

| C++           | ITK Python | NumPy         |
| :------------ | :--------: | ------------: |
| float         | itk.F      | numpy.float32 |
| double        | itk.D      | numpy.float64 |
| unsigned char | itk.UC     | numpy.uint8   |
| bool          | itk.B      | bool          |

The complete list of types can be found in the [ITK Software Guide](https://itk.org/ITKSoftwareGuide/html/Book1/ITKSoftwareGuide-Book1ch9.html#x48-1530009.5). Alternatively, the correspondence can be obtained with the following methods.

```
# names in C++ and ITK python
print(itk.F == itk.ctype('float'))  # True
print(itk.B == itk.ctype('bool'))   # True

# print the numpy names of ITK python data types
print(itk.D.dtype)   # <class 'numpy.float64'>
print(itk.UC.dtype)  # <class 'numpy.uint8'>
print(itk.B.dtype)   # <class 'bool'>
```

#### Specify pixel types

* Some ITK functions (e.g., `imread()`) will automatically detect the pixel type and dimensions of the image.
* However, in certain cases, you want to choose the pixel type of the image that is read.

```
# Automatically detect
image = itk.imread("data/KitwareITK.jpg")
print(type(image))   # <class 'itkImagePython.itkImageRGBUC2'>

# specify pixel type
image = itk.imread("data/KitwareITK.jpg", itk.ctype("unsigned char"))
print(type(image))   # <class 'itkImagePython.itkImageUC2'>
```

See also: the **[ITK Python Quick Start Guide](https://itkpythonpackage.readthedocs.io/en/latest/Quick_start_guide.html)**

### Enjoy ITK!
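As a closing aside on the "Views vs Copies" section above: a quick way to double-check the distinction is to test whether the NumPy arrays share memory and whether writes propagate back to the image. The sketch below is not part of the original tutorial; it reuses only functions already shown above, and the behavior described in the comments is my understanding of the view/copy semantics rather than output reproduced from the notebook.

```
import itk
import numpy as np

image = itk.imread("data/KitwareITK.jpg", itk.F)

view_arr = itk.array_view_from_image(image)   # wraps the image's pixel buffer
copy_arr = itk.array_from_image(image)        # independent copy of the pixel data

# The copy owns its own buffer, so it does not alias the view.
print(np.shares_memory(view_arr, copy_arr))   # expected: False

# Writing through the view changes the underlying ITK image; the copy is unaffected.
old_value = copy_arr[0, 0]
view_arr[0, 0] = 0.0
print(image.GetPixel([0, 0]))                 # expected: 0.0
print(copy_arr[0, 0] == old_value)            # expected: True
```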
### Get Google Cloud project id

```
!gcloud config list project
```

### Set Google Cloud config

```
gcloud config set compute/region $REGION
gcloud config set ai_platform/region global
```

### Create bucket

```
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}; then
    gsutil mb -l ${REGION} gs://${BUCKET}
fi
```

### Usage of argparse

```
# https://docs.python.org/ko/3/library/argparse.html
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    name or flags...  # ex: "--job-dir"
    [, nargs]         # ex: "+" gathers all remaining command-line arguments into a list;
                      #     an error is raised if at least one argument is not provided
    [, default]
    [, type]          # ex: int, float
    [, required]      # ex: True or False
    [, help]
)
args = parser.parse_args()
arguments = args.__dict__
```

### train_and_evaluate

1. build_wide_deep_model
   - create input layers
     - { name: keras.Input(name, shape, dtype) }
   - create feature columns of wide and deep features
     - create numeric_column(name)
     - create categorical column
       - categorical_column_with_vocabulary_list(name, values)
       - indicator_column(category)
     - create new deep features with numeric columns
       - bucketized_column(fc, boundaries=list)
       - indicator_column(bucketized): one-hot encoding
       - crossed_column([bucketized1, bucketized2], hash_bucket_size)
       - embedding_column(crossed, dimension=nembeds)
   - stack DenseFeatures layers of wide and deep features
   - add Dense layers
     - activation=relu
   - add Concatenate layer of wide and deep layers
   - add output Dense layer
     - units=1, activation=linear
   - create model with inputs and outputs
   - compile model
     - loss=mse, optimizer=adam, metrics=[rmse, mse]
2. load_dataset: trainds, evalds
   - get dataset from csv
     - make_csv_dataset(pattern, batch_size, COLUMNS, DEFAULTS)
   - append map function to split features and labels
   - if train, then shuffle and repeat
     - repeat(None): infinite repeat
     - shuffle(buffer_size=1000): input a proper value
   - prefetch
     - buffer_size=1 is AUTOTUNE
3. model.fit
   - trainds
   - validation_data=evalds
   - epochs = NUM_EVALS or NUM_EPOCHS
   - steps_per_epoch = train_examples // (batch_size * epochs)
   - callbacks=[callbacks]
4. set hypertune
   - report_hyperparameter_tuning_metric
     - hyperparameter_metric_tag='rmse'  # name of the monitored metric
     - metric_value=history.history['val_rmse'][-1]  # value of the monitored metric
     - global_step=args['num_epochs']
5. save model

### Meaning of steps_per_epoch

steps_per_epoch = train_examples // (batch_size * epochs)

- Virtual epochs: a single pass over the training data is split into several steps, and each step is treated like an epoch for evaluation and callbacks.

### Deleting and deploying model

```
%%bash
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
MODEL_LOCATION='gs://qwiklabs-gcp-00-0db9b1bc58c6/babyweight/trained_model/20210602015319'
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION"
# All versions must be deleted before the model itself can be deleted.
# The default version can only be deleted after all other versions are removed.
gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME} -q
gcloud ai-platform models delete ${MODEL_NAME} -q
# Create the model, then create a version.
gcloud ai-platform models create ${MODEL_NAME} --regions ${REGION}
gcloud ai-platform versions create ${MODEL_VERSION} \
    --model=${MODEL_NAME} \
    --origin=${MODEL_LOCATION} \
    --runtime-version=2.1 \
    --python-version=3.7
```
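To make the virtual-epoch bookkeeping above concrete, here is the arithmetic with placeholder numbers (these values are illustrative only and are not taken from the lab):

```
# Illustrative values only, to show how steps_per_epoch is derived.
train_examples = 10_000_000   # total examples the model should see during training
batch_size = 32
num_evals = 60                # number of "virtual epochs" (evaluations/checkpoints)

steps_per_epoch = train_examples // (batch_size * num_evals)
print(steps_per_epoch)        # 5208 batches per virtual epoch

# model.fit(trainds, validation_data=evalds,
#           epochs=num_evals, steps_per_epoch=steps_per_epoch, callbacks=[...])
```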
``` import logging import importlib importlib.reload(logging) # see https://stackoverflow.com/a/21475297/1469195 log = logging.getLogger() log.setLevel('INFO') import sys logging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s', level=logging.INFO, stream=sys.stdout) %%capture import os import site os.sys.path.insert(0, '/home/schirrmr/code/reversible/') os.sys.path.insert(0, '/home/schirrmr/braindecode/code/braindecode/') os.sys.path.insert(0, '/home/schirrmr/code/explaining/reversible//') %load_ext autoreload %autoreload 2 import numpy as np import logging log = logging.getLogger() log.setLevel('INFO') import sys logging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s', level=logging.INFO, stream=sys.stdout) import matplotlib from matplotlib import pyplot as plt from matplotlib import cm %matplotlib inline %config InlineBackend.figure_format = 'png' matplotlib.rcParams['figure.figsize'] = (12.0, 1.0) matplotlib.rcParams['font.size'] = 14 import seaborn seaborn.set_style('darkgrid') from reversible2.sliced import sliced_from_samples from numpy.random import RandomState import torch import torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable import numpy as np import copy import math import itertools import torch as th from braindecode.torch_ext.util import np_to_var, var_to_np from reversible2.splitter import SubsampleSplitter from reversible2.view_as import ViewAs from reversible2.affine import AdditiveBlock from reversible2.plot import display_text, display_close from reversible2.high_gamma import load_file, create_inputs from reversible2.high_gamma import load_train_test th.backends.cudnn.benchmark = True from reversible2.models import deep_invertible rng = RandomState(201904113)#2 ganz gut # 13 sehr gut) X = rng.randn(5,2) * np.array([1,0])[None] + np.array([-1,0])[None] X = np.sort(X, axis=0) X_test = rng.randn(500,2) * np.array([1,0])[None] + np.array([-1,0])[None] X_test = np.sort(X_test, axis=0) train_inputs = np_to_var(X[::2], dtype=np.float32) val_inputs = np_to_var(X[1::2], dtype=np.float32) test_inputs = np_to_var(X_test, dtype=np.float32) plt.figure(figsize=(5,5)) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1]) plt.scatter(var_to_np(val_inputs)[:,0], var_to_np(val_inputs)[:,1]) plt.scatter([-1],[0], color='black') plt.xlim(-2.5,5.5) plt.ylim(-4,4) ``` ### train on test ``` cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-3}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) model_and_dist.dist.get_mean_std(0) n_epochs = 10001 for i_epoch in range(n_epochs): nll = -th.mean(model_and_dist.get_total_log_prob(0, test_inputs + (th.rand_like(test_inputs) -0.5) * 1e-2)) optim.zero_grad() nll.backward() optim.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = 
model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0] * 2), var_to_np(std[1]*2), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) plt.axis('equal') plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), )) display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) ``` ## without noise ``` cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-3}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) n_epochs = 10001 for i_epoch in range(n_epochs): nll = -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)) optim.zero_grad() nll.backward() optim.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0] * 2), var_to_np(std[1]*2), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) plt.axis('equal') plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), )) display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) 
plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) ``` ## higher lr for dist ``` cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-2}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) n_epochs = 10001 for i_epoch in range(n_epochs): nll = -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)) optim.zero_grad() nll.backward() optim.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0] * 2), var_to_np(std[1]*2), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) plt.axis('equal') plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), )) display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) ``` ## now setup again with valid mixture, higher lrs for mixture and dist ``` cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse from reversible2.gaussian import get_gauss_samples set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), 
dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) tr_log_stds = (th.zeros_like(train_inputs, requires_grad=True) - 2).detach().clone().requires_grad_(True) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-2}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) optim_stds = th.optim.Adam([{'params': [tr_log_stds], 'lr':1e-2},]) from reversible2.invert import invert n_epochs = 10001 for i_epoch in range(n_epochs): with th.no_grad(): outs = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) outs = outs + (th.randn_like(outs) * th.exp(tr_log_stds)) ins = invert(model_and_dist.model, outs).detach() outs = model_and_dist.model(ins) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim.zero_grad() nll.backward() optim.step() tr_out = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) va_out = model_and_dist.model(val_inputs + (th.rand_like(val_inputs) -0.5) * 1e-2).detach() demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) nll = -th.mean(th.log(probs + eps)) #optim.zero_grad() optim_stds.zero_grad() nll.backward() #optim.step() optim_stds.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], 
color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) ``` ## Multi Phase training? ``` cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse from reversible2.gaussian import get_gauss_samples set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) tr_log_stds = (th.zeros_like(train_inputs, requires_grad=True) - 2).detach().clone().requires_grad_(True) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-2}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) optim_stds = th.optim.Adam([{'params': [tr_log_stds], 'lr':1e-2},]) ``` ### train on train ``` n_epochs = 10001 for i_epoch in range(n_epochs): nll = -th.mean(model_and_dist.get_total_log_prob(0, train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2)) optim.zero_grad() nll.backward() optim.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0] * 2), var_to_np(std[1]*2), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) plt.axis('equal') plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), )) display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) ``` ### train mixture stds ``` from reversible2.invert import invert n_epochs = 10001 for i_epoch in range(n_epochs): tr_out = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) va_out = model_and_dist.model(val_inputs + (th.rand_like(val_inputs) -0.5) * 1e-2).detach() demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - 
tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) nll = -th.mean(th.log(probs + eps)) #optim.zero_grad() optim_stds.zero_grad() nll.backward() #optim.step() optim_stds.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) ``` ### train now just from mixture samples ``` from reversible2.invert import invert n_epochs = 10001 for i_epoch in range(n_epochs): with th.no_grad(): outs = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) outs = outs + (th.randn_like(outs) * th.exp(tr_log_stds)) ins = invert(model_and_dist.model, outs).detach() outs = model_and_dist.model(ins) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim.zero_grad() nll.backward() optim.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = 
model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) ``` ### Same again, but in last phae on mixture samples, also train mixture mean ``` cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse from reversible2.gaussian import get_gauss_samples set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) tr_log_stds = (th.zeros_like(train_inputs, requires_grad=True) - 2).detach().clone().requires_grad_(True) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-2}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) optim_stds = th.optim.Adam([{'params': [tr_log_stds], 'lr':1e-2},]) n_epochs = 10001 for i_epoch in range(n_epochs): nll = -th.mean(model_and_dist.get_total_log_prob(0, train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2)) optim.zero_grad() nll.backward() optim.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0] * 2), 
var_to_np(std[1]*2), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) plt.axis('equal') plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), )) display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) from reversible2.invert import invert n_epochs = 5001 for i_epoch in range(n_epochs): tr_out = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) va_out = model_and_dist.model(val_inputs + (th.rand_like(val_inputs) -0.5) * 1e-2).detach() demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) nll = -th.mean(th.log(probs + eps)) #optim.zero_grad() optim_stds.zero_grad() nll.backward() #optim.step() optim_stds.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], 
color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) from reversible2.invert import invert n_epochs = 10001 for i_epoch in range(n_epochs): outs = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) outs = outs + (th.randn_like(outs) * th.exp(tr_log_stds)) ins = invert(model_and_dist.model, outs) outs = model_and_dist.model(ins) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim.zero_grad() nll.backward() optim.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) ```
github_jupyter
import logging import importlib importlib.reload(logging) # see https://stackoverflow.com/a/21475297/1469195 log = logging.getLogger() log.setLevel('INFO') import sys logging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s', level=logging.INFO, stream=sys.stdout) %%capture import os import site os.sys.path.insert(0, '/home/schirrmr/code/reversible/') os.sys.path.insert(0, '/home/schirrmr/braindecode/code/braindecode/') os.sys.path.insert(0, '/home/schirrmr/code/explaining/reversible//') %load_ext autoreload %autoreload 2 import numpy as np import logging log = logging.getLogger() log.setLevel('INFO') import sys logging.basicConfig(format='%(asctime)s %(levelname)s : %(message)s', level=logging.INFO, stream=sys.stdout) import matplotlib from matplotlib import pyplot as plt from matplotlib import cm %matplotlib inline %config InlineBackend.figure_format = 'png' matplotlib.rcParams['figure.figsize'] = (12.0, 1.0) matplotlib.rcParams['font.size'] = 14 import seaborn seaborn.set_style('darkgrid') from reversible2.sliced import sliced_from_samples from numpy.random import RandomState import torch import torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable import numpy as np import copy import math import itertools import torch as th from braindecode.torch_ext.util import np_to_var, var_to_np from reversible2.splitter import SubsampleSplitter from reversible2.view_as import ViewAs from reversible2.affine import AdditiveBlock from reversible2.plot import display_text, display_close from reversible2.high_gamma import load_file, create_inputs from reversible2.high_gamma import load_train_test th.backends.cudnn.benchmark = True from reversible2.models import deep_invertible rng = RandomState(201904113)#2 ganz gut # 13 sehr gut) X = rng.randn(5,2) * np.array([1,0])[None] + np.array([-1,0])[None] X = np.sort(X, axis=0) X_test = rng.randn(500,2) * np.array([1,0])[None] + np.array([-1,0])[None] X_test = np.sort(X_test, axis=0) train_inputs = np_to_var(X[::2], dtype=np.float32) val_inputs = np_to_var(X[1::2], dtype=np.float32) test_inputs = np_to_var(X_test, dtype=np.float32) plt.figure(figsize=(5,5)) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1]) plt.scatter(var_to_np(val_inputs)[:,0], var_to_np(val_inputs)[:,1]) plt.scatter([-1],[0], color='black') plt.xlim(-2.5,5.5) plt.ylim(-4,4) cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-3}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) model_and_dist.dist.get_mean_std(0) n_epochs = 10001 for i_epoch in range(n_epochs): nll = -th.mean(model_and_dist.get_total_log_prob(0, test_inputs + (th.rand_like(test_inputs) -0.5) * 1e-2)) optim.zero_grad() nll.backward() optim.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = 
model_and_dist.model(test_inputs) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0] * 2), var_to_np(std[1]*2), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) plt.axis('equal') plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), )) display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-3}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) n_epochs = 10001 for i_epoch in range(n_epochs): nll = -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)) optim.zero_grad() nll.backward() optim.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0] * 2), var_to_np(std[1]*2), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) plt.axis('equal') plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), )) display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], 
color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-2}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) n_epochs = 10001 for i_epoch in range(n_epochs): nll = -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)) optim.zero_grad() nll.backward() optim.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0] * 2), var_to_np(std[1]*2), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) plt.axis('equal') plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), )) display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse from reversible2.gaussian import get_gauss_samples set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) tr_log_stds = (th.zeros_like(train_inputs, requires_grad=True) - 2).detach().clone().requires_grad_(True) from 
reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-2}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) optim_stds = th.optim.Adam([{'params': [tr_log_stds], 'lr':1e-2},]) from reversible2.invert import invert n_epochs = 10001 for i_epoch in range(n_epochs): with th.no_grad(): outs = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) outs = outs + (th.randn_like(outs) * th.exp(tr_log_stds)) ins = invert(model_and_dist.model, outs).detach() outs = model_and_dist.model(ins) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim.zero_grad() nll.backward() optim.step() tr_out = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) va_out = model_and_dist.model(val_inputs + (th.rand_like(val_inputs) -0.5) * 1e-2).detach() demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) nll = -th.mean(th.log(probs + eps)) #optim.zero_grad() optim_stds.zero_grad() nll.backward() #optim.step() optim_stds.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") 
plt.legend(("Test", "Fake","Train", ),) display_close(fig) cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse from reversible2.gaussian import get_gauss_samples set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) tr_log_stds = (th.zeros_like(train_inputs, requires_grad=True) - 2).detach().clone().requires_grad_(True) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-2}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) optim_stds = th.optim.Adam([{'params': [tr_log_stds], 'lr':1e-2},]) n_epochs = 10001 for i_epoch in range(n_epochs): nll = -th.mean(model_and_dist.get_total_log_prob(0, train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2)) optim.zero_grad() nll.backward() optim.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0] * 2), var_to_np(std[1]*2), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) plt.axis('equal') plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), )) display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) from reversible2.invert import invert n_epochs = 10001 for i_epoch in range(n_epochs): tr_out = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) va_out = model_and_dist.model(val_inputs + (th.rand_like(val_inputs) -0.5) * 1e-2).detach() demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) nll = -th.mean(th.log(probs + eps)) #optim.zero_grad() optim_stds.zero_grad() nll.backward() #optim.step() optim_stds.step() if i_epoch % (n_epochs 
// 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) from reversible2.invert import invert n_epochs = 10001 for i_epoch in range(n_epochs): with th.no_grad(): outs = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) outs = outs + (th.randn_like(outs) * th.exp(tr_log_stds)) ins = invert(model_and_dist.model, outs).detach() outs = model_and_dist.model(ins) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim.zero_grad() nll.backward() optim.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in 
zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) cuda = False from reversible2.distribution import TwoClassIndependentDist from reversible2.blocks import dense_add_block from reversible2.rfft import RFFT, Interleave from reversible2.util import set_random_seeds from torch.nn import ConstantPad2d import torch as th from reversible2.splitter import SubsampleSplitter from matplotlib.patches import Ellipse from reversible2.gaussian import get_gauss_samples set_random_seeds(2019011641, cuda) model = nn.Sequential( dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), dense_add_block(2,200), ) dist = TwoClassIndependentDist(2, truncate_to=None) tr_log_stds = (th.zeros_like(train_inputs, requires_grad=True) - 2).detach().clone().requires_grad_(True) from reversible2.dist_model import ModelAndDist model_and_dist = ModelAndDist(model, dist) optim = th.optim.Adam([{'params': model_and_dist.dist.parameters(), 'lr':1e-2}, {'params': list(model_and_dist.model.parameters()), 'lr': 1e-4}]) optim_stds = th.optim.Adam([{'params': [tr_log_stds], 'lr':1e-2},]) n_epochs = 10001 for i_epoch in range(n_epochs): nll = -th.mean(model_and_dist.get_total_log_prob(0, train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2)) optim.zero_grad() nll.backward() optim.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0] * 2), var_to_np(std[1]*2), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) plt.axis('equal') plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, 
test_inputs)).item(), )) display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) from reversible2.invert import invert n_epochs = 5001 for i_epoch in range(n_epochs): tr_out = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) va_out = model_and_dist.model(val_inputs + (th.rand_like(val_inputs) -0.5) * 1e-2).detach() demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) nll = -th.mean(th.log(probs + eps)) #optim.zero_grad() optim_stds.zero_grad() nll.backward() #optim.step() optim_stds.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig) from reversible2.invert import invert n_epochs = 10001 for i_epoch in 
range(n_epochs): outs = model_and_dist.model(train_inputs + (th.rand_like(train_inputs) -0.5) * 1e-2) outs = outs + (th.randn_like(outs) * th.exp(tr_log_stds)) ins = invert(model_and_dist.model, outs) outs = model_and_dist.model(ins) nll = -th.mean(model_and_dist.dist.get_total_log_prob(0, outs)) optim.zero_grad() nll.backward() optim.step() if i_epoch % (n_epochs // 20) == 0: tr_out = model_and_dist.model(train_inputs) va_out = model_and_dist.model(val_inputs) te_out = model_and_dist.model(test_inputs) demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0) rescaled = demeaned / th.exp(tr_log_stds).unsqueeze(0) eps = 1e-8 log_probs = -(rescaled **2) / 2 - np.log(float(np.sqrt(2 * np.pi))) - tr_log_stds.unsqueeze(0) # sum over dimensions log_probs = th.sum(log_probs, dim=-1) probs = th.exp(log_probs) probs = th.mean(probs, dim=1) log_probs = th.log(probs + eps) fig = plt.figure(figsize=(5,5)) mean, std = model_and_dist.dist.get_mean_std(0) plt.plot(var_to_np(te_out)[:,0], var_to_np(te_out)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(tr_out)[:,0], var_to_np(tr_out)[:,1], color=seaborn.color_palette()[0]) plt.scatter(var_to_np(va_out)[:,0], var_to_np(va_out)[:,1], color=seaborn.color_palette()[2]) for lprob, out in zip(log_probs, va_out): plt.annotate("{:.1E}".format(-lprob.item()), var_to_np(out)) ellipse = Ellipse(var_to_np(mean), var_to_np(std[0]), var_to_np(std[1]), edgecolor='black', facecolor='None') ax = plt.gca() ax.add_artist(ellipse) for out, lstd in zip(tr_out, tr_log_stds): ellipse = Ellipse(var_to_np(out), var_to_np(th.exp(lstd)[0]), var_to_np(th.exp(lstd)[1]), edgecolor='blue', facecolor='None') ax.add_artist(ellipse) plt.title("Epoch {:d} of {:d}\nTrain NLL {:.1E}\nValid NLL {:.1E}\nTest NLL {:.1E}\n" "ValidMixNLL {:.1E}\n".format( i_epoch, n_epochs, -th.mean(model_and_dist.get_total_log_prob(0, train_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, val_inputs)).item(), -th.mean(model_and_dist.get_total_log_prob(0, test_inputs)).item(), -th.mean(log_probs).item(), )) plt.axis('equal') display_close(fig) examples = model_and_dist.get_examples(0,200) fig = plt.figure(figsize=(5,5)) plt.plot(var_to_np(test_inputs)[:,0], var_to_np(test_inputs)[:,1], color=seaborn.color_palette()[2]) plt.scatter(var_to_np(examples)[:,0], var_to_np(examples)[:,1], color=seaborn.color_palette()[1]) plt.scatter(var_to_np(train_inputs)[:,0], var_to_np(train_inputs)[:,1], color=seaborn.color_palette()[0]) plt.axis('equal') plt.title("Input space") plt.legend(("Test", "Fake","Train", ),) display_close(fig)
0.451568
0.340506
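The validation objective in the record above averages the per-component Gaussian densities and only then takes the log, which can underflow once the per-component log-densities become very negative. Below is a minimal, self-contained sketch (not part of the original notebook) of the same mixture NLL computed with `torch.logsumexp`; the tensor names (`va_out`, `tr_out`, `tr_log_stds`) mirror the code above, while the shapes and the random usage data are illustrative assumptions.
```
import numpy as np
import torch

def mixture_nll(va_out, tr_out, tr_log_stds):
    """NLL of va_out under a Gaussian mixture with one component per training output.

    va_out: (n_val, d), tr_out: (n_train, d), tr_log_stds: (n_train, d)."""
    demeaned = va_out.unsqueeze(1) - tr_out.unsqueeze(0)            # (n_val, n_train, d)
    rescaled = demeaned / torch.exp(tr_log_stds).unsqueeze(0)
    log_probs = -(rescaled ** 2) / 2 - np.log(np.sqrt(2 * np.pi)) - tr_log_stds.unsqueeze(0)
    log_probs = log_probs.sum(dim=-1)                               # per-component log density
    # log of the mean over components, computed stably: logsumexp - log(n_train)
    log_mix = torch.logsumexp(log_probs, dim=1) - np.log(tr_out.shape[0])
    return -log_mix.mean()

# Illustrative usage with random tensors in place of real model outputs:
nll = mixture_nll(torch.randn(8, 2), torch.randn(20, 2), torch.zeros(20, 2))
print(float(nll))
```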
# © Dr. Arkaprabha Sau ### MBBS, MD(Gold Medalist), DPH, Dip. Geriatric Medicine, PhD(Research Scholar) ## Deputy Director (Medical), Group-A Central Civil Cervices ## Directorate General of Factory Advice Service and Labour Institutes ## Ministry of labour and Employment ## Govt. of India # Install and Import package and library ## Installing Basemap is critical. Please try it in Anaconda environment ``` import matplotlib as mpl import matplotlib.pyplot as plt from mpl_toolkits.basemap import Basemap from matplotlib.patches import Polygon from matplotlib.collections import PatchCollection from matplotlib.patches import PathPatch import numpy as np import matplotlib.cm import matplotlib.patches as mpatches ``` # Mapping ``` fig, ax=plt.subplots(figsize=(20, 15)) map = Basemap() map.drawmapboundary(fill_color='#D3D3D3') map.fillcontinents(color='black',lake_color='black') map.readshapefile('/Users/arka.doctor/Dropbox/PhD_Arka/Exploratory Indian Data Analysis/\ world_shapefile_git/world','Admin3') map.readshapefile('/Users/arka.doctor/Dropbox/PhD_Arka/Exploratory Indian Data Analysis/\ India Shapefile With Kashmir/india_shapefile_git/Admin2','Admin2',drawbounds=True) patches1=[] patches2=[] patches3=[] patches4=[] patches5=[] # For Brazil for info, shape in zip(map.Admin3_info, map.Admin3): if info['NAME'] == 'Brazil': patches2.append( Polygon(np.array(shape), True) ) ax.add_collection(PatchCollection(patches2, facecolor= '#FFFF00', edgecolor='#FFFF00', linewidths=1., zorder=2)) # For Russia for info, shape in zip(map.Admin3_info, map.Admin3): if info['NAME'] == 'Russia': patches3.append( Polygon(np.array(shape), True) ) ax.add_collection(PatchCollection(patches3, facecolor= 'b', edgecolor='b', linewidths=1., zorder=2)) # For China for info, shape in zip(map.Admin3_info, map.Admin3): if info['NAME'] == 'China': patches1.append( Polygon(np.array(shape), True) ) ax.add_collection(PatchCollection(patches1, facecolor= 'r', edgecolor='r', linewidths=1., zorder=2)) # For India ## It after china position to embed the full India map over the exixting shapefile for info, shape in zip(map.Admin2_info, map.Admin2): patches4.append( Polygon(np.array(shape), True) ) ax.add_collection(PatchCollection(patches4, facecolor= '#00FF00', edgecolor='#00FF00', linewidths=1., zorder=2)) # For South Africa for info, shape in zip(map.Admin3_info, map.Admin3): if info['NAME'] == 'South Africa': patches5.append( Polygon(np.array(shape), True) ) ax.add_collection(PatchCollection(patches5, facecolor= 'm', edgecolor='m', linewidths=1., zorder=2)) # Add legend yellow_patch = mpatches.Patch(color='#FFFF00', label='Brazil') blue_patch = mpatches.Patch(color='b', label='Russia') red_patch = mpatches.Patch(color='r', label='China') green_patch = mpatches.Patch(color='#00FF00', label='India') magenta_patch= mpatches.Patch(color='m', label='South Africa') plt.legend(handles=[yellow_patch,blue_patch,green_patch,red_patch,magenta_patch],loc=6,fontsize='xx-large') plt.title("BRICS Countries") #plt.savefig('BRICS.png',dpi=300) plt.show() ```
github_jupyter
import matplotlib as mpl import matplotlib.pyplot as plt from mpl_toolkits.basemap import Basemap from matplotlib.patches import Polygon from matplotlib.collections import PatchCollection from matplotlib.patches import PathPatch import numpy as np import matplotlib.cm import matplotlib.patches as mpatches fig, ax=plt.subplots(figsize=(20, 15)) map = Basemap() map.drawmapboundary(fill_color='#D3D3D3') map.fillcontinents(color='black',lake_color='black') map.readshapefile('/Users/arka.doctor/Dropbox/PhD_Arka/Exploratory Indian Data Analysis/\ world_shapefile_git/world','Admin3') map.readshapefile('/Users/arka.doctor/Dropbox/PhD_Arka/Exploratory Indian Data Analysis/\ India Shapefile With Kashmir/india_shapefile_git/Admin2','Admin2',drawbounds=True) patches1=[] patches2=[] patches3=[] patches4=[] patches5=[] # For Brazil for info, shape in zip(map.Admin3_info, map.Admin3): if info['NAME'] == 'Brazil': patches2.append( Polygon(np.array(shape), True) ) ax.add_collection(PatchCollection(patches2, facecolor= '#FFFF00', edgecolor='#FFFF00', linewidths=1., zorder=2)) # For Russia for info, shape in zip(map.Admin3_info, map.Admin3): if info['NAME'] == 'Russia': patches3.append( Polygon(np.array(shape), True) ) ax.add_collection(PatchCollection(patches3, facecolor= 'b', edgecolor='b', linewidths=1., zorder=2)) # For China for info, shape in zip(map.Admin3_info, map.Admin3): if info['NAME'] == 'China': patches1.append( Polygon(np.array(shape), True) ) ax.add_collection(PatchCollection(patches1, facecolor= 'r', edgecolor='r', linewidths=1., zorder=2)) # For India ## It after china position to embed the full India map over the exixting shapefile for info, shape in zip(map.Admin2_info, map.Admin2): patches4.append( Polygon(np.array(shape), True) ) ax.add_collection(PatchCollection(patches4, facecolor= '#00FF00', edgecolor='#00FF00', linewidths=1., zorder=2)) # For South Africa for info, shape in zip(map.Admin3_info, map.Admin3): if info['NAME'] == 'South Africa': patches5.append( Polygon(np.array(shape), True) ) ax.add_collection(PatchCollection(patches5, facecolor= 'm', edgecolor='m', linewidths=1., zorder=2)) # Add legend yellow_patch = mpatches.Patch(color='#FFFF00', label='Brazil') blue_patch = mpatches.Patch(color='b', label='Russia') red_patch = mpatches.Patch(color='r', label='China') green_patch = mpatches.Patch(color='#00FF00', label='India') magenta_patch= mpatches.Patch(color='m', label='South Africa') plt.legend(handles=[yellow_patch,blue_patch,green_patch,red_patch,magenta_patch],loc=6,fontsize='xx-large') plt.title("BRICS Countries") #plt.savefig('BRICS.png',dpi=300) plt.show()
0.302185
0.864597
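The mapping notebook above repeats the same select-and-fill loop once per country. Purely as a sketch, that loop could be factored into a helper like the one below; it assumes a Basemap instance whose world shapefile was read under the name 'Admin3' (so the `Admin3_info` / `Admin3` attributes exist, as in the original code), and the country/colour pairs in the usage comment are taken from the notebook.
```
import numpy as np
from matplotlib.patches import Polygon
from matplotlib.collections import PatchCollection

def fill_country(ax, m, name, color, info_attr='Admin3_info', shape_attr='Admin3'):
    """Fill every shapefile polygon whose NAME field matches `name` with `color`."""
    patches = [Polygon(np.array(shape), closed=True)
               for info, shape in zip(getattr(m, info_attr), getattr(m, shape_attr))
               if info['NAME'] == name]
    if patches:  # skip names that are absent from the shapefile
        ax.add_collection(PatchCollection(patches, facecolor=color, edgecolor=color,
                                          linewidths=1., zorder=2))

# Example usage, after map.readshapefile(..., 'Admin3') as in the notebook:
# for name, color in [('Brazil', '#FFFF00'), ('Russia', 'b'),
#                     ('China', 'r'), ('South Africa', 'm')]:
#     fill_country(ax, map, name, color)
```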
<a href="https://colab.research.google.com/github/manuelP88/text_classification/blob/main/newspaper_vs_propaganda_text_classification.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` !pip install datasets !pip install transformers !pip install torch import warnings from datasets import load_dataset, DatasetDict import pandas as pd import torch import torch.utils.data as torch_data from transformers import DistilBertModel, DistilBertTokenizer, AdamW, DistilBertForSequenceClassification from datasets import load_metric import transformers.modeling_outputs from transformers import get_scheduler from torch import cuda from tqdm.auto import tqdm import matplotlib.pyplot as plt import types def require_not_None(obj, message="Require not None") -> object: if obj is None: raise Exception(message) return obj def load_dataset_from_csv_enc_labels(csv_path=None) -> pd.DataFrame: require_not_None(csv_path) # Import the csv into pandas dataframe and add the headers df = pd.read_csv( csv_path, sep='§', #names=['url', 'domain', 'label', 'title', 'text'], engine='python' ) df = df[['domain', 'text', 'label']] encode_dict = {} def encode_cat(x): if x not in encode_dict.keys(): encode_dict[x] = len(encode_dict) return encode_dict[x] df['enc_label'] = df['label'].apply(lambda x: encode_cat(x)) if len(encode_dict.keys()) != 2: raise Exception("error! More tan 2 categories are detected!") return df def train_test_split(train:pd.DataFrame, frac: float = 0.8, random_state:int=200): require_not_None(train) require_not_None(frac) train_dataset = train.sample(frac=frac, random_state=random_state) test_dataset = train.drop(train_dataset.index).reset_index(drop=True) train_dataset = train_dataset.reset_index(drop=True) return train_dataset, test_dataset batch_size = 16 LEARNING_RATE = 1e-05 num_epochs = 20 from google.colab import drive drive.mount('/drive') ds2 = '/drive/My Drive/Colab Notebooks/datasets/newspaper_vs_propaganda.csv' model_out = '/drive/My Drive/Colab Notebooks/models/newspaper_vs_propaganda_model' df = load_dataset_from_csv_enc_labels( ds2 ) df.label.value_counts() def downsample(df:pd.DataFrame, label_col_name:str, random_state:int=200) -> pd.DataFrame: # find the number of observations in the smallest group nmin = df[label_col_name].value_counts().min() return (df # split the dataframe per group .groupby(label_col_name) # sample nmin observations from each group .apply(lambda x: x.sample(nmin, random_state=random_state)) # recombine the dataframes .reset_index(drop=True) ) df = downsample(df, "label") df.label.value_counts() train_dataset, test_dataset = train_test_split(df, frac=0.2)#0,8 train_dataset, val_dataset = train_test_split(train_dataset, frac=0.8) warnings.warn("What are the right dimensions of train-validation-test sets?") print("FULL Dataset: {}".format(df.shape)) print("TRAIN Dataset: {}".format(train_dataset.shape)) print("TEST Dataset: {}".format(test_dataset.shape)) print("VALID Dataset: {}".format(val_dataset.shape)) df.groupby("label")["label"].count().plot(kind='bar') train_dataset.groupby("label")["label"].count().plot(kind='bar') test_dataset.groupby("label")["label"].count().plot(kind='bar') val_dataset.groupby("label")["label"].count().plot(kind='bar') class GossipScience(torch_data.Dataset): def __init__(self, dataframe, tokenizer, max_len=512): self.len = len(dataframe) self.data = dataframe self.tokenizer = tokenizer self.max_len = max_len def _get_value(self, index): text = 
str(self.data.text[index]) text = " ".join(text.split()) return text def __getitem__(self, index): value = self._get_value(index) inputs = self.tokenizer.encode_plus( value, add_special_tokens=True, max_length=self.max_len, truncation=True, padding="max_length" ) return { 'input_ids': torch.tensor(inputs['input_ids'], dtype=torch.long), 'attention_mask': torch.tensor(inputs['attention_mask'], dtype=torch.long), 'labels': torch.tensor(self.data.enc_label[index], dtype=torch.long) } def get_my_item(self, index): return self.__getitem__(index) def __len__(self): return self.len checkpoint = 'distilbert-base-cased' tokenizer = DistilBertTokenizer.from_pretrained( checkpoint ) training_set = GossipScience(train_dataset, tokenizer) testing_set = GossipScience(test_dataset, tokenizer) validation_set = GossipScience(val_dataset, tokenizer) train_params = {'batch_size': batch_size, 'shuffle': True} test_params = {'batch_size': batch_size, 'shuffle': True} validation_params = {'batch_size': batch_size, 'shuffle': True} device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') training_loader:torch_data.DataLoader = torch_data.DataLoader(training_set, **train_params) test_loader:torch_data.DataLoader = torch_data.DataLoader(testing_set, **test_params) validation_loader:torch_data.DataLoader = torch_data.DataLoader(validation_set, **validation_params) model = DistilBertForSequenceClassification.from_pretrained( checkpoint ) warnings.warn("You have to train a more complex model!") import torch torch.cuda.is_available() training_loader optimizer = AdamW(model.parameters(), lr=LEARNING_RATE) num_training_steps = num_epochs * len(training_loader) progress_bar = tqdm(range(num_training_steps)) lr_scheduler = get_scheduler( "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ) loss_fct = torch.nn.CrossEntropyLoss() model.to(device) accuracies:dict = {} avg_training_loss: dict = {} avg_validation_loss: dict = {} validation_accuracy: dict = {} for epoch in range(num_epochs): accuracies[epoch] = [] training_loss = 0.0 valid_loss = 0.0 tr_accuracy = load_metric("accuracy") model.train() count=0 for batch in training_loader: optimizer.zero_grad() batch = {k: v.to(device) for k, v in batch.items()} outputs = model(**batch) loss = loss_fct(outputs.get('logits'), batch.get("labels")) predictions = torch.argmax(outputs.get('logits'), dim=-1) tr_accuracy.add_batch(predictions=predictions, references=batch["labels"]) accuracies[epoch].append( tr_accuracy.compute()['accuracy'] ) loss.backward() optimizer.step() training_loss+=loss.data.item() lr_scheduler.step() progress_bar.update(1) count=count+1 training_loss/=len(training_loader) avg_training_loss[epoch] = training_loss accuracy = load_metric("accuracy") model.eval() for batch in validation_loader: batch = {k: v.to(device) for k, v in batch.items()} with torch.no_grad(): outputs = model(**batch) loss = loss_fct(outputs.get('logits'), batch.get("labels")) valid_loss+=loss.data.item() predictions = torch.argmax(outputs.get('logits'), dim=-1) accuracy.add_batch(predictions=predictions, references=batch["labels"]) valid_loss/=len(validation_loader) avg_validation_loss[epoch] = valid_loss validation_accuracy[epoch] = accuracy.compute()['accuracy'] print("Epoch: "+str(epoch)+", Training Loss: "+str(avg_training_loss[epoch])+", Validation Loss: "+str(avg_validation_loss[epoch])+", accuracy = "+str(validation_accuracy[epoch])) def plot_multiple_line(points:list=None, xlabel:str=None, ylabel:str=None, 
title:str=None): plt.figure() x_axis:int = range(0, max( [len(i) for i in points] )) for i in range(len(points)): plt.plot(x_axis, [x for x in points[i]]) plt.xlabel(xlabel) plt.ylabel(ylabel) plt.title(title) def plot_dict(d:dict=None, xlabel:str=None, ylabel:str=None, title:str=None): lists = sorted(d.items()) # sorted by key, return a list of tuples x, y = zip(*lists) # unpack a list of pairs into two tuples plt.plot(x, y) plt.xlabel(xlabel) plt.ylabel(ylabel) plt.title(title) plot_multiple_line(list(accuracies.values()), 'batch', 'accuracy', 'Training Accuracy vs. Batches in same epoch') plot_dict(avg_training_loss, 'epoch', 'loss', 'Training Loss vs. No. of epochs') plot_dict(avg_validation_loss, 'epoch', 'loss', 'Validation Loss vs. No. of epochs') plot_dict(validation_accuracy, 'epoch', 'accuracy', 'Validation Accuracy vs. No. of epochs') test_accuracy = load_metric("accuracy") avg_test_loss = 0 model.eval() for batch in test_loader: batch = {k: v.to(device) for k, v in batch.items()} with torch.no_grad(): outputs = model(**batch) loss = loss_fct(outputs.get('logits'), batch.get("labels")) avg_test_loss+=loss.data.item() predictions = torch.argmax(outputs.get('logits'), dim=-1) test_accuracy.add_batch(predictions=predictions, references=batch["labels"]) avg_test_loss/=len(test_loader) print("Test Loss: "+str(avg_test_loss)+", accuracy = "+str(test_accuracy.compute()['accuracy'])) torch.save(model, model_out) ```
github_jupyter
!pip install datasets !pip install transformers !pip install torch import warnings from datasets import load_dataset, DatasetDict import pandas as pd import torch import torch.utils.data as torch_data from transformers import DistilBertModel, DistilBertTokenizer, AdamW, DistilBertForSequenceClassification from datasets import load_metric import transformers.modeling_outputs from transformers import get_scheduler from torch import cuda from tqdm.auto import tqdm import matplotlib.pyplot as plt import types def require_not_None(obj, message="Require not None") -> object: if obj is None: raise Exception(message) return obj def load_dataset_from_csv_enc_labels(csv_path=None) -> pd.DataFrame: require_not_None(csv_path) # Import the csv into pandas dataframe and add the headers df = pd.read_csv( csv_path, sep='§', #names=['url', 'domain', 'label', 'title', 'text'], engine='python' ) df = df[['domain', 'text', 'label']] encode_dict = {} def encode_cat(x): if x not in encode_dict.keys(): encode_dict[x] = len(encode_dict) return encode_dict[x] df['enc_label'] = df['label'].apply(lambda x: encode_cat(x)) if len(encode_dict.keys()) != 2: raise Exception("error! More tan 2 categories are detected!") return df def train_test_split(train:pd.DataFrame, frac: float = 0.8, random_state:int=200): require_not_None(train) require_not_None(frac) train_dataset = train.sample(frac=frac, random_state=random_state) test_dataset = train.drop(train_dataset.index).reset_index(drop=True) train_dataset = train_dataset.reset_index(drop=True) return train_dataset, test_dataset batch_size = 16 LEARNING_RATE = 1e-05 num_epochs = 20 from google.colab import drive drive.mount('/drive') ds2 = '/drive/My Drive/Colab Notebooks/datasets/newspaper_vs_propaganda.csv' model_out = '/drive/My Drive/Colab Notebooks/models/newspaper_vs_propaganda_model' df = load_dataset_from_csv_enc_labels( ds2 ) df.label.value_counts() def downsample(df:pd.DataFrame, label_col_name:str, random_state:int=200) -> pd.DataFrame: # find the number of observations in the smallest group nmin = df[label_col_name].value_counts().min() return (df # split the dataframe per group .groupby(label_col_name) # sample nmin observations from each group .apply(lambda x: x.sample(nmin, random_state=random_state)) # recombine the dataframes .reset_index(drop=True) ) df = downsample(df, "label") df.label.value_counts() train_dataset, test_dataset = train_test_split(df, frac=0.2)#0,8 train_dataset, val_dataset = train_test_split(train_dataset, frac=0.8) warnings.warn("What are the right dimensions of train-validation-test sets?") print("FULL Dataset: {}".format(df.shape)) print("TRAIN Dataset: {}".format(train_dataset.shape)) print("TEST Dataset: {}".format(test_dataset.shape)) print("VALID Dataset: {}".format(val_dataset.shape)) df.groupby("label")["label"].count().plot(kind='bar') train_dataset.groupby("label")["label"].count().plot(kind='bar') test_dataset.groupby("label")["label"].count().plot(kind='bar') val_dataset.groupby("label")["label"].count().plot(kind='bar') class GossipScience(torch_data.Dataset): def __init__(self, dataframe, tokenizer, max_len=512): self.len = len(dataframe) self.data = dataframe self.tokenizer = tokenizer self.max_len = max_len def _get_value(self, index): text = str(self.data.text[index]) text = " ".join(text.split()) return text def __getitem__(self, index): value = self._get_value(index) inputs = self.tokenizer.encode_plus( value, add_special_tokens=True, max_length=self.max_len, truncation=True, padding="max_length" ) return { 
'input_ids': torch.tensor(inputs['input_ids'], dtype=torch.long), 'attention_mask': torch.tensor(inputs['attention_mask'], dtype=torch.long), 'labels': torch.tensor(self.data.enc_label[index], dtype=torch.long) } def get_my_item(self, index): return self.__getitem__(index) def __len__(self): return self.len checkpoint = 'distilbert-base-cased' tokenizer = DistilBertTokenizer.from_pretrained( checkpoint ) training_set = GossipScience(train_dataset, tokenizer) testing_set = GossipScience(test_dataset, tokenizer) validation_set = GossipScience(val_dataset, tokenizer) train_params = {'batch_size': batch_size, 'shuffle': True} test_params = {'batch_size': batch_size, 'shuffle': True} validation_params = {'batch_size': batch_size, 'shuffle': True} device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') training_loader:torch_data.DataLoader = torch_data.DataLoader(training_set, **train_params) test_loader:torch_data.DataLoader = torch_data.DataLoader(testing_set, **test_params) validation_loader:torch_data.DataLoader = torch_data.DataLoader(validation_set, **validation_params) model = DistilBertForSequenceClassification.from_pretrained( checkpoint ) warnings.warn("You have to train a more complex model!") import torch torch.cuda.is_available() training_loader optimizer = AdamW(model.parameters(), lr=LEARNING_RATE) num_training_steps = num_epochs * len(training_loader) progress_bar = tqdm(range(num_training_steps)) lr_scheduler = get_scheduler( "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ) loss_fct = torch.nn.CrossEntropyLoss() model.to(device) accuracies:dict = {} avg_training_loss: dict = {} avg_validation_loss: dict = {} validation_accuracy: dict = {} for epoch in range(num_epochs): accuracies[epoch] = [] training_loss = 0.0 valid_loss = 0.0 tr_accuracy = load_metric("accuracy") model.train() count=0 for batch in training_loader: optimizer.zero_grad() batch = {k: v.to(device) for k, v in batch.items()} outputs = model(**batch) loss = loss_fct(outputs.get('logits'), batch.get("labels")) predictions = torch.argmax(outputs.get('logits'), dim=-1) tr_accuracy.add_batch(predictions=predictions, references=batch["labels"]) accuracies[epoch].append( tr_accuracy.compute()['accuracy'] ) loss.backward() optimizer.step() training_loss+=loss.data.item() lr_scheduler.step() progress_bar.update(1) count=count+1 training_loss/=len(training_loader) avg_training_loss[epoch] = training_loss accuracy = load_metric("accuracy") model.eval() for batch in validation_loader: batch = {k: v.to(device) for k, v in batch.items()} with torch.no_grad(): outputs = model(**batch) loss = loss_fct(outputs.get('logits'), batch.get("labels")) valid_loss+=loss.data.item() predictions = torch.argmax(outputs.get('logits'), dim=-1) accuracy.add_batch(predictions=predictions, references=batch["labels"]) valid_loss/=len(validation_loader) avg_validation_loss[epoch] = valid_loss validation_accuracy[epoch] = accuracy.compute()['accuracy'] print("Epoch: "+str(epoch)+", Training Loss: "+str(avg_training_loss[epoch])+", Validation Loss: "+str(avg_validation_loss[epoch])+", accuracy = "+str(validation_accuracy[epoch])) def plot_multiple_line(points:list=None, xlabel:str=None, ylabel:str=None, title:str=None): plt.figure() x_axis:int = range(0, max( [len(i) for i in points] )) for i in range(len(points)): plt.plot(x_axis, [x for x in points[i]]) plt.xlabel(xlabel) plt.ylabel(ylabel) plt.title(title) def plot_dict(d:dict=None, xlabel:str=None, ylabel:str=None, 
title:str=None): lists = sorted(d.items()) # sorted by key, return a list of tuples x, y = zip(*lists) # unpack a list of pairs into two tuples plt.plot(x, y) plt.xlabel(xlabel) plt.ylabel(ylabel) plt.title(title) plot_multiple_line(list(accuracies.values()), 'batch', 'accuracy', 'Training Accuracy vs. Batches in same epoch') plot_dict(avg_training_loss, 'epoch', 'loss', 'Training Loss vs. No. of epochs') plot_dict(avg_validation_loss, 'epoch', 'loss', 'Validation Loss vs. No. of epochs') plot_dict(validation_accuracy, 'epoch', 'accuracy', 'Validation Accuracy vs. No. of epochs') test_accuracy = load_metric("accuracy") avg_test_loss = 0 model.eval() for batch in test_loader: batch = {k: v.to(device) for k, v in batch.items()} with torch.no_grad(): outputs = model(**batch) loss = loss_fct(outputs.get('logits'), batch.get("labels")) avg_test_loss+=loss.data.item() predictions = torch.argmax(outputs.get('logits'), dim=-1) test_accuracy.add_batch(predictions=predictions, references=batch["labels"]) avg_test_loss/=len(test_loader) print("Test Loss: "+str(avg_test_loss)+", accuracy = "+str(test_accuracy.compute()['accuracy'])) torch.save(model, model_out)
0.826292
0.781205
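The DistilBERT notebook above runs near-identical loops for validation and test evaluation. The sketch below shows one way that could be factored out; it assumes batches shaped like the notebook's (dicts containing 'labels') and a Hugging Face sequence-classification model whose output exposes `.logits`, while the helper name and return values are illustrative.
```
import torch

@torch.no_grad()
def evaluate(model, loader, loss_fct, device):
    """Average loss and accuracy of `model` over all batches in `loader`."""
    model.eval()
    total_loss, correct, seen = 0.0, 0, 0
    for batch in loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        logits = model(**batch).logits
        total_loss += loss_fct(logits, batch["labels"]).item()
        correct += (logits.argmax(dim=-1) == batch["labels"]).sum().item()
        seen += batch["labels"].size(0)
    return total_loss / max(len(loader), 1), correct / max(seen, 1)

# Illustrative usage: val_loss, val_acc = evaluate(model, validation_loader, loss_fct, device)
```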
# Introduction to TensorFlow Core APIs This is a notebook following the guide https://www.tensorflow.org/guide/low_level_intro Recall that * computations in TensorFlow happen by executing a ``tf.Graph``, * the graph can be defined but not necessarily run, * running is performed via a ``tf.Session`` object. ``` from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np import tensorflow as tf ``` **Definition** (*Tensor*) a *tensor* (more precisely *tensor value*) in the mathematical sense is an $N$-dimensional array with entries in some field. For example, in the field of real numbers $\mathbb{R}$, $\mathbf{x} = [0,-1,\pi, 0.99]$ is a tensor. Notice that vectors and matrices are special cases of tensors with $1$ and $2$ dimensions, respectively. The number of dimensions is called the *rank* of the tensor and the list of the number of elements in each dimension is called the *shape*. In the previous example the rank is $1$ (*i.e.*, a vector) and the shape is $[4]$. Notice that the concept of a tensor is very well represented by ``numpy`` ``ndarray`` objects, which is, in fact, what TensorFlow uses under the hood. ``` t = np.asarray([0,-1, np.pi, 0.99]) ``` ## Building the graph As stated above, the first part of a TensorFlow computation is the construction of a graph, which is a ``tf.Graph`` object. In TensorFlow, nodes and edges of the graph are objects of type ``tf.Operation`` and ``tf.Tensor``, respectively. Notice that ``tf.Tensor`` is not a tensor value but rather a "placeholder" for what can be substituted with a tensor value. It is interesting to notice how the documentation defines the ``tf.Tensor`` object as representing the "output of an ``Operation``". This stresses the fact that ``tf.Tensor`` objects are to be interpreted as outputs and therefore they should always be constructed as the result of some operation (indeed an edge of a graph must always have endpoints; constructing a ``Tensor`` from an ``Operation`` - *i.e.*, a node - guarantees that the starting end is always defined). ``` c = tf.constant([[1.0, 2.0], [3.0, 4.0]]) d = tf.constant([[1.0, 1.0], [0.0, 1.0]]) e = tf.matmul(c, d) sess = tf.compat.v1.Session() result = sess.run(e) print(result) ``` The result is as expected; however, to obtain the actual values of the matrix ``e=c*d`` we had to run the graph in a session. ``` a = tf.constant(3.0, dtype=tf.float32) b = tf.constant(4.0) total = a + b print(a) print(b) print(total) ``` Again, ``Tensor`` objects do not have actual values; to produce a result we need to run the computation (run the graph) ``` with tf.Session() as sess: result = sess.run(total) print(result) delta = tf.abs(a - b) with tf.Session() as sess: print(sess.run({"Tot." : total, "Delta" : delta})) ``` Because ``Tensor`` objects do not contain values, we can initialize them to be just "placeholders" for values that will be provided during the computation itself ``` x = tf.placeholder(tf.float32) y = tf.placeholder(tf.float32) z = x + y ``` When a ``Session`` is executed, the placeholders can be filled by passing the ``feed_dict`` dictionary to the ``run`` method of the ``Session`` ``` with tf.Session() as sess: print(sess.run(z, feed_dict={x : "-3", y : "0.01"})) print(sess.run(z, feed_dict={x : [1,3], y : [4,5]})) ``` There are preferable ways to feed data into a graph during runs; in particular the ``tf.data.Dataset`` object serves this purpose, but to use it a proper iterator is needed ``` data = [ [0, 1], [2, 3], [4, 5], [6, 7] ] slices = tf.data.Dataset.from_tensor_slices(data) next_item = slices.make_one_shot_iterator().get_next() ``` Even following the official guide step by step, you get a deprecation warning (sic!) ``` with tf.Session() as sess: while True: try: print(sess.run(next_item)) except tf.errors.OutOfRangeError: break ``` ## Layers Constructing graphs by defining each single node may be tedious; moreover there could be the opportunity to re-use entire subgraphs in different places. Besides higher level APIs (*e.g.* ``tf.keras``), even at low level TensorFlow uses the notion of a *layer*. Many ready-to-use layers are available in TensorFlow, most notably ``tf.layers.Dense``, which constitutes a basic building block for MLPs. ``` x = tf.placeholder(tf.float32, shape=[None, 3]) linear_model = tf.layers.Dense(units=1) y = linear_model(x) ``` The ``Dense`` object has many interesting properties ``` print({ "units" : linear_model.units, "activation" : linear_model.activation, "trainable" : linear_model.trainable, "use_bias" : linear_model.use_bias, "kernel_constraint" : linear_model.kernel_constraint, }) ``` Layers need some initialization, and we can use a default initializer ``` init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) print(sess.run(y, {x : [[1,2,3],[4,5,6]]})) ``` Finally, let's try to train a linear model. Define the data ``` x = tf.constant([[1], [2], [3], [4]], dtype=tf.float32) y_true = tf.constant([[0], [-1], [-2], [-3]], dtype=tf.float32) ``` Define and initialize the model ``` linear_model = tf.layers.Dense(units=1) y_pred = linear_model(x) sess = tf.Session() init = tf.global_variables_initializer() sess.run(init) ``` Make some untrained predictions ``` print(sess.run(y_pred)) # loss function loss = tf.losses.mean_squared_error(labels=y_true, predictions=y_pred) print(sess.run(loss)) # Optimization optimizer = tf.train.GradientDescentOptimizer(0.01) train = optimizer.minimize(loss) for i in range(100): _, loss_value = sess.run((train,loss)) print(loss_value) # prediction print(sess.run(y_pred)) ``` At the end it is always a good idea to close the session (in fact the best way is to use ``with``) ``` sess.close() ```
github_jupyter
from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np import tensorflow as tf t = np.asarray([0,-1, np.pi, 0.99]) c = tf.constant([[1.0, 2.0], [3.0, 4.0]]) d = tf.constant([[1.0, 1.0], [0.0, 1.0]]) e = tf.matmul(c, d) sess = tf.compat.v1.Session() result = sess.run(e) print(result) a = tf.constant(3.0, dtype=tf.float32) b = tf.constant(4.0) total = a + b print(a) print(b) print(total) with tf.Session() as sess: result = sess.run(total) print(result) delta = tf.abs(a - b) with tf.Session() as sess: print(sess.run({"Tot." : total, "Delta" : delta})) x = tf.placeholder(tf.float32) y = tf.placeholder(tf.float32) z = x + y with tf.Session() as sess: print(sess.run(z, feed_dict={x : "-3", y : "0.01"})) print(sess.run(z, feed_dict={x : [1,3], y : [4,5]})) data = [ [0, 1], [2, 3], [4, 5], [6, 7] ] slices = tf.data.Dataset.from_tensor_slices(data) next_item = slices.make_one_shot_iterator().get_next() with tf.Session() as sess: while True: try: print(sess.run(next_item)) except tf.errors.OutOfRangeError: break x = tf.placeholder(tf.float32, shape=[None, 3]) linear_model = tf.layers.Dense(units=1) y = linear_model(x) print({ "units" : linear_model.units, "activation" : linear_model.activation, "trainable" : linear_model.trainable, "use_bias" : linear_model.use_bias, "kernel_constraint" : linear_model.kernel_constraint, }) init = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init) print(sess.run(y, {x : [[1,2,3],[4,5,6]]})) x = tf.constant([[1], [2], [3], [4]], dtype=tf.float32) y_true = tf.constant([[0], [-1], [-2], [-3]], dtype=tf.float32) linear_model = tf.layers.Dense(units=1) y_pred = linear_model(x) sess = tf.Session() init = tf.global_variables_initializer() sess.run(init) print(sess.run(y_pred)) # loss function loss = tf.losses.mean_squared_error(labels=y_true, predictions=y_pred) print(sess.run(loss)) # Optimization optimizer = tf.train.GradientDescentOptimizer(0.01) train = optimizer.minimize(loss) for i in range(100): _, loss_value = sess.run((train,loss)) print(loss_value) # prediction print(sess.run(y_pred)) sess.close()
0.627267
0.985909
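The TensorFlow notebook above is written against the deprecated TF1 Session/placeholder API. For comparison only, here is a sketch of the same toy linear regression in TensorFlow 2.x eager style (assuming TF >= 2.0 is installed); it is not part of the original notebook.
```
import tensorflow as tf

x = tf.constant([[1.], [2.], [3.], [4.]])
y_true = tf.constant([[0.], [-1.], [-2.], [-3.]])

linear_model = tf.keras.layers.Dense(units=1)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

for step in range(100):
    with tf.GradientTape() as tape:
        y_pred = linear_model(x)
        loss = tf.reduce_mean(tf.square(y_true - y_pred))
    grads = tape.gradient(loss, linear_model.trainable_variables)
    optimizer.apply_gradients(zip(grads, linear_model.trainable_variables))

print(linear_model(x).numpy())  # trained predictions, no Session or initializer needed
```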
# Formatting the Yrbss file A small notebook I created to format and prepare the Excel file: change the names and order of the columns, drop the empty cells, and export the result as CSV. Needs the Excel file name at **In [2]** and the CSV output name at **In[33]** ``` import pandas as pd import numpy as np df = pd.read_excel('../Dataset/yrbss_2011.xlsx') ``` * Raw Dataframe ``` #Raw dataframe df.head() df.info() ``` * Dataframe after cleaning up the "NaN" values ``` df.dropna(inplace=True) df.head() df.info() # We go from 13583 observations to 11388 non-null observations ``` Renaming the columns ``` df.columns = ['age','gender','grade','height','weight','active','helmet','lifting'] df.head() ``` Resetting the indexes ``` df.reset_index(drop='index',inplace=True) df.head(10) ``` **We need to replace the specification codes with their values as described in the documentation**<br/> https://github.com/chikoungoun/OpenIntro/blob/master/Chapter%204/Yrbss/2017_yrbs_sadc_documentation.pdf (starting at page 6) <br/> Or in my summary: https://github.com/chikoungoun/OpenIntro/blob/master/Chapter%204/Yrbss/Yrbss-Specifications%20fo%20OpenIntro.docx * **Changing the ages** the codes 1 to 7 correspond respectively to ages 12 through 18 or higher ``` df.head() #counting how many interation of each age category we have df['age'].value_counts() age = df['age'] #Replacing the mapping values age.replace([1,2,3,4,5,6,7],[12,13,14,15,16,17,18],inplace=True) df.head() #let's be sure of the amount of ages print(df['age'].value_counts()) print('Done') ``` * **Replacing the genders** 1 for females, and 2 for males ``` sex = df['gender'] sex.value_counts() sex.replace([1,2],['female','male'],inplace=True) df['gender'].value_counts() ``` * **Replacing the grades** replacing the values we have (1,2,3,4) with their high school grade equivalents (9th, 10th, 11th and 12th) ``` grades = df['grade'] grades.value_counts() grades.replace([1,2,3,4],[9,10,11,12],inplace=True) df['gender'].value_counts() ``` * **Replacing the helmet stat** measuring how often a cyclist wears a helmet ``` helmet = df['helmet'] helmet.value_counts() helmet.replace([1,2,3,4,5,6],['not riding','never','rarely','sometimes','most of the time','always'],inplace=True) df['helmet'].value_counts() ``` * **Replacing the lifting stats** measuring how often a student had strength training ``` lifting = df['lifting'] lifting.value_counts() lifting.replace(np.arange(1,9),np.arange(8),inplace=True) df['lifting'].value_counts() ``` ### Dataframe after replacing all the data, following the book example ``` #Swap the columns to match exactly the dataframe df = df[['age','gender','grade','height','weight','helmet','active','lifting']] df.head(20) ``` ## Exporting the file as CSV ``` df.to_csv('../Dataset/yrbss_2011.csv') print('Done') ```
github_jupyter
import pandas as pd import numpy as np df = pd.read_excel('../Dataset/yrbss_2011.xlsx') #Raw dataframe df.head() df.info() df.dropna(inplace=True) df.head() df.info() # We go from 13583 observations to 11388 non-null observations df.columns = ['age','gender','grade','height','weight','active','helmet','lifting'] df.head() df.reset_index(drop='index',inplace=True) df.head(10) df.head() #counting how many interation of each age category we have df['age'].value_counts() age = df['age'] #Replacing the mapping values age.replace([1,2,3,4,5,6,7],[12,13,14,15,16,17,18],inplace=True) df.head() #let's be sure of the amount of ages print(df['age'].value_counts()) print('Done') sex = df['gender'] sex.value_counts() sex.replace([1,2],['female','male'],inplace=True) df['gender'].value_counts() grades = df['grade'] grades.value_counts() grades.replace([1,2,3,4],[9,10,11,12],inplace=True) df['gender'].value_counts() helmet = df['helmet'] helmet.value_counts() helmet.replace([1,2,3,4,5,6],['not riding','never','rarely','sometimes','most of the time','always'],inplace=True) df['helmet'].value_counts() lifting = df['lifting'] lifting.value_counts() lifting.replace(np.arange(1,9),np.arange(8),inplace=True) df['lifting'].value_counts() #Swap the columns to match exactly the dataframe df = df[['age','gender','grade','height','weight','helmet','active','lifting']] df.head(20) df.to_csv('../Dataset/yrbss_2011.csv') print('Done')
0.233269
0.926103
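The recoding in the notebook above is done column by column with separate `replace` calls. As an alternative sketch (not from the notebook), the same mapping can be expressed as one nested codebook dict and applied in a single `DataFrame.replace` call; the column names assume the renaming step shown above.
```
import pandas as pd

codebook = {
    'age':    {1: 12, 2: 13, 3: 14, 4: 15, 5: 16, 6: 17, 7: 18},
    'gender': {1: 'female', 2: 'male'},
    'grade':  {1: 9, 2: 10, 3: 11, 4: 12},
    'helmet': {1: 'not riding', 2: 'never', 3: 'rarely', 4: 'sometimes',
               5: 'most of the time', 6: 'always'},
    'lifting': {code: code - 1 for code in range(1, 9)},
}

def recode(df: pd.DataFrame) -> pd.DataFrame:
    """Replace the survey codes with their documented values, column by column."""
    return df.replace(codebook)  # only the listed columns/values are touched

# Illustrative usage, right after df.columns = [...] in the notebook: df = recode(df)
```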
``` import tensorflow as tf from tensorflow import keras import numpy as np keras.backend.clear_session() np.random.seed(42) tf.random.set_seed(42) (train_data, train_label), (test_data, test_label) = keras.datasets.fashion_mnist.load_data() train_data = train_data / 255 test_data = test_data / 255 l2_reg = keras.regularizers.l2(0.05) lower_layers = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.Dense(100, activation="relu"), ]) upper_layers = keras.models.Sequential([ keras.layers.Dense(10, activation="softmax"), ]) model = keras.models.Sequential([ lower_layers, upper_layers ]) def random_batch(X, y, batch_size=32): idx = np.random.randint(len(X), size=batch_size) return X[idx], y[idx] def print_status_bar(iteration, total, loss, metrics=None): metrics = " - ".join(["{}: {:.4f}".format(m.name, m.result())for m in [loss] + (metrics or [])]) end = "" if iteration < total else "\n" print("\r{}/{} - ".format(iteration, total) + metrics, end=end) lower_optimizer = keras.optimizers.SGD(lr=1e-4) upper_optimizer = keras.optimizers.Nadam(lr=1e-3) n_epochs = 5 batch_size = 32 n_steps = len(train_data) // batch_size loss_fn = keras.losses.sparse_categorical_crossentropy mean_loss = keras.metrics.Mean() metrics = [keras.metrics.SparseCategoricalAccuracy()] from tqdm.notebook import trange from collections import OrderedDict with trange(1, n_epochs + 1, desc="All epochs") as epochs: for epoch in epochs: with trange(1, n_steps + 1, desc="Epoch {}/{}".format(epoch, n_epochs)) as steps: for step in steps: X_batch, y_batch = random_batch(train_data, train_label) with tf.GradientTape(persistent=True) as tape: y_pred = model(X_batch) main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred)) loss = tf.add_n([main_loss] + model.losses) for layers, optimizer in ((lower_layers, lower_optimizer), (upper_layers, upper_optimizer)): gradients = tape.gradient(loss, layers.trainable_variables) optimizer.apply_gradients(zip(gradients, layers.trainable_variables)) del tape for variable in model.variables: if variable.constraint is not None: variable.assign(variable.constraint(variable)) status = OrderedDict() mean_loss(loss) status["loss"] = mean_loss.result().numpy() for metric in metrics: metric(y_batch, y_pred) status[metric.name] = metric.result().numpy() steps.set_postfix(status) y_pred = model(test_data) status["val_loss"] = np.mean(loss_fn(test_label, y_pred)) status["val_accuracy"] = np.mean(keras.metrics.sparse_categorical_accuracy( tf.constant(test_label, dtype=np.float32), y_pred)) steps.set_postfix(status) for metric in [mean_loss] + metrics: metric.reset_states() ```
github_jupyter
import tensorflow as tf from tensorflow import keras import numpy as np keras.backend.clear_session() np.random.seed(42) tf.random.set_seed(42) (train_data, train_label), (test_data, test_label) = keras.datasets.fashion_mnist.load_data() train_data = train_data / 255 test_data = test_data / 255 l2_reg = keras.regularizers.l2(0.05) lower_layers = keras.models.Sequential([ keras.layers.Flatten(input_shape=[28, 28]), keras.layers.Dense(100, activation="relu"), ]) upper_layers = keras.models.Sequential([ keras.layers.Dense(10, activation="softmax"), ]) model = keras.models.Sequential([ lower_layers, upper_layers ]) def random_batch(X, y, batch_size=32): idx = np.random.randint(len(X), size=batch_size) return X[idx], y[idx] def print_status_bar(iteration, total, loss, metrics=None): metrics = " - ".join(["{}: {:.4f}".format(m.name, m.result())for m in [loss] + (metrics or [])]) end = "" if iteration < total else "\n" print("\r{}/{} - ".format(iteration, total) + metrics, end=end) lower_optimizer = keras.optimizers.SGD(lr=1e-4) upper_optimizer = keras.optimizers.Nadam(lr=1e-3) n_epochs = 5 batch_size = 32 n_steps = len(train_data) // batch_size loss_fn = keras.losses.sparse_categorical_crossentropy mean_loss = keras.metrics.Mean() metrics = [keras.metrics.SparseCategoricalAccuracy()] from tqdm.notebook import trange from collections import OrderedDict with trange(1, n_epochs + 1, desc="All epochs") as epochs: for epoch in epochs: with trange(1, n_steps + 1, desc="Epoch {}/{}".format(epoch, n_epochs)) as steps: for step in steps: X_batch, y_batch = random_batch(train_data, train_label) with tf.GradientTape(persistent=True) as tape: y_pred = model(X_batch) main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred)) loss = tf.add_n([main_loss] + model.losses) for layers, optimizer in ((lower_layers, lower_optimizer), (upper_layers, upper_optimizer)): gradients = tape.gradient(loss, layers.trainable_variables) optimizer.apply_gradients(zip(gradients, layers.trainable_variables)) del tape for variable in model.variables: if variable.constraint is not None: variable.assign(variable.constraint(variable)) status = OrderedDict() mean_loss(loss) status["loss"] = mean_loss.result().numpy() for metric in metrics: metric(y_batch, y_pred) status[metric.name] = metric.result().numpy() steps.set_postfix(status) y_pred = model(test_data) status["val_loss"] = np.mean(loss_fn(test_label, y_pred)) status["val_accuracy"] = np.mean(keras.metrics.sparse_categorical_accuracy( tf.constant(test_label, dtype=np.float32), y_pred)) steps.set_postfix(status) for metric in [mean_loss] + metrics: metric.reset_states()
0.909406
0.567967
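The custom loop above applies two different optimizers to the lower and upper layers inside one persistent gradient tape. Purely as a sketch, that update can be factored into a reusable step function (the names mirror the notebook's; nothing new is computed); wrapping it in `@tf.function` would be a common next step to compile the inner loop.
```
import tensorflow as tf

def train_step(X_batch, y_batch, model, lower_layers, upper_layers,
               lower_optimizer, upper_optimizer, loss_fn):
    """One update: same loss, gradients applied by a different optimizer per block."""
    with tf.GradientTape(persistent=True) as tape:
        y_pred = model(X_batch)
        loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
    for layers, optimizer in ((lower_layers, lower_optimizer),
                              (upper_layers, upper_optimizer)):
        grads = tape.gradient(loss, layers.trainable_variables)
        optimizer.apply_gradients(zip(grads, layers.trainable_variables))
    del tape  # persistent tapes must be released explicitly
    return loss
```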
# STUMPY Basics ## Analyzing Motifs and Anomalies with STUMP This tutorial utilizes the main takeaways from the research papers: [Matrix Profile I](http://www.cs.ucr.edu/~eamonn/PID4481997_extend_Matrix%20Profile_I.pdf) & [Matrix Profile II](http://www.cs.ucr.edu/~eamonn/STOMP_GPU_final_submission_camera_ready.pdf). To explore the basic concepts, we'll use the workhorse `stump` function to find interesting motifs (patterns) or discords (anomalies/novelties) and demonstrate these concepts with two different time series datasets: 1. The Steamgen dataset 2. The NYC taxi passengers dataset `stump` is Numba JIT-compiled version of the popular STOMP algorithm that is described in detail in the original [Matrix Profile II](http://www.cs.ucr.edu/~eamonn/STOMP_GPU_final_submission_camera_ready.pdf) paper. `stump` is capable of parallel computation and it performs an ordered search for patterns and outliers within a specified time series and takes advantage of the locality of some calculations to minimize the runtime. ## Getting Started Let's import the packages that we'll need to load, analyze, and plot the data. ``` %matplotlib inline import pandas as pd import stumpy import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as dates from matplotlib.patches import Rectangle import datetime as dt plt.rcParams["figure.figsize"] = [20, 6] # width, height plt.rcParams['xtick.direction'] = 'out' ``` ## What is a Motif? Time series motifs are approximately repeated subsequences found within a longer time series. Being able to say that a subsequence is "approximately repeated" requires that you be able to compare subsequences to each other. In the case of STUMPY, all subsequences within a time series can be compared by computing the pairwise z-normalized Euclidean distances and then storing only the index to its nearest neighbor. This nearest neighbor distance vector is referred to as the `matrix profile` and the index to each nearest neighbor within the time series is referred to as the `matrix profile index`. Luckily, the `stump` function takes in any time series (with floating point values) and computes the matrix profile along with the matrix profile indices and, in turn, one can immediately find time series motifs. Let's look at an example: ## Loading the Steamgen Dataset This data was generated using fuzzy models applied to mimic a steam generator at the Abbott Power Plant in Champaign, IL. The data feature that we are interested in is the output steam flow telemetry that has units of kg/s and the data is "sampled" every three seconds with a total of 9,600 datapoints. ``` steam_df = pd.read_csv("https://zenodo.org/record/4273921/files/STUMPY_Basics_steamgen.csv?download=1") steam_df.head() ``` ## Visualizing the Steamgen Dataset ``` plt.suptitle('Steamgen Dataset', fontsize='30') plt.xlabel('Time', fontsize ='20') plt.ylabel('Steam Flow', fontsize='20') plt.plot(steam_df['steam flow'].values) plt.show() ``` Take a moment and carefully examine the plot above with your naked eye. If you were told that there was a pattern that was approximately repeated, can you spot it? Even for a computer, this can be very challenging. 
Here's what you should be looking for: ## Manually Finding a Motif ``` m = 640 fig, axs = plt.subplots(2) plt.suptitle('Steamgen Dataset', fontsize='30') axs[0].set_ylabel("Steam Flow", fontsize='20') axs[0].plot(steam_df['steam flow'], alpha=0.5, linewidth=1) axs[0].plot(steam_df['steam flow'].iloc[643:643+m]) axs[0].plot(steam_df['steam flow'].iloc[8724:8724+m]) rect = Rectangle((643, 0), m, 40, facecolor='lightgrey') axs[0].add_patch(rect) rect = Rectangle((8724, 0), m, 40, facecolor='lightgrey') axs[0].add_patch(rect) axs[1].set_xlabel("Time", fontsize='20') axs[1].set_ylabel("Steam Flow", fontsize='20') axs[1].plot(steam_df['steam flow'].values[643:643+m], color='C1') axs[1].plot(steam_df['steam flow'].values[8724:8724+m], color='C2') plt.show() ``` The motif (pattern) that we are looking for is highlighted above and yet it is still very hard to be certain that the orange and green subsequences are a match (upper panel), that is, until we zoom in on them and overlay the subsequences on top each other (lower panel). Now, we can clearly see that the motif is very similar! The fundamental value of computing the matrix profile is that it not only allows you to quickly find motifs but it also identifies the nearest neighbor for all subsequences within your time series. Note that we haven't actually done anything special here to locate the motif except that we grab the locations from the original paper and plotted them. Now, let's take our steamgen data and apply the `stump` function to it: ## Find a Motif Using STUMP ``` m = 640 mp = stumpy.stump(steam_df['steam flow'], m) ``` `stump` requires two parameters: 1. A time series 2. A window size, `m` In this case, based on some domain expertise, we've chosen `m = 640`, which is roughly equivalent to half-hour windows. And, again, the output of `stump` is an array that contains all of the matrix profile values (i.e., z-normalized Euclidean distance to your nearest neighbor) and matrix profile indices in the first and second columns, respectively (we'll ignore the third and fourth columns for now). Let's plot the matrix profile next to our raw data: ``` fig, axs = plt.subplots(2, sharex=True, gridspec_kw={'hspace': 0}) plt.suptitle('Motif (Pattern) Discovery', fontsize='30') axs[0].plot(steam_df['steam flow'].values) axs[0].set_ylabel('Steam Flow', fontsize='20') rect = Rectangle((643, 0), m, 40, facecolor='lightgrey') axs[0].add_patch(rect) rect = Rectangle((8724, 0), m, 40, facecolor='lightgrey') axs[0].add_patch(rect) axs[1].set_xlabel('Time', fontsize ='20') axs[1].set_ylabel('Matrix Profile', fontsize='20') axs[1].axvline(x=643, linestyle="dashed") axs[1].axvline(x=8724, linestyle="dashed") axs[1].plot(mp[:, 0]) plt.show() ``` What we learn is that the global minima (vertical dashed lines) from the matrix profile correspond to the locations of the two subsequences that make up the motif pair! And the exact z-normalized Euclidean distance between these two subsequences is: ``` mp[:, 0].min() ``` So, this distance isn't zero since we saw that the two subsequences aren't an identical match but, relative to the rest of the matrix profile (i.e., compared to either the mean or median matrix profile values), we can understand that this motif is a significantly good match. 
## Find Anomalies using STUMP Conversely, the maximum value in the matrix profile (computed from `stump` above) is: ``` mp[:, 0].max() ``` The matrix profile index also tells us which subsequence within the time series does not have nearest neighbor that resembles itself: ``` np.argwhere(mp[:, 0] == mp[:, 0].max()).flatten()[0] ``` The subsequence located at this global maximum is also referred to as a discord, novelty, or anomaly: ``` fig, axs = plt.subplots(2, sharex=True, gridspec_kw={'hspace': 0}) plt.suptitle('Discord (Anomaly/Novelty) Discovery', fontsize='30') axs[0].plot(steam_df['steam flow'].values) axs[0].set_ylabel('Steam Flow', fontsize='20') rect = Rectangle((3864, 0), m, 40, facecolor='lightgrey') axs[0].add_patch(rect) axs[1].set_xlabel('Time', fontsize ='20') axs[1].set_ylabel('Matrix Profile', fontsize='20') axs[1].axvline(x=3864, linestyle="dashed") axs[1].plot(mp[:, 0]) plt.show() ``` Now that you've mastered the STUMPY basics and understand how to discover motifs and anomalies from a time series, we'll leave it up to you to investigate other interesting local minima and local maxima in the steamgen dataset. To further develop/reinforce our growing intuition, let's move on and explore another dataset! ## Loading the NYC Taxi Passengers Dataset First, we'll download historical data that represents the half-hourly average of the number of NYC taxi passengers over 75 days in the Fall of 2014. We extract that data and insert it into a pandas dataframe, making sure the timestamps are stored as *datetime* objects and the values are of type *float64*. Note that we'll do a little more data cleaning than above just so you can see an example where the timestamp is included. But be aware that `stump` does not actually use or need the timestamp column at all when computing the matrix profile. ``` taxi_df = pd.read_csv("https://zenodo.org/record/4276428/files/STUMPY_Basics_Taxi.csv?download=1") taxi_df['value'] = taxi_df['value'].astype(np.float64) taxi_df['timestamp'] = pd.to_datetime(taxi_df['timestamp']) taxi_df.head() ``` ## Visualizing the Taxi Dataset ``` # This code is going to be utilized to control the axis labeling of the plots DAY_MULTIPLIER = 7 # Specify for the amount of days you want between each labeled x-axis tick x_axis_labels = taxi_df[(taxi_df.timestamp.dt.hour==0)]['timestamp'].dt.strftime('%b %d').values[::DAY_MULTIPLIER] x_axis_labels[1::2] = " " x_axis_labels, DAY_MULTIPLIER plt.suptitle('Taxi Passenger Raw Data', fontsize='30') plt.xlabel('Window Start Date', fontsize ='20') plt.ylabel('Half-Hourly Average\nNumber of Taxi Passengers', fontsize='20') plt.plot(taxi_df['value']) plt.xticks(np.arange(0, taxi_df['value'].shape[0], (48*DAY_MULTIPLIER)/2), x_axis_labels) plt.xticks(rotation=75) plt.minorticks_on() plt.margins(x=0) plt.show() ``` It seems as if there is a general periodicity between spans of 1-day and 7-days, which can likely be explained by the fact that more people use taxis throughout the day than through the night and that it is reasonable to say most weeks have similar taxi-rider patterns. Also, maybe there is an outlier just to the right of the window starting near the end of October but, other than that, there isn't anything you can conclude from just looking at the raw data. ## Generating the Matrix Profile Again, defining the window size, `m`, usually requires some level of domain knowledge but we'll demonstrate later on that `stump` is robust to changes in this parameter. 
Since this data was taken half-hourly, we chose a value `m = 48` to represent the span of exactly one day: ``` m = 48 mp = stumpy.stump(taxi_df['value'], m=m) ``` ## Visualizing the Matrix Profile ``` plt.suptitle('1-Day STUMP', fontsize='30') plt.xlabel('Window Start', fontsize ='20') plt.ylabel('Matrix Profile', fontsize='20') plt.plot(mp[:, 0]) plt.plot(575, 1.7, marker="v", markersize=15, color='b') plt.text(620, 1.6, 'Columbus Day', color="black", fontsize=20) plt.plot(1535, 3.7, marker="v", markersize=15, color='b') plt.text(1580, 3.6, 'Daylight Savings', color="black", fontsize=20) plt.plot(2700, 3.1, marker="v", markersize=15, color='b') plt.text(2745, 3.0, 'Thanksgiving', color="black", fontsize=20) plt.plot(30, .2, marker="^", markersize=15, color='b', fillstyle='none') plt.plot(363, .2, marker="^", markersize=15, color='b', fillstyle='none') plt.xticks(np.arange(0, 3553, (m*DAY_MULTIPLIER)/2), x_axis_labels) plt.xticks(rotation=75) plt.minorticks_on() plt.show() ``` ## Understanding the Matrix Profile Let's understand what we're looking at. ### Lowest Values The lowest values (open triangles) are considered a motif since they represent the pair of nearest neighbor subsequences with the smallest z-normalized Euclidean distance. Interestingly, the two lowest data points are *exactly* 7 days apart, which suggests that, in this dataset, there may be a periodicity of seven days in addition to the more obvious periodicity of one day. ### Highest Values So what about the highest matrix profile values (filled triangles)? The subsequences that have the highest (local) values really emphasizes their uniqueness. We found that the top three peaks happened to correspond exactly with the timing of Columbus Day, Daylight Saving Time, and Thanksgiving, respectively. ## Different Window Sizes As we had mentioned above, `stump` should be robust to the choice of the window size parameter, `m`. Below, we demonstrate how manipulating the window size can have little impact on your resulting matrix profile by running `stump` with varying windows sizes. ``` days_dict ={ "Half-Day": 24, "1-Day": 48, "2-Days": 96, "5-Days": 240, "7-Days": 336, } days_df = pd.DataFrame.from_dict(days_dict, orient='index', columns=['m']) days_df.head() ``` We purposely chose spans of time that correspond to reasonably intuitive day-lengths that could be chosen by a human. ``` fig, axs = plt.subplots(5, sharex=True, gridspec_kw={'hspace': 0}) fig.text(0.5, -0.1, 'Subsequence Start Date', ha='center', fontsize='20') fig.text(0.08, 0.5, 'Matrix Profile', va='center', rotation='vertical', fontsize='20') for i, varying_m in enumerate(days_df['m'].values): mp = stumpy.stump(taxi_df['value'], varying_m) axs[i].plot(mp[:, 0]) axs[i].set_ylim(0,9.5) axs[i].set_xlim(0,3600) title = f"m = {varying_m}" axs[i].set_title(title, fontsize=20, y=.5) plt.xticks(np.arange(0, taxi_df.shape[0], (48*DAY_MULTIPLIER)/2), x_axis_labels) plt.xticks(rotation=75) plt.suptitle('STUMP with Varying Window Sizes', fontsize='30') plt.show() ``` We can see that even with varying window sizes, our peaks stay prominent. But it looks as if all the non-peak values are converging towards each other. This is why having a knowledge of the data-context is important prior to running `stump`, as it is helpful to have a window size that may capture a repeating pattern or anomaly within the dataset. 
## GPU-STUMP - Faster STUMP Using GPUs When you have significantly more than a few thousand data points in your time series, you may need a speed boost to help analyze your data. Luckily, you can try `gpu_stump`, a super fast GPU-powered alternative to `stump` that gives speed of a few hundred CPUs and provides the same output as `stump`: ``` import stumpy mp = stumpy.gpu_stump(df['value'], m=m) # Note that you'll need a properly configured NVIDIA GPU for this ``` In fact, if you aren't dealing with PII/SII data, then you can try out `gpu_stump` using the [this notebook on Google Colab](https://colab.research.google.com/drive/1FIbHQoD6mJInkhinoMehBDj2E1i7i2j7). ## STUMPED - Distributed STUMP Alternatively, if you only have access to a cluster of CPUs and your data needs to stay behind your firewall, then `stump` and `gpu_stump` may not be sufficient for your needs. Instead, you can try `stumped`, a distributed and parallel implementation of `stump` that depends on [Dask distributed](https://distributed.dask.org/en/latest/): ``` import stumpy from dask.distributed import Client dask_client = Client() mp = stumpy.stumped(dask_client, df['value'], m=m) # Note that a dask client is needed ``` ## Summary And that's it! You have now loaded in a dataset, ran it through `stump` using our package, and were able to extract multiple conclusions of existing patterns and anomalies within the two different time series. You can now import this package and use it in your own projects. Happy coding! ## Resources [Matrix Profile I](http://www.cs.ucr.edu/~eamonn/PID4481997_extend_Matrix%20Profile_I.pdf) [Matrix Profile II](http://www.cs.ucr.edu/~eamonn/STOMP_GPU_final_submission_camera_ready.pdf) [STUMPY Documentation](https://stumpy.readthedocs.io/en/latest/) [STUMPY Matrix Profile Github Code Repository](https://github.com/TDAmeritrade/stumpy)
github_jupyter
%matplotlib inline import pandas as pd import stumpy import numpy as np import matplotlib.pyplot as plt import matplotlib.dates as dates from matplotlib.patches import Rectangle import datetime as dt plt.rcParams["figure.figsize"] = [20, 6] # width, height plt.rcParams['xtick.direction'] = 'out' steam_df = pd.read_csv("https://zenodo.org/record/4273921/files/STUMPY_Basics_steamgen.csv?download=1") steam_df.head() plt.suptitle('Steamgen Dataset', fontsize='30') plt.xlabel('Time', fontsize ='20') plt.ylabel('Steam Flow', fontsize='20') plt.plot(steam_df['steam flow'].values) plt.show() m = 640 fig, axs = plt.subplots(2) plt.suptitle('Steamgen Dataset', fontsize='30') axs[0].set_ylabel("Steam Flow", fontsize='20') axs[0].plot(steam_df['steam flow'], alpha=0.5, linewidth=1) axs[0].plot(steam_df['steam flow'].iloc[643:643+m]) axs[0].plot(steam_df['steam flow'].iloc[8724:8724+m]) rect = Rectangle((643, 0), m, 40, facecolor='lightgrey') axs[0].add_patch(rect) rect = Rectangle((8724, 0), m, 40, facecolor='lightgrey') axs[0].add_patch(rect) axs[1].set_xlabel("Time", fontsize='20') axs[1].set_ylabel("Steam Flow", fontsize='20') axs[1].plot(steam_df['steam flow'].values[643:643+m], color='C1') axs[1].plot(steam_df['steam flow'].values[8724:8724+m], color='C2') plt.show() m = 640 mp = stumpy.stump(steam_df['steam flow'], m) fig, axs = plt.subplots(2, sharex=True, gridspec_kw={'hspace': 0}) plt.suptitle('Motif (Pattern) Discovery', fontsize='30') axs[0].plot(steam_df['steam flow'].values) axs[0].set_ylabel('Steam Flow', fontsize='20') rect = Rectangle((643, 0), m, 40, facecolor='lightgrey') axs[0].add_patch(rect) rect = Rectangle((8724, 0), m, 40, facecolor='lightgrey') axs[0].add_patch(rect) axs[1].set_xlabel('Time', fontsize ='20') axs[1].set_ylabel('Matrix Profile', fontsize='20') axs[1].axvline(x=643, linestyle="dashed") axs[1].axvline(x=8724, linestyle="dashed") axs[1].plot(mp[:, 0]) plt.show() mp[:, 0].min() mp[:, 0].max() np.argwhere(mp[:, 0] == mp[:, 0].max()).flatten()[0] fig, axs = plt.subplots(2, sharex=True, gridspec_kw={'hspace': 0}) plt.suptitle('Discord (Anomaly/Novelty) Discovery', fontsize='30') axs[0].plot(steam_df['steam flow'].values) axs[0].set_ylabel('Steam Flow', fontsize='20') rect = Rectangle((3864, 0), m, 40, facecolor='lightgrey') axs[0].add_patch(rect) axs[1].set_xlabel('Time', fontsize ='20') axs[1].set_ylabel('Matrix Profile', fontsize='20') axs[1].axvline(x=3864, linestyle="dashed") axs[1].plot(mp[:, 0]) plt.show() taxi_df = pd.read_csv("https://zenodo.org/record/4276428/files/STUMPY_Basics_Taxi.csv?download=1") taxi_df['value'] = taxi_df['value'].astype(np.float64) taxi_df['timestamp'] = pd.to_datetime(taxi_df['timestamp']) taxi_df.head() # This code is going to be utilized to control the axis labeling of the plots DAY_MULTIPLIER = 7 # Specify for the amount of days you want between each labeled x-axis tick x_axis_labels = taxi_df[(taxi_df.timestamp.dt.hour==0)]['timestamp'].dt.strftime('%b %d').values[::DAY_MULTIPLIER] x_axis_labels[1::2] = " " x_axis_labels, DAY_MULTIPLIER plt.suptitle('Taxi Passenger Raw Data', fontsize='30') plt.xlabel('Window Start Date', fontsize ='20') plt.ylabel('Half-Hourly Average\nNumber of Taxi Passengers', fontsize='20') plt.plot(taxi_df['value']) plt.xticks(np.arange(0, taxi_df['value'].shape[0], (48*DAY_MULTIPLIER)/2), x_axis_labels) plt.xticks(rotation=75) plt.minorticks_on() plt.margins(x=0) plt.show() m = 48 mp = stumpy.stump(taxi_df['value'], m=m) plt.suptitle('1-Day STUMP', fontsize='30') plt.xlabel('Window Start', fontsize ='20') 
plt.ylabel('Matrix Profile', fontsize='20') plt.plot(mp[:, 0]) plt.plot(575, 1.7, marker="v", markersize=15, color='b') plt.text(620, 1.6, 'Columbus Day', color="black", fontsize=20) plt.plot(1535, 3.7, marker="v", markersize=15, color='b') plt.text(1580, 3.6, 'Daylight Savings', color="black", fontsize=20) plt.plot(2700, 3.1, marker="v", markersize=15, color='b') plt.text(2745, 3.0, 'Thanksgiving', color="black", fontsize=20) plt.plot(30, .2, marker="^", markersize=15, color='b', fillstyle='none') plt.plot(363, .2, marker="^", markersize=15, color='b', fillstyle='none') plt.xticks(np.arange(0, 3553, (m*DAY_MULTIPLIER)/2), x_axis_labels) plt.xticks(rotation=75) plt.minorticks_on() plt.show() days_dict ={ "Half-Day": 24, "1-Day": 48, "2-Days": 96, "5-Days": 240, "7-Days": 336, } days_df = pd.DataFrame.from_dict(days_dict, orient='index', columns=['m']) days_df.head() fig, axs = plt.subplots(5, sharex=True, gridspec_kw={'hspace': 0}) fig.text(0.5, -0.1, 'Subsequence Start Date', ha='center', fontsize='20') fig.text(0.08, 0.5, 'Matrix Profile', va='center', rotation='vertical', fontsize='20') for i, varying_m in enumerate(days_df['m'].values): mp = stumpy.stump(taxi_df['value'], varying_m) axs[i].plot(mp[:, 0]) axs[i].set_ylim(0,9.5) axs[i].set_xlim(0,3600) title = f"m = {varying_m}" axs[i].set_title(title, fontsize=20, y=.5) plt.xticks(np.arange(0, taxi_df.shape[0], (48*DAY_MULTIPLIER)/2), x_axis_labels) plt.xticks(rotation=75) plt.suptitle('STUMP with Varying Window Sizes', fontsize='30') plt.show() import stumpy mp = stumpy.gpu_stump(df['value'], m=m) # Note that you'll need a properly configured NVIDIA GPU for this import stumpy from dask.distributed import Client dask_client = Client() mp = stumpy.stumped(dask_client, df['value'], m=m) # Note that a dask client is needed
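The motif pair (around indices 643 and 8724) and the discord (around index 3864) highlighted above are hard-coded in the plotting calls, but they can also be recovered programmatically from the matrix profile. A minimal sketch, assuming `mp` holds the result of `stumpy.stump(steam_df['steam flow'], m)` computed earlier (column 0 is the profile distance, column 1 the index of each subsequence's nearest neighbour):

```
import numpy as np

# Column 0: matrix profile distances, column 1: nearest-neighbour indices
profile = mp[:, 0].astype(np.float64)
neighbours = mp[:, 1].astype(np.int64)

# Motif: the subsequence closest to its nearest neighbour
motif_idx = int(np.argmin(profile))
print('Motif pair starts at', motif_idx, 'and', int(neighbours[motif_idx]))

# Discord: the subsequence farthest from its nearest neighbour
discord_idx = int(np.argmax(profile))
print('Discord starts at', discord_idx)
```

This reproduces the indices used for the shaded rectangles above without having to read them off the plots.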
``` !pip install spacy-syllables !python -m spacy download en_core_web_sm !pip3 install wordfreq import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import numpy as np import pandas as pd from wordfreq import word_frequency from scipy import stats import csv import spacy from spacy_syllables import SpacySyllables import os import random !wget http://nlp.stanford.edu/data/glove.6B.zip !unzip glove*.zip # https://www.kaggle.com/bminixhofer/deterministic-neural-networks-using-pytorch # Seed all rngs for deterministic results def seed_all(seed = 0): random.seed(0) os.environ['PYTHONHASHSEED'] = str(seed) torch.manual_seed(seed) np.random.seed(seed) torch.cuda.manual_seed(seed) torch.backends.cudnn.deterministic = True seed_all(0) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") nlp = spacy.load("en_core_web_sm") nlp.add_pipe("syllables", after='tagger') # Add the syllable tagger pipe SINGLE_TRAIN_DATAPATH = "https://raw.githubusercontent.com/MMU-TDMLab/CompLex/master/train/lcp_single_train.tsv" SINGLE_TEST_DATAPATH = "https://raw.githubusercontent.com/MMU-TDMLab/CompLex/master/test-labels/lcp_single_test.tsv" MULTI_TRAIN_DATAPATH = "https://raw.githubusercontent.com/MMU-TDMLab/CompLex/master/train/lcp_multi_train.tsv" MULTI_TEST_DATAPATH = "https://raw.githubusercontent.com/MMU-TDMLab/CompLex/master/test-labels/lcp_multi_test.tsv" def get_data_frames(): df_train_single = pd.read_csv(SINGLE_TRAIN_DATAPATH, sep='\t', quotechar="'", quoting=csv.QUOTE_NONE) df_test_single = pd.read_csv(SINGLE_TEST_DATAPATH, sep='\t', quotechar="'", quoting=csv.QUOTE_NONE) df_train_multi = pd.read_csv(MULTI_TRAIN_DATAPATH, sep='\t', quotechar="'", quoting=csv.QUOTE_NONE) df_test_multi = pd.read_csv(MULTI_TEST_DATAPATH, sep='\t', quotechar="'", quoting=csv.QUOTE_NONE) return df_train_single, df_test_single, df_train_multi, df_test_multi df_train_single, df_test_single, df_train_multi, df_test_multi = get_data_frames() ``` Features used * Word Embedding [GloVe 50 dimensional embeddings](http://nlp.stanford.edu/data/glove.6B.zip) * Length of word * Syllable count [PyPy](https://pypi.org/project/syllables/) * Word Frequency [PyPy](https://pypi.org/project/wordfreq/) * POS tag [Spacy](https://spacy.io/usage/linguistic-features#pos-tagging) [Reference](https://www.aclweb.org/anthology/W18-0508.pdf) ``` single_tokens_train_raw = df_train_single["token"].astype(str).to_list() single_tokens_test_raw = df_test_single["token"].astype(str).to_list() y_single_train = df_train_single["complexity"].astype(np.float32).to_numpy() y_single_test = df_test_single["complexity"].astype(np.float32).to_numpy() multi_tokens_train_raw = df_train_multi["token"].astype(str).to_list() multi_tokens_test_raw = df_test_multi["token"].astype(str).to_list() y_multi_train = df_train_multi["complexity"].astype(np.float32).to_numpy() y_multi_test = df_test_multi["complexity"].astype(np.float32).to_numpy() sent_train_single_raw = df_train_single["sentence"].to_list() sent_test_single_raw = df_test_single["sentence"].to_list() sent_train_multi_raw = df_train_multi["sentence"].to_list() sent_test_multi_raw = df_test_multi["sentence"].to_list() EMBEDDING_DIM = 50 def get_embeddings(): embedding_index = {} with open('glove.6B.{}d.txt'.format(EMBEDDING_DIM), 'r', encoding='utf-8') as f: for line in f: values = line.split() token = values[0] embedding_index[token] = np.asarray(values[1:], dtype='float32') return embedding_index embedding_index = get_embeddings() print('Token count in embeddings: 
{}'.format(len(embedding_index))) ``` biLSTM to predict target probability ``` HIDDEN_DIM = 10 def prepare_sequence(seq, to_ix): seq = seq.split() idxs = [to_ix[w.lower()] if w.lower() in to_ix else len(to_ix) for w in seq] idxs = torch.tensor(idxs) idxs = nn.functional.one_hot(idxs, num_classes=len(to_ix)) idxs = torch.tensor(idxs, dtype=torch.float32) return idxs def map_token_to_idx(): word_to_ix = {} word_to_ix_multi = {} for sent in sent_train_single_raw: sent = sent.split() for word in sent: word = word.lower() if word not in word_to_ix: word_to_ix[word] = len(word_to_ix) for sent in sent_train_multi_raw: sent = sent.split() for word in sent: word = word.lower() if word not in word_to_ix_multi: word_to_ix_multi[word] = len(word_to_ix_multi) return word_to_ix, word_to_ix_multi word_to_ix, word_to_ix_multi = map_token_to_idx() print('SWE vocab size: {}\nMWE vocab size: {}'.format(len(word_to_ix), len(word_to_ix_multi))) ``` biLSTM to calculate token probability given context ``` class biLSTM(nn.Module): def __init__(self, embedding_dim, hidden_dim, vocab_size, output_size): super(biLSTM, self).__init__() self.hidden_dim = hidden_dim self.lstm = nn.LSTM(embedding_dim, hidden_dim, bidirectional=True) self.hidden2tag = nn.Linear(2 * hidden_dim, output_size) def prepare_embedding(self, sentence): embeddings = [] for word in sentence: word = word.lower() if word in embedding_index: embeddings.extend(embedding_index[word]) else: embeddings.extend(np.random.random(EMBEDDING_DIM).tolist()) embeddings = torch.tensor(embeddings, dtype=torch.float32, device=device) return embeddings def forward(self, sentence): sentence = sentence.split() embeds = self.prepare_embedding(sentence) lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1)) tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1)) tag_scores = F.softmax(tag_space, dim=1) return tag_scores ``` biLSTM model for single word targets ``` model = biLSTM(EMBEDDING_DIM, HIDDEN_DIM, len(word_to_ix), len(word_to_ix)) print('Training biLSTM on single target expressions') # Train the model for 10 epochs model = biLSTM(EMBEDDING_DIM, HIDDEN_DIM, len(word_to_ix), len(word_to_ix)) loss_function = nn.MSELoss() optimizer = optim.Adam(model.parameters(), lr=0.01) for epoch in range(10): loss_sum = 0 for sentence in sent_train_single_raw: model.zero_grad() targets = prepare_sequence(sentence, word_to_ix) tag_scores = model(sentence) loss = loss_function(tag_scores, targets) loss_sum += loss loss.backward() optimizer.step() print('Epoch: {} Loss: {}'.format(epoch, loss_sum.item())) ``` biLSTM model for multi word targets ``` model_multi = biLSTM(EMBEDDING_DIM, HIDDEN_DIM, len(word_to_ix_multi), len(word_to_ix_multi)) print('Training biLSTM on multi target expressions') model_multi = biLSTM(EMBEDDING_DIM, HIDDEN_DIM, len(word_to_ix_multi), len(word_to_ix_multi)) loss_function = nn.MSELoss() optimizer = optim.Adam(model_multi.parameters(), lr=0.01) for epoch in range(10): loss_sum = 0 for sentence in sent_train_multi_raw: model_multi.zero_grad() targets = prepare_sequence(sentence, word_to_ix_multi) tag_scores = model_multi(sentence) loss = loss_function(tag_scores, targets) loss_sum += loss loss.backward() optimizer.step() print('Epoch: {} Loss: {}'.format(epoch, loss_sum.item())) def prepare_features_single_word(tokens, sentences): features = [] for idx, word in enumerate(tokens): word = word.lower() feature = [] # Word length feature.append(len(word)) doc = nlp(word) # Syllable count and word frequency in the corpus # Spacy tokenizes the input 
sentence # In this case we would have only one token, the target word for token in doc: feature.append(token._.syllables_count) feature.append(word_frequency(word, 'en')) # Probability of target word `word` in the sentence estimated from by `model` if word in word_to_ix: # Output scores for each of the word in the sentence out = model(sentences[idx]) pos = -1 for itr, token in enumerate(sentences[idx].split()): if token.lower() == word: pos = itr break id_pos = word_to_ix[word] # word to id mapping feature.append(float(out[pos][id_pos])) else: # `word` not in vocabulary, so cannot predict probability in context feature.append(0.0) features.append(feature) if (idx + 1) % 500 == 0: print('Prepared features for {} single target word sentences'.format(idx + 1)) return features def prepare_features_multi_word(tokens, sentences): features = [] for idx, word in enumerate(tokens): word = word.lower() feature = [] doc = nlp(word) word = word.split(' ') assert(len(word) == 2) # MWE length = sum(length of individual words) feature.append(len(word[0]) + len(word[1])) syllables = 0 probability = 1 embedding = np.zeros(EMBEDDING_DIM) # Syllable count and word frequency in the corpus # Spacy tokenizes the input sentence # In this case we would have two tokens for token in doc: word_ = token.text syllables += token._.syllables_count probability *= word_frequency(word_, 'en') # GloVE embedding current `word_` of the MWE if word_ in embedding_index: embedding = embedding + embedding_index[word_] else: # `word_` not in the GloVE corpus, take a random embedding embedding = embedding + np.random.random(EMBEDDING_DIM) # Average embedding of the two tokens in the MWE embedding = embedding / 2 feature.append(syllables) feature.append(probability) # Product of probabilities of constituent words in the MWE if word[0] in word_to_ix_multi and word[1] in word_to_ix_multi: # Output scores for each of the word in the sentence out = model_multi(sentences[idx]) pos0, pos1 = -1, -1 for itr, token in enumerate(sentences[idx].split()): if token.lower() == word[0]: pos0 = itr pos1 = itr + 1 break id_pos0 = word_to_ix_multi[word[0]] id_pos1 = word_to_ix_multi[word[1]] feature.append(float(out[pos0][id_pos0] * out[pos1][id_pos1])) else: # Either of the constituent words of the MWE not in vocabulary \ # So cannot predict probability in context feature.append(0.0) features.append(feature) if (idx + 1) % 500 == 0: print('Prepared features for {} multi target word sentences'.format(idx + 1)) return features print('+++ Generating Train features for Single word expressions +++') features_train_single = prepare_features_single_word(single_tokens_train_raw, sent_train_single_raw) print('+++ Generating Test features for Single word expressions +++') features_test_single = prepare_features_single_word(single_tokens_test_raw, sent_test_single_raw) print('+++ Generating Train features for Multi word expressions +++') features_train_multi = prepare_features_multi_word(multi_tokens_train_raw, sent_train_multi_raw) print('+++ Generating Test features for Multi word expressions +++') features_test_multi = prepare_features_multi_word(multi_tokens_test_raw, sent_test_multi_raw) # Convert all features to torch.tensor to enable use in PyTorch models X_train_single_tensor = torch.tensor(features_train_single, dtype=torch.float32, device=device) X_test_single_tensor = torch.tensor(features_test_single, dtype=torch.float32, device=device) X_train_multi_tensor = torch.tensor(features_train_multi, dtype=torch.float32, device=device) X_test_multi_tensor 
= torch.tensor(features_test_multi, dtype=torch.float32, device=device) # Reshape all output complexity scores to single dimension vectors y_single_train = y_single_train.reshape(y_single_train.shape[0], -1) y_single_test = y_single_test.reshape(y_single_test.shape[0], -1) y_multi_train = y_multi_train.reshape(y_multi_train.shape[0], -1) y_multi_test = y_multi_test.reshape(y_multi_test.shape[0], -1) # Convert all target outputs to torch.tensor to enable use in PyTorch models Y_train_single_tensor = torch.tensor(y_single_train, dtype=torch.float32, device=device) Y_test_single_tensor = torch.tensor(y_single_test, dtype=torch.float32, device=device) Y_train_multi_tensor = torch.tensor(y_multi_train, dtype=torch.float32, device=device) Y_test_multi_tensor = torch.tensor(y_multi_test, dtype=torch.float32, device=device) # Ensure each sample from test and train for single word expression is taken print(X_train_single_tensor.shape) print(X_test_single_tensor.shape) print(Y_train_single_tensor.shape) print(Y_test_single_tensor.shape) # Ensure each sample from test and train for multi word expression is taken print(X_train_multi_tensor.shape) print(X_test_multi_tensor.shape) print(Y_train_multi_tensor.shape) print(Y_test_multi_tensor.shape) def convert_tensor_to_np(y): if device == torch.device("cuda"): y = y.cpu() y = y.detach().numpy() return y from copy import deepcopy # Evaluate the metrics upon which the model would be evaluated def evaluate_metrics(labels, predicted): vx, vy = [], [] if torch.is_tensor(labels): vx = labels.clone() vx = convert_tensor_to_np(vx) else: vx = deepcopy(labels) if torch.is_tensor(predicted): vy = predicted.clone() vy = convert_tensor_to_np(vy) else: vy = deepcopy(predicted) pearsonR = np.corrcoef(vx.T, vy.T)[0, 1] spearmanRho = stats.spearmanr(vx, vy) MSE = np.mean((vx - vy) ** 2) MAE = np.mean(np.absolute(vx - vy)) RSquared = pearsonR ** 2 print("Pearson's R: {}".format(pearsonR)) print("Spearman's rho: {}".format(spearmanRho)) print("R Squared: {}".format(RSquared)) print("MSE: {}".format(MSE)) print("MAE: {}".format(MAE)) ``` ## Neural Network * $N$ input instances * $d$ = feature dimension per instance (here `X_train_single_tensor.shape[1]`, passed in as `embedding_dim`) * $I$ = input feature matrix ($N \times d$) * $W_1, W_2, W_3, W_4 := (d \times 128), (128 \times 256), (256 \times 64), (64 \times 1)$, matching `linear1` through `linear4` below * Equations * $o_1 = \tanh(I \times W_1 + b_1)$ * $o_2 = \tanh(o_1 \times W_2 + b_2)$ * $o_3 = \tanh(o_2 \times W_3 + b_3)$ * $o_4 = \sigma(o_3 \times W_4 + b_4)$ ``` class NN(nn.Module): def __init__(self, embedding_dim): super(NN, self).__init__() self.linear1 = nn.Linear(embedding_dim, 128, bias=True) self.linear2 = nn.Linear(128, 256, bias=True) self.linear3 = nn.Linear(256, 64, bias=True) self.linear4 = nn.Linear(64, 1) def forward(self, input): out = torch.tanh(self.linear1(input)) out = torch.tanh(self.linear2(out)) out = torch.tanh(self.linear3(out)) out = torch.sigmoid(self.linear4(out)) return out loss_function = nn.MSELoss() embedding_dim = X_train_single_tensor.shape[1] model_NN = NN(embedding_dim) model_NN.to(device) print('Training NN on single target expressions...') model_NN = NN(embedding_dim) model_NN.to(device) loss_function = nn.MSELoss() optimizer = optim.Adam(model_NN.parameters(), lr=0.002) for epoch in range(30): optimizer.zero_grad() out = model_NN(X_train_single_tensor) loss = loss_function(out, Y_train_single_tensor) loss.backward() optimizer.step() print("Epoch {} : {}".format(epoch + 1, loss.item())) out_NN = model_NN(X_test_single_tensor) evaluate_metrics(out_NN, Y_test_single_tensor) embedding_dim = 
X_train_multi_tensor.shape[1] model_NN_multi = NN(embedding_dim) model_NN_multi.to(device) print('Training NN on multi target expressions...') model_NN_multi = NN(embedding_dim) model_NN_multi.to(device) loss_function = nn.MSELoss() optimizer = optim.Adam(model_NN_multi.parameters(), lr=0.002) for epoch in range(30): optimizer.zero_grad() out = model_NN_multi(X_train_multi_tensor) loss = loss_function(out, Y_train_multi_tensor) loss.backward() optimizer.step() print("Epoch {} : {}".format(epoch + 1, loss.item())) out_NN_multi = model_NN_multi(X_test_multi_tensor) evaluate_metrics(out_NN_multi, Y_test_multi_tensor) ``` ## Machine Learning Methods * Linear Regression * Support Vector Regressor ``` X_train_single_np = np.array(features_train_single) X_test_single_np = np.array(features_test_single) Y_train_single_np = np.array(y_single_train.reshape(y_single_train.shape[0], -1)) Y_test_single_np = np.array(y_single_test.reshape(y_single_test.shape[0], -1)) print(X_train_single_np.shape) print(X_test_single_np.shape) print(Y_train_single_np.shape) print(Y_test_single_np.shape) X_train_multi_np = np.array(features_train_multi) X_test_multi_np = np.array(features_test_multi) Y_train_multi_np = np.array(y_multi_train.reshape(y_multi_train.shape[0], -1)) Y_test_multi_np = np.array(y_multi_test.reshape(y_multi_test.shape[0], -1)) print(X_train_multi_np.shape) print(X_test_multi_np.shape) print(Y_train_multi_np.shape) print(Y_test_multi_np.shape) ``` ### Linear Regression ``` from sklearn.pipeline import make_pipeline from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LinearRegression def evaluateLinearRegression(X_train, Y_train, X_test, Y_test): reg = make_pipeline(StandardScaler(), LinearRegression()) reg.fit(X_train, Y_train) out = reg.predict(X_test) out = out.reshape((out.shape[0], 1)) evaluate_metrics(out, Y_test) return out print('Linear Regression for Single word expressions') out_LR = evaluateLinearRegression(X_train_single_np, Y_train_single_np, X_test_single_np, Y_test_single_np) print('Linear Regression for Multi word expressions') out_LR_multi = evaluateLinearRegression(X_train_multi_np, Y_train_multi_np, X_test_multi_np, Y_test_multi_np) ``` ### Support Vector Regressor * Radial basis function * C = 0.05 * epsilon = 0.01 ``` from sklearn.svm import SVR def evaluateSVR(X_train, Y_train, X_test, Y_test): svr = make_pipeline(StandardScaler(), SVR(C=0.05, epsilon=0.01)) svr.fit(X_train, Y_train.reshape(-1)) out = svr.predict(X_test) out = out.reshape((out.shape[0], 1)) evaluate_metrics(out, Y_test) return out print('SVR for Single word expressions') out_svr = evaluateSVR(X_train_single_np, Y_train_single_np, X_test_single_np, Y_test_single_np) print('SVR for Multi word expressions') out_svr_multi = evaluateSVR(X_train_multi_np, Y_train_multi_np, X_test_multi_np, Y_test_multi_np) single_ids = df_test_single["id"].astype(str).to_list() multi_ids = df_test_multi["id"].astype(str).to_list() out_ensemble = [] for idx in range(len(out_NN)): score = 0 score += float(out_NN[idx]) score += float(out_LR[idx]) score += float(out_svr[idx]) score /= 3 out_ensemble.append(score) out_ensemble = np.array(out_ensemble) out_ensemble = out_ensemble.reshape((out_ensemble.shape[0], 1)) evaluate_metrics(out_ensemble, Y_test_single_np) out_ensemble_multi = [] for idx in range(len(out_NN_multi)): score = 0 score += float(out_NN_multi[idx]) score += float(out_LR_multi[idx]) score += float(out_svr_multi[idx]) score /= 3 out_ensemble_multi.append(score) out_ensemble_multi = 
np.array(out_ensemble_multi) out_ensemble_multi = out_ensemble_multi.reshape((out_ensemble_multi.shape[0], 1)) evaluate_metrics(out_ensemble_multi, Y_test_multi_np) ```
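Note that `single_ids` and `multi_ids` are extracted above but not used further. One natural follow-up is to pair them with the ensemble predictions and write them out for inspection or submission; the sketch below does this, with the understanding that the file names are placeholders and that `out_ensemble` / `out_ensemble_multi` are the `(N, 1)` arrays computed above.

```
import csv

def write_predictions(path, ids, predictions):
    # Write one (id, predicted complexity) row per test instance
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f, delimiter='\t')
        writer.writerow(['id', 'complexity'])
        for id_, pred in zip(ids, predictions.reshape(-1)):
            writer.writerow([id_, float(pred)])

# Output file names are illustrative, not part of the original pipeline
write_predictions('lcp_single_predictions.tsv', single_ids, out_ensemble)
write_predictions('lcp_multi_predictions.tsv', multi_ids, out_ensemble_multi)
```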
# 6 - Transformers for Sentiment Analysis In this notebook we will be using the transformer model, first introduced in [this](https://arxiv.org/abs/1706.03762) paper. Specifically, we will be using the BERT (Bidirectional Encoder Representations from Transformers) model from [this](https://arxiv.org/abs/1810.04805) paper. Transformer models are considerably larger than anything else covered in these tutorials. As such we are going to use the [transformers library](https://github.com/huggingface/transformers) to get pre-trained transformers and use them as our embedding layers. We will freeze (not train) the transformer and only train the remainder of the model which learns from the representations produced by the transformer. In this case we will be using a multi-layer bi-directional GRU, however any model can learn from these representations. ## Preparing Data First, as always, let's set the random seeds for deterministic results. ``` import torch import random import numpy as np SEED = 1234 random.seed(SEED) np.random.seed(SEED) torch.manual_seed(SEED) torch.backends.cudnn.deterministic = True !pip install transformers ``` The transformer has already been trained with a specific vocabulary, which means we need to train with the exact same vocabulary and also tokenize our data in the same way that the transformer did when it was initially trained. Luckily, the transformers library has tokenizers for each of the transformer models provided. In this case we are using the BERT model which ignores casing (i.e. will lower case every word). We get this by loading the pre-trained `bert-base-uncased` tokenizer. ``` from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') ``` The `tokenizer` has a `vocab` attribute which contains the actual vocabulary we will be using. We can check how many tokens are in it by checking its length. ``` len(tokenizer.vocab) ``` Using the tokenizer is as simple as calling `tokenizer.tokenize` on a string. This will tokenize and lower case the data in a way that is consistent with the pre-trained transformer model. ``` tokens = tokenizer.tokenize('Hello WORLD how ARE yoU?') print(tokens) ``` We can numericalize tokens using our vocabulary using `tokenizer.convert_tokens_to_ids`. ``` indexes = tokenizer.convert_tokens_to_ids(tokens) print(indexes) ``` The transformer was also trained with special tokens to mark the beginning and end of the sentence, detailed [here](https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel). As well as a standard padding and unknown token. We can also get these from the tokenizer. **Note**: the tokenizer does have a beginning of sequence and end of sequence attributes (`bos_token` and `eos_token`) but these are not set and should not be used for this transformer. ``` init_token = tokenizer.cls_token eos_token = tokenizer.sep_token pad_token = tokenizer.pad_token unk_token = tokenizer.unk_token print(init_token, eos_token, pad_token, unk_token) ``` We can get the indexes of the special tokens by converting them using the vocabulary... ``` init_token_idx = tokenizer.convert_tokens_to_ids(init_token) eos_token_idx = tokenizer.convert_tokens_to_ids(eos_token) pad_token_idx = tokenizer.convert_tokens_to_ids(pad_token) unk_token_idx = tokenizer.convert_tokens_to_ids(unk_token) print(init_token_idx, eos_token_idx, pad_token_idx, unk_token_idx) ``` ...or by explicitly getting them from the tokenizer. 
``` init_token_idx = tokenizer.cls_token_id eos_token_idx = tokenizer.sep_token_id pad_token_idx = tokenizer.pad_token_id unk_token_idx = tokenizer.unk_token_id print(init_token_idx, eos_token_idx, pad_token_idx, unk_token_idx) ``` Another thing we need to handle is that the model was trained on sequences with a defined maximum length - it does not know how to handle sequences longer than it has been trained on. We can get the maximum length of these input sizes by checking the `max_model_input_sizes` for the version of the transformer we want to use. In this case, it is 512 tokens. ``` max_input_length = tokenizer.max_model_input_sizes['bert-base-uncased'] print(max_input_length) ``` Previously we have used the `spaCy` tokenizer to tokenize our examples. However we now need to define a function that we will pass to our `TEXT` field that will handle all the tokenization for us. It will also cut down the number of tokens to a maximum length. Note that our maximum length is 2 less than the actual maximum length. This is because we need to append two tokens to each sequence, one to the start and one to the end. ``` def tokenize_and_cut(sentence): tokens = tokenizer.tokenize(sentence) tokens = tokens[:max_input_length-2] return tokens ``` Now we define our fields. The transformer expects the batch dimension to be first, so we set `batch_first = True`. As we already have the vocabulary for our text, provided by the transformer we set `use_vocab = False` to tell torchtext that we'll be handling the vocabulary side of things. We pass our `tokenize_and_cut` function as the tokenizer. The `preprocessing` argument is a function that takes in the example after it has been tokenized, this is where we will convert the tokens to their indexes. Finally, we define the special tokens - making note that we are defining them to be their index value and not their string value, i.e. `100` instead of `[UNK]` This is because the sequences will already be converted into indexes. We define the label field as before. ``` from torchtext import data TEXT = data.Field(batch_first = True, use_vocab = False, tokenize = tokenize_and_cut, preprocessing = tokenizer.convert_tokens_to_ids, init_token = init_token_idx, eos_token = eos_token_idx, pad_token = pad_token_idx, unk_token = unk_token_idx) LABEL = data.LabelField(dtype = torch.float) ``` We load the data and create the validation splits as before. ``` from torchtext import datasets train_data, test_data = datasets.IMDB.splits(TEXT, LABEL) train_data, valid_data = train_data.split(random_state = random.seed(SEED)) print(f"Number of training examples: {len(train_data)}") print(f"Number of validation examples: {len(valid_data)}") print(f"Number of testing examples: {len(test_data)}") ``` We can check an example and ensure that the text has already been numericalized. ``` print(vars(train_data.examples[6])) ``` We can use the `convert_ids_to_tokens` to transform these indexes back into readable tokens. ``` tokens = tokenizer.convert_ids_to_tokens(vars(train_data.examples[6])['text']) print(tokens) ``` Although we've handled the vocabulary for the text, we still need to build the vocabulary for the labels. ``` LABEL.build_vocab(train_data) print(LABEL.vocab.stoi) ``` As before, we create the iterators. Ideally we want to use the largest batch size that we can as I've found this gives the best results for transformers. 
``` BATCH_SIZE = 16 device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits( (train_data, valid_data, test_data), batch_size = BATCH_SIZE, device = device) ``` ## Build the Model Next, we'll load the pre-trained model, making sure to load the same model as we did for the tokenizer. ``` from transformers import BertTokenizer, BertModel bert = BertModel.from_pretrained('bert-base-uncased') ``` Next, we'll define our actual model. Instead of using an embedding layer to get embeddings for our text, we'll be using the pre-trained transformer model. These embeddings will then be fed into a GRU to produce a prediction for the sentiment of the input sentence. We get the embedding dimension size (called the `hidden_size`) from the transformer via its config attribute. The rest of the initialization is standard. Within the forward pass, we wrap the transformer in a `no_grad` to ensure no gradients are calculated over this part of the model. The transformer actually returns the embeddings for the whole sequence as well as a *pooled* output. The [documentation](https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel) states that the pooled output is "usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence", hence we will not be using it. The rest of the forward pass is the standard implementation of a recurrent model, where we take the hidden state over the final time-step, and pass it through a linear layer to get our predictions. ``` import torch.nn as nn class BERTGRUSentiment(nn.Module): def __init__(self, bert, hidden_dim, output_dim, n_layers, bidirectional, dropout): super().__init__() self.bert = bert embedding_dim = bert.config.to_dict()['hidden_size'] self.rnn = nn.GRU(embedding_dim, hidden_dim, num_layers = n_layers, bidirectional = bidirectional, batch_first = True, dropout = 0 if n_layers < 2 else dropout) self.out = nn.Linear(hidden_dim * 2 if bidirectional else hidden_dim, output_dim) self.dropout = nn.Dropout(dropout) def forward(self, text): #text = [batch size, sent len] with torch.no_grad(): embedded = self.bert(text)[0] #embedded = [batch size, sent len, emb dim] _, hidden = self.rnn(embedded) #hidden = [n layers * n directions, batch size, hid dim] if self.rnn.bidirectional: hidden = self.dropout(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1)) else: hidden = self.dropout(hidden[-1,:,:]) #hidden = [batch size, hid dim] output = self.out(hidden) #output = [batch size, out dim] return output ``` Next, we create an instance of our model using standard hyperparameters. ``` HIDDEN_DIM = 256 OUTPUT_DIM = 1 N_LAYERS = 2 BIDIRECTIONAL = True DROPOUT = 0.25 model = BERTGRUSentiment(bert, HIDDEN_DIM, OUTPUT_DIM, N_LAYERS, BIDIRECTIONAL, DROPOUT) ``` We can check how many parameters the model has. Our standard models have under 5M, but this one has 112M! Luckily, 110M of these parameters are from the transformer and we will not be training those. ``` def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'The model has {count_parameters(model):,} trainable parameters') ``` In order to freeze parameters (not train them) we need to set their `requires_grad` attribute to `False`. 
To do this, we simply loop through all of the `named_parameters` in our model and if they're a part of the `bert` transformer model, we set `requires_grad = False`. ``` for name, param in model.named_parameters(): if name.startswith('bert'): param.requires_grad = False ``` We can now see that our model has under 3M trainable parameters, making it almost comparable to the `FastText` model. However, the text still has to propagate through the transformer which causes training to take considerably longer. ``` def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'The model has {count_parameters(model):,} trainable parameters') ``` We can double check the names of the trainable parameters, ensuring they make sense. As we can see, they are all the parameters of the GRU (`rnn`) and the linear layer (`out`). ``` for name, param in model.named_parameters(): if param.requires_grad: print(name) ``` ## Train the Model As is standard, we define our optimizer and criterion (loss function). ``` import torch.optim as optim optimizer = optim.Adam(model.parameters()) criterion = nn.BCEWithLogitsLoss() ``` Place the model and criterion onto the GPU (if available) ``` model = model.to(device) criterion = criterion.to(device) ``` Next, we'll define functions for: calculating accuracy, performing a training epoch, performing an evaluation epoch and calculating how long a training/evaluation epoch takes. ``` def binary_accuracy(preds, y): """ Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8 """ #round predictions to the closest integer rounded_preds = torch.round(torch.sigmoid(preds)) correct = (rounded_preds == y).float() #convert into float for division acc = correct.sum() / len(correct) return acc def train(model, iterator, optimizer, criterion): epoch_loss = 0 epoch_acc = 0 model.train() for batch in iterator: optimizer.zero_grad() predictions = model(batch.text).squeeze(1) loss = criterion(predictions, batch.label) acc = binary_accuracy(predictions, batch.label) loss.backward() optimizer.step() epoch_loss += loss.item() epoch_acc += acc.item() return epoch_loss / len(iterator), epoch_acc / len(iterator) def evaluate(model, iterator, criterion): epoch_loss = 0 epoch_acc = 0 model.eval() with torch.no_grad(): for batch in iterator: predictions = model(batch.text).squeeze(1) loss = criterion(predictions, batch.label) acc = binary_accuracy(predictions, batch.label) epoch_loss += loss.item() epoch_acc += acc.item() return epoch_loss / len(iterator), epoch_acc / len(iterator) import time def epoch_time(start_time, end_time): elapsed_time = end_time - start_time elapsed_mins = int(elapsed_time / 60) elapsed_secs = int(elapsed_time - (elapsed_mins * 60)) return elapsed_mins, elapsed_secs ``` Finally, we'll train our model. This takes considerably longer than any of the previous models due to the size of the transformer. Even though we are not training any of the transformer's parameters we still need to pass the data through the model which takes a considerable amount of time on a standard GPU. 
``` N_EPOCHS = 5 best_valid_loss = float('inf') for epoch in range(N_EPOCHS): start_time = time.time() train_loss, train_acc = train(model, train_iterator, optimizer, criterion) valid_loss, valid_acc = evaluate(model, valid_iterator, criterion) end_time = time.time() epoch_mins, epoch_secs = epoch_time(start_time, end_time) if valid_loss < best_valid_loss: best_valid_loss = valid_loss torch.save(model.state_dict(), 'tut6-model.pt') print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s') print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%') print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%') ``` We'll load up the parameters that gave us the best validation loss and try these on the test set - which gives us our best results so far! ``` model.load_state_dict(torch.load('tut6-model.pt')) test_loss, test_acc = evaluate(model, test_iterator, criterion) print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%') ``` ## Inference We'll then use the model to test the sentiment of some sequences. We tokenize the input sequence, trim it down to the maximum length, add the special tokens to either side, convert it to a tensor, add a fake batch dimension and then pass it through our model. ``` def predict_sentiment(model, tokenizer, sentence): model.eval() tokens = tokenizer.tokenize(sentence) tokens = tokens[:max_input_length-2] indexed = [init_token_idx] + tokenizer.convert_tokens_to_ids(tokens) + [eos_token_idx] tensor = torch.LongTensor(indexed).to(device) tensor = tensor.unsqueeze(0) prediction = torch.sigmoid(model(tensor)) return prediction.item() predict_sentiment(model, tokenizer, "This film is terrible") predict_sentiment(model, tokenizer, "This film is great") ```
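`predict_sentiment` returns a value between 0 and 1, and which end corresponds to which class depends on the label mapping printed by `LABEL.vocab.stoi` earlier. A small sketch for turning scores into readable labels, assuming the usual IMDB mapping of `neg` to 0 and `pos` to 1 (check the printed `stoi` to confirm):

```
def predict_label(model, tokenizer, sentence, threshold=0.5):
    # Assumes LABEL.vocab.stoi maps 'neg' -> 0 and 'pos' -> 1
    score = predict_sentiment(model, tokenizer, sentence)
    return ('pos' if score >= threshold else 'neg'), score

for review in ["This film is terrible", "This film is great"]:
    label, score = predict_label(model, tokenizer, review)
    print(f"{review!r} -> {label} ({score:.3f})")
```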
github_jupyter
import torch import random import numpy as np SEED = 1234 random.seed(SEED) np.random.seed(SEED) torch.manual_seed(SEED) torch.backends.cudnn.deterministic = True !pip install transformers from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') len(tokenizer.vocab) tokens = tokenizer.tokenize('Hello WORLD how ARE yoU?') print(tokens) indexes = tokenizer.convert_tokens_to_ids(tokens) print(indexes) init_token = tokenizer.cls_token eos_token = tokenizer.sep_token pad_token = tokenizer.pad_token unk_token = tokenizer.unk_token print(init_token, eos_token, pad_token, unk_token) init_token_idx = tokenizer.convert_tokens_to_ids(init_token) eos_token_idx = tokenizer.convert_tokens_to_ids(eos_token) pad_token_idx = tokenizer.convert_tokens_to_ids(pad_token) unk_token_idx = tokenizer.convert_tokens_to_ids(unk_token) print(init_token_idx, eos_token_idx, pad_token_idx, unk_token_idx) init_token_idx = tokenizer.cls_token_id eos_token_idx = tokenizer.sep_token_id pad_token_idx = tokenizer.pad_token_id unk_token_idx = tokenizer.unk_token_id print(init_token_idx, eos_token_idx, pad_token_idx, unk_token_idx) max_input_length = tokenizer.max_model_input_sizes['bert-base-uncased'] print(max_input_length) def tokenize_and_cut(sentence): tokens = tokenizer.tokenize(sentence) tokens = tokens[:max_input_length-2] return tokens from torchtext import data TEXT = data.Field(batch_first = True, use_vocab = False, tokenize = tokenize_and_cut, preprocessing = tokenizer.convert_tokens_to_ids, init_token = init_token_idx, eos_token = eos_token_idx, pad_token = pad_token_idx, unk_token = unk_token_idx) LABEL = data.LabelField(dtype = torch.float) from torchtext import datasets train_data, test_data = datasets.IMDB.splits(TEXT, LABEL) train_data, valid_data = train_data.split(random_state = random.seed(SEED)) print(f"Number of training examples: {len(train_data)}") print(f"Number of validation examples: {len(valid_data)}") print(f"Number of testing examples: {len(test_data)}") print(vars(train_data.examples[6])) tokens = tokenizer.convert_ids_to_tokens(vars(train_data.examples[6])['text']) print(tokens) LABEL.build_vocab(train_data) print(LABEL.vocab.stoi) BATCH_SIZE = 16 device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits( (train_data, valid_data, test_data), batch_size = BATCH_SIZE, device = device) from transformers import BertTokenizer, BertModel bert = BertModel.from_pretrained('bert-base-uncased') import torch.nn as nn class BERTGRUSentiment(nn.Module): def __init__(self, bert, hidden_dim, output_dim, n_layers, bidirectional, dropout): super().__init__() self.bert = bert embedding_dim = bert.config.to_dict()['hidden_size'] self.rnn = nn.GRU(embedding_dim, hidden_dim, num_layers = n_layers, bidirectional = bidirectional, batch_first = True, dropout = 0 if n_layers < 2 else dropout) self.out = nn.Linear(hidden_dim * 2 if bidirectional else hidden_dim, output_dim) self.dropout = nn.Dropout(dropout) def forward(self, text): #text = [batch size, sent len] with torch.no_grad(): embedded = self.bert(text)[0] #embedded = [batch size, sent len, emb dim] _, hidden = self.rnn(embedded) #hidden = [n layers * n directions, batch size, emb dim] if self.rnn.bidirectional: hidden = self.dropout(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1)) else: hidden = self.dropout(hidden[-1,:,:]) #hidden = [batch size, hid dim] output = self.out(hidden) #output = [batch size, out dim] return output 
HIDDEN_DIM = 256 OUTPUT_DIM = 1 N_LAYERS = 2 BIDIRECTIONAL = True DROPOUT = 0.25 model = BERTGRUSentiment(bert, HIDDEN_DIM, OUTPUT_DIM, N_LAYERS, BIDIRECTIONAL, DROPOUT) def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'The model has {count_parameters(model):,} trainable parameters') for name, param in model.named_parameters(): if name.startswith('bert'): param.requires_grad = False def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'The model has {count_parameters(model):,} trainable parameters') for name, param in model.named_parameters(): if param.requires_grad: print(name) import torch.optim as optim optimizer = optim.Adam(model.parameters()) criterion = nn.BCEWithLogitsLoss() model = model.to(device) criterion = criterion.to(device) def binary_accuracy(preds, y): """ Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8 """ #round predictions to the closest integer rounded_preds = torch.round(torch.sigmoid(preds)) correct = (rounded_preds == y).float() #convert into float for division acc = correct.sum() / len(correct) return acc def train(model, iterator, optimizer, criterion): epoch_loss = 0 epoch_acc = 0 model.train() for batch in iterator: optimizer.zero_grad() predictions = model(batch.text).squeeze(1) loss = criterion(predictions, batch.label) acc = binary_accuracy(predictions, batch.label) loss.backward() optimizer.step() epoch_loss += loss.item() epoch_acc += acc.item() return epoch_loss / len(iterator), epoch_acc / len(iterator) def evaluate(model, iterator, criterion): epoch_loss = 0 epoch_acc = 0 model.eval() with torch.no_grad(): for batch in iterator: predictions = model(batch.text).squeeze(1) loss = criterion(predictions, batch.label) acc = binary_accuracy(predictions, batch.label) epoch_loss += loss.item() epoch_acc += acc.item() return epoch_loss / len(iterator), epoch_acc / len(iterator) import time def epoch_time(start_time, end_time): elapsed_time = end_time - start_time elapsed_mins = int(elapsed_time / 60) elapsed_secs = int(elapsed_time - (elapsed_mins * 60)) return elapsed_mins, elapsed_secs N_EPOCHS = 5 best_valid_loss = float('inf') for epoch in range(N_EPOCHS): start_time = time.time() train_loss, train_acc = train(model, train_iterator, optimizer, criterion) valid_loss, valid_acc = evaluate(model, valid_iterator, criterion) end_time = time.time() epoch_mins, epoch_secs = epoch_time(start_time, end_time) if valid_loss < best_valid_loss: best_valid_loss = valid_loss torch.save(model.state_dict(), 'tut6-model.pt') print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s') print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%') print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%') model.load_state_dict(torch.load('tut6-model.pt')) test_loss, test_acc = evaluate(model, test_iterator, criterion) print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%') def predict_sentiment(model, tokenizer, sentence): model.eval() tokens = tokenizer.tokenize(sentence) tokens = tokens[:max_input_length-2] indexed = [init_token_idx] + tokenizer.convert_tokens_to_ids(tokens) + [eos_token_idx] tensor = torch.LongTensor(indexed).to(device) tensor = tensor.unsqueeze(0) prediction = torch.sigmoid(model(tensor)) return prediction.item() predict_sentiment(model, tokenizer, "This film is terrible") predict_sentiment(model, tokenizer, "This film is great")
# Parse and prepare a dataset of abc music notation

Download the [Nottingham Dataset](https://github.com/jukedeck/nottingham-dataset) or this [dataset of abc music notation from Henrik Norbeck](http://norbeck.nu/abc/download.asp), selecting the 'one big zip file (549 kilobytes)' at the end of the page.

If we use the Henrik Norbeck dataset, the first thing we are going to do is parse all the files and concatenate the text into one 'big' text file. We will train our model using Char-RNN for TensorFlow, which you can clone from [https://github.com/sherjilozair/char-rnn-tensorflow](https://github.com/sherjilozair/char-rnn-tensorflow)

## ABC Notations

We will need some software to work with `abc` and `mid` files, which you can install on Ubuntu with:

```
$ sudo apt-get install abcmidi timidity
```

On Mac:

```
$ brew install abcmidi timidity
```

Mac users can also install [easy abc](https://www.nilsliberg.se/ksp/easyabc/) to read the files.

Here's a simple example:

```
X: 1
T:"Hello world in abc notation"
M:4/4
K:C
"Am" C, D, E, F,|"F" G, A, B, C|"C"D E F G|"G" A B e c
```

To test the installation we can listen to this by saving the above snippet into a `hello.abc` file and running (Mac and Ubuntu):

```
$ abc2midi hello.abc -o hello.mid && timidity hello.mid
```

```
import os

# input_folder_fp = '/home/gu-ma/Downloads/hn201809'
input_folder_fp = '/Users/guillaume/Downloads/nottingham-dataset/ABC_cleaned'
abc_raw_txt = ''
abc_all_txt = ''

# Parse all files in the input folders
for root, subdirs, files in os.walk(input_folder_fp):
    print(root)
    for filename in files:
        file_path = os.path.join(root, filename)
        print('\t- %s ' % filename)
        if filename.lower().endswith('.abc'):
            with open(file_path, 'r') as f:
                abc_raw_txt += f.read()

print('\nabc_raw_txt:\n--\n' + abc_raw_txt[:1000])
```

Then we remove the 'unnecessary' parts and clean up the text:

```
import re

# Helper function to extract (and optionally delete) chunks of text from abc_raw_txt
def extract_text(regex, txt, delete):
    output = ''
    # extract the text
    for result in re.findall(regex, txt, re.S):
        output += result + "\n"
    # delete from the original file
    if delete:
        global abc_raw_txt
        abc_raw_txt = (re.sub(regex, '', abc_raw_txt, flags=re.S))
        # remove empty lines
        abc_raw_txt = ''.join([s for s in abc_raw_txt.strip().splitlines(True) if s.strip()])
    return output

# Helper function to delete selected lines from a text
def delete_lines(regex, txt):
    txt = (re.sub(regex, '', txt, flags=re.S))
    txt = ''.join([s for s in txt.strip().splitlines(True) if s.strip()])
    return txt

# Extract intro text
useless_txt = extract_text(r'(This file.*?- Questions?.[^\n]*)', abc_raw_txt, True)
# Save the file without the intro text
abc_all_txt = abc_raw_txt
# Delete inline chord symbols / annotations (text between double quotes)
abc_raw_txt = delete_lines(r'".[^\n]*', abc_raw_txt)
# Delete comments (lines starting with %)
abc_raw_txt = delete_lines(r'%.[^\n]*', abc_raw_txt)
# Delete lyrics (W: lines)
abc_raw_txt = delete_lines(r'W:.[^\n]*', abc_raw_txt)
# Extract headers
abc_headers_txt = extract_text(r'(X:.*?K:.[^\n]*)', abc_raw_txt, True)

print('\nabc_raw_txt:\n--\n' + abc_raw_txt[:1000])
print('\nabc_headers_txt:\n--\n' + abc_headers_txt[:1000])
print(len(abc_raw_txt))
print(len(abc_headers_txt))
```

Once we have what we need we can save the files to disk:

```
output_raw_fp = os.path.join(input_folder_fp, 'abc_raw.txt')
output_all_fp = os.path.join(input_folder_fp, 'abc_all.txt')
output_header_fp = os.path.join(input_folder_fp, 'abc_headers.txt')

with open(output_raw_fp, 'w') as f:
    f.write(abc_raw_txt)
with open(output_all_fp, 'w') as f:
    f.write(abc_all_txt)
with open(output_header_fp, 'w') as f:
    f.write(abc_headers_txt)
```

Now that we have our input text file ready we can run it through our RNN. We will use char-rnn for TensorFlow, which you can download and install from [here](https://github.com/sherjilozair/char-rnn-tensorflow).

```
import shutil
import subprocess

charrnn_folder_fp = '/home/gu-ma/Documents/Projects/201809-HSLU-COMPPX/References/char-rnn-tensorflow'
# We try with the full text first
shutil.move(output_all_fp, os.path.join(charrnn_folder_fp, 'data', 'abc', 'input.txt'))
```

Go to the char-rnn-tensorflow directory and run the training.
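If you prefer to stay inside the notebook, here is a minimal sketch of the same step driven via `subprocess` (which is already imported above). The exact flag names, the `save/abc` directory and the sample length are assumptions on my part; please check them against the char-rnn-tensorflow README before running.

```
import subprocess

# Hypothetical training invocation; verify flags against the repository's README
subprocess.run(
    ['python', 'train.py', '--data_dir=data/abc', '--save_dir=save/abc'],
    cwd=charrnn_folder_fp,
    check=True,
)

# Once trained, sample some generated abc notation from the saved model
result = subprocess.run(
    ['python', 'sample.py', '--save_dir=save/abc', '-n', '2000'],
    cwd=charrnn_folder_fp,
    check=True,
    capture_output=True,
    text=True,
)
print(result.stdout[:1000])
```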
# Intro

It's the last time we meet in class for exercises! And to celebrate this milestone, we've put together an amazing set of exercises.

* We'll start with looking at communities and their words in two exercises
  - Part A1: First we finish up the work on TF-IDF from last week.
  - Part A2: Second, we play around with sentiment analysis - trying to see if we can find differences between the communities.
* In the latter half of the exercises, Part B, we'll try something completely new. As it turns out, **The two Comic Book Universes have been infected with COVID-19**. There's a lot of cool stuff you can do to understand disease spreading on networks, but we're also mindful of your time. Therefore, many of the exercises in part B are optional.

![im](https://raw.githubusercontent.com/SocialComplexityLab/socialgraphs2020/master/files/wonder_woman_mask.png "mask")

## The informal intro

First the pep-talk. A bit about the exercises today and some general silliness.

```
from IPython.display import YouTubeVideo, HTML, display
YouTubeVideo("mbQHqFnqAqw",width=800, height=450)
```

Next (and I've separated this part out for your convenience) I talk a bit about the final projects. Among other things, I

* Explain what the whole thing is about
* Why the project has two parts, and how that's brilliant.
* What you'll need to do to succeed in the written part of the project.
* The time-table for the last weeks of the class.

```
YouTubeVideo("JMVCVY8LB54",width=800, height=450)
```

# Part A1: Communities TF-IDF word-clouds

We continue where we left off last time, so the aim of this part is to create community wordclouds based on TF-IDF. Once again, it's still OK to only work with a single universe (e.g. Marvel or DC). The aim is to understand which words are important for each community. And now that you're TF-IDF experts, we're going to use that strategy.

Let's start by creating $N_C$ documents, where $N_C$ is the number of communities you have found in exercise 3. **We will work with the 10 largest communities.**

_Exercise 1_:

> * Now that we have the communities, let's start by calculating the TF list for each community (use whichever version of TF you like). Find the top 5 terms within each universe.
> * Next, calculate IDF for every word in every list (use whichever version of IDF you like).
> * Which base logarithm did you use? Is that important?
> * We're now ready to calculate TF-IDFs. Do that for each community.
> * List the 10 top words for each universe according to TF-IDF. Are these 10 words more descriptive of the universe than just the TF? Justify your answer.
> * Create a wordcloud for each community. Do the wordclouds/TF-IDF lists enable you to understand the communities you have found (or is it just gibberish)? Justify your answer.

# Part A2 - Sentiment analysis

Sentiment analysis is another highly useful technique which we'll use to make sense of the Wiki data. Further, experience shows that it might well be very useful when you get to the project stage of the class.

> **Video Lecture**: Uncle Sune talks about sentiment and his own youthful adventures.

```
from IPython.display import YouTubeVideo
YouTubeVideo("JuYcaYYlfrI",width=800, height=450)

# There's also this one from 2010
YouTubeVideo("hY0UCD5UiiY",width=800, height=450)
```

> Reading: [Temporal Patterns of Happiness and Information in a Global Social Network: Hedonometrics and Twitter](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0026752)

_Exercise_ 2: Sentiment within the communities data.
It's still OK to work with data from a single universe, and - unlike above - we work with all the communities.
>
> * Download the LabMT wordlist. It's available as supplementary material from [Temporal Patterns of Happiness and Information in a Global Social Network: Hedonometrics and Twitter](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0026752) (Data Set S1). Describe briefly how the list was generated.
> * Based on the LabMT word list, write a function that calculates sentiment given a list of tokens (the tokens should be lower case, etc). A minimal sketch of such a function is included right after this exercise.
> * Iterate over the nodes in your network, tokenize each page, and calculate the sentiment of every single page. Now you have sentiment as a new nodal property.
> * Remember histograms? Create a histogram of all characters' associated page-sentiments.
> * What are the 10 characters with the happiest and saddest pages?
> * Now we average the average sentiment of the nodes in each community to find a *community level sentiment*.
>   - Name each community by its three most connected characters.
>   - What are the three happiest communities?
>   - What are the three saddest communities?
>   - Do these results confirm what you can learn about each community by skimming the wikipedia pages?
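A minimal sketch of the sentiment function, assuming the LabMT list (Data Set S1) has been downloaded as a tab-separated text file. The filename `labmt_wordlist.txt`, the number of columns and the position of the `happiness_average` column are assumptions; adjust them to the file you actually download.

```
def load_labmt(path='labmt_wordlist.txt'):
    """Return a dict mapping word -> average happiness score (assumed column layout)."""
    scores = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip('\n').split('\t')
            # Assumed layout: word, happiness_rank, happiness_average, ...
            if len(parts) < 3:
                continue
            try:
                scores[parts[0].lower()] = float(parts[2])
            except ValueError:
                continue  # skip header or malformed rows
    return scores

def sentiment(tokens, scores):
    """Average LabMT happiness over the (lower-cased) tokens found in the word list."""
    vals = [scores[t] for t in tokens if t in scores]
    return sum(vals) / len(vals) if vals else None

# Usage idea:
# labmt = load_labmt()
# print(sentiment(['laughter', 'is', 'the', 'best', 'medicine'], labmt))
```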
# Part B1

### Epidemics intro

In this exercise we will now look at a network-based approach to epidemic phenomena (which in the light of the world today seems pretty relevant). This will not only give some great visualizations, where we can see how the spread evolves over time, but also allow us to understand and predict the impact of, for example, communities, superspreaders and vaccines.

We will use the SIR-model (as this is what we very much hope describes Covid-19) - have a look at chapter 10 in the [Network Science book](http://networksciencebook.com/) and/or at the [wikipage](https://en.wikipedia.org/wiki/Compartmental_models_in_epidemiology) if you want to read about the SIR-model (as well as other types of compartmental models in epidemiology):

> *Reading*. For Chapter 10: Epidemics of the Network Science book, the part we will read is about epidemiology without networks. So you get a bit of background.
>
> * Read 10.1
> * Read 10.2. (Ok to just skim the derivations of SI & SIS model solution)

Instead of the book 10.3 (Network Epidemics), 10.4 (Contact Networks), 10.5 (Beyond the Degree Distribution), 10.6 (Immunization) and 10.7 (Epidemic Prediction), we will do some exercises (see below), which will give you a sense of thinking about epidemics on networks. But - as you can tell from the titles - you should check out those latter parts of the chapter if you're interested in this stuff. So much is relevant in these COVID-19 times. It gives you a sense of why I'm part of the Danish COVID-19 modeling task force. If we had more time, I would have loved to dive into this.

*Exercise*: Check your reading

> * What is homogeneous mixing? Is that a good assumption?
> * What is the endemic state in the SI model?
> * What is the definition of the basic reproductive number R0?
> * Why is the SIR model the right model for modeling COVID-19? Can you think of other diseases that it would work for?

### Step 0: Building a temporal model

- This is a temporal model, so you need to keep track of time. We start at time $t=0$.
- You must be able to keep track of the state of all nodes (e.g. using a dictionary or a node-property). Nodes should be able to take on the states $S$ (susceptible), $I$ (Infected), $R$ (Recovered).
- Once a node enters the $I$ state, they stay there for 10 time-steps, then enter the $R$ state.
- Once a node enters the $R$ state, nothing further can happen to them.
- Initialize by assigning the state $S$ to all nodes at $t = 0$.
- Now we're ready to simulate the epidemic
  - At $t = 0$. Pick a random node, infect that node.
  - For $t > 0$.
    - For all infected nodes.
      - Get neighbors in $S$ state (ignore nodes in state $I$ or $R$).
      - Infect these susceptible nodes with probability $p = 0.1$. (Use the random module)
      - Change their state to $I$.
    - Update the state of any $I$ nodes that have been infectious for $10$ time-steps to $R$.
    - Save the graph and increment time $t$ by 1 (**Hint**: You can save the graph (including all attributes) for each timestep in a list, but you have to do a deepcopy of the graph before appending it to the list; see [Python's copy.deepcopy](https://docs.python.org/3/library/copy.html)).

### Step 1: Visualizing infection on the network

First let's make a nice visualization of our network, where we can see how the epidemic spreads over time.

![GIF](https://github.com/SocialComplexityLab/socialgraphs2020/raw/master/files/Covid_Gif_0%2C25.gif)

*Exercise*

> * Run a single infection on your superhero network (use only the giant component and convert to undirected) and create a movie showing how nodes turn from gray (S) to red (I) to light gray (R).
>   - The movie can be created as a gif https://stackoverflow.com/questions/753190/programmatically-generate-video-or-animated-gif-in-python
> * In the SIR equations (see the book chapter 10, figure XXX), everyone eventually gets infected. Explain in your own words why that happens.
> * In our visualization here, you can see that there are some highly visible degree 1 nodes on the periphery that *never get infected*. Explain in your own words what's going on with those guys.
> * More generally, can we expect everyone to get infected when we simulate spreading on a real network?
>
> *Hints:*
> - *Only calculate the positions for the nodes once*
> - *This might take a while on your network, so it might be helpful to try on a small random network first*

### Step 2: Introduce visualizations to summarize properties of the spread across many runs

We will now look at some other visualizations that might be more useful when evaluating the spread of the epidemic, as well as the initiatives that can be taken (social distancing, vaccines etc.). We consider two different visualizations here (we return to these in each of the subsequent exercises):

* Left: Susceptible, Infected and Recovered
* Right: Infected + Recovered

![Trend plots](https://github.com/SocialComplexityLab/socialgraphs2020/raw/master/files/Covid_Trend.png)

*Exercise*: Plotting epidemic properties

> - Do a single run with a random seed up to $t=100$.
> - Save the nodes that in each timestep are $S$, $I$ and $R$ respectively, e.g. in a `dict`.
> - Plot the fraction of susceptible, infected and recovered nodes over time (left panel) and the fraction of Infected + Recovered (right panel). Comment on your results.
> - Now repeat, so that you do many (say on the order of $\mathcal{N} = 25$) runs (each with a new random seed). Store the results of each run.
> - Plot the average fractions (both *Susceptible, Infected and Recovered* and *Infected + Recovered*) across your $\mathcal{N}$ runs. Average per time-step. We call these *ensemble plots* (because we average over an ensemble of runs). For an example of ensemble plots, see the figure above.

In the figure, we show the individual runs as light colors and the averages as solid lines. A minimal end-to-end sketch covering Steps 0-2 is included right below.
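A minimal sketch of the simulation, assuming an undirected networkx graph `G` (e.g. the giant component of your character network). It follows the recipe above with $p = 0.1$ and a 10-step infectious period, but it only records S/I/R counts per timestep instead of deep-copying the whole graph; keep the deepcopy approach if you also want the animation from Step 1.

```
import random
import networkx as nx

def run_sir(G, p=0.1, t_infectious=10, t_max=100, seed_node=None):
    """Simulate one SIR run on G; return a list of (n_S, n_I, n_R) per timestep."""
    state = {n: 'S' for n in G.nodes()}            # everyone starts susceptible
    infected_at = {}                               # node -> timestep it was infected
    if seed_node is None:
        seed_node = random.choice(list(G.nodes()))
    state[seed_node] = 'I'
    infected_at[seed_node] = 0

    history = []
    for t in range(1, t_max + 1):
        currently_infected = [n for n, s in state.items() if s == 'I']
        # infect susceptible neighbours of currently infected nodes
        for node in currently_infected:
            for nb in G.neighbors(node):
                if state[nb] == 'S' and random.random() < p:
                    state[nb] = 'I'
                    infected_at[nb] = t
        # recover nodes that have been infectious long enough
        for node in currently_infected:
            if t - infected_at[node] >= t_infectious:
                state[node] = 'R'
        history.append((sum(s == 'S' for s in state.values()),
                        sum(s == 'I' for s in state.values()),
                        sum(s == 'R' for s in state.values())))
    return history

# Ensemble of runs, e.g. on a small test graph first
# G = nx.watts_strogatz_graph(1000, 5, 0.1)
# runs = [run_sir(G) for _ in range(25)]
```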
## Appendix: Optional exercises

When we created these exercises we were full of ideas. So we came up with all kinds of cool things that one could play with. But there's already a lot to do today. And we're all out of classes. So we decided to make the exercises available in case you want to try them out. BUT. To make sure none of you are completely overwhelmed, we have made the rest of the exercises optional.

In these exercises we investigate the effect of the different initiatives. Most of the exercises do not overlap, so if you don't plan on doing them all you can choose the ones that interest you the most.

### Optional 1: Vaccinations

One of the major topics in the discussion regarding Covid-19 is a potential vaccine. Let's look at how a (perfect) vaccine would affect the spread we saw in step 2.

*Exercise*

> - Vaccinate 10, 40, 70% of nodes.
>   - *Hint: Vaccination can in our model be obtained by putting nodes in the recovered state from the beginning.*
> - Create ensemble plots of *Infected + Recovered* for each of the vaccination rates.
> - Explain your results in your own words.
> - Can you say anything about [herd immunity](https://en.wikipedia.org/wiki/Herd_immunity) from looking at these plots?

### Optional 2: Social distancing

Social distancing might be the biggest buzzword (even above vaccines) these days, but let's be honest: while we might socially distance ourselves from strangers, we still see our friends. Let's have a look at how this affects the spread of the epidemic. We can visualize this using the communities you found earlier; we imagine each community represents a different friend group. A minimal sketch of the vaccination and edge-removal steps is included right after Optional 3.

*Exercise*

> - Use the network you created in step 1 (i.e. no vaccination).
> - Disconnect the communities
>   - identify which edges are **within** a community and which are **between** communities
>   - loop over all edges
>     - If the edge is within a community: keep it.
>     - If the edge is between communities: remove it with some probability (you can play around with different probabilities and see how this affects the spread).
> - Create ensemble plots for each scenario and compare the results by comparing *Infected + Recovered* plots for the various amounts of social distancing.

### Optional 3: The effect of who is infected early

One might suspect that who (in the network) is infected first has a large effect on the spread. Let's investigate this by controlling who is infected in the first timestep.

*Exercise*

> - Plot averages of many runs (ensemble plots), seeding random individuals based on centrality (choose your random seed among the top 5% most central characters).
> - Repeat, but this time choose your random seed among the bottom 5% central characters.
> - Create ensemble plots for each scenario and compare the results by comparing *Infected + Recovered* plots for top/bottom centrality individuals. To make it extra fancy, put in a plot of random seeding (just grab the results from above).
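A hedged sketch of the two intervention ingredients used in Optional 1 and Optional 2. The helper names are my own, `community_of` is assumed to be a dict mapping node to community id from the earlier community-detection exercise, and the vaccinated set is assumed to be passed into the simulation so those nodes start in state `'R'` instead of `'S'` (a small tweak to the `run_sir` sketch above).

```
import random

def pick_vaccinated(G, fraction):
    """Optional 1: choose a random fraction of nodes to start in the 'R' state."""
    nodes = list(G.nodes())
    return set(random.sample(nodes, int(fraction * len(nodes))))

def social_distance(G, community_of, p_remove=0.5):
    """Optional 2: copy G and drop between-community edges with probability p_remove."""
    H = G.copy()
    between = [(u, v) for u, v in H.edges() if community_of[u] != community_of[v]]
    H.remove_edges_from([e for e in between if random.random() < p_remove])
    return H

# Usage idea (hypothetical): run the ensemble on social_distance(G, community_of, 0.8),
# or initialise the nodes in pick_vaccinated(G, 0.4) as 'R' before simulating.
```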
### Optional 4: Superspreaders

To illustrate the concept of superspreaders we turn away from our superhero network and consider a Watts-Strogatz model instead. We will track the spread of the epidemic and then at some timestep introduce a superspreader (a node that is infected and has many connections) to see what the effect on the spread is. A sketch of this setup (and of the one in Optional 5) is included at the end of the notebook.

*Exercise*

> - Create a Watts-Strogatz network with $n = 10000$, $k = 5$, $p = 0.1$.
> - Simulate the spread (as done in the previous exercises), setting a random node to Infected.
>   - At time $t = 50$ introduce a new node that is infected and has many connections.
> - Repeat many times and create ensemble *Infected + Recovered* plots of the development of the spread as above.
> - Comment on the result.
> - *Extra*: Play around with the timing of introducing the superspreader; does this have an effect on what happens?

### Optional 5: A non-perfect vaccine

One of the concerns regarding a vaccine is that it may be faulty. We will try to simulate what this could mean in the case of a random network under the following assumptions:

- People social distance a lot prior to the vaccine (i.e. the probability of nodes being connected is low).
- Once the vaccine is introduced and everyone is vaccinated, people stop socially distancing (i.e. the probability of nodes being connected increases).
- The probability that a vaccinated person is immune is 30%.

*Exercise*

> - Create a random network
>   - Use a Barabási–Albert network with $n = 10000$ and $m = 2$ (you're welcome to try different networks and settings too).
> - Start simulating the epidemic.
> - At time $t = 50$
>   - Vaccinate everyone, but only move nodes to Recovered with probability $0.3$.
>   - Add $1000$ edges at random.
> - Continue the simulation until $t = 100$.
> - Repeat many times and plot ensemble graphs for Infected + Recovered to understand what's going on.
> - Comment on your results.

**Take-home message**: The point of these last two exercises is **not** to make accurate predictions, but to show you all that we can use simple mathematical models to sharpen our intuitions about things that might happen in the real world.

*Big thanks* to TA Benjamin for helping design the exercises for today.
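Finally, a hedged sketch of the network setups assumed by Optional 4 and Optional 5. The helper names, the choice of 200 links for the superspreader and the way new edges are added are illustrative assumptions; the events at $t = 50$ would be triggered from inside your own simulation loop.

```
import random
import networkx as nx

# Optional 4: Watts-Strogatz substrate plus a late superspreader node
G_ws = nx.watts_strogatz_graph(n=10000, k=5, p=0.1)

def add_superspreader(G, n_links=200, label='superspreader'):
    """Add one new node wired to n_links randomly chosen existing nodes."""
    targets = random.sample(list(G.nodes()), n_links)
    G.add_node(label)
    G.add_edges_from((label, t) for t in targets)
    return label  # the caller should set this node's state to 'I' at t = 50

# Optional 5: Barabási-Albert network with an imperfect vaccine at t = 50
G_ba = nx.barabasi_albert_graph(n=10000, m=2)

def imperfect_vaccination(state, nodes, p_immune=0.3):
    """Move susceptible nodes to 'R' with probability p_immune (faulty vaccine)."""
    for node in nodes:
        if state[node] == 'S' and random.random() < p_immune:
            state[node] = 'R'

def add_random_edges(G, n_edges=1000):
    """People stop distancing: add n_edges new random connections."""
    nodes = list(G.nodes())
    added = 0
    while added < n_edges:
        u, v = random.sample(nodes, 2)
        if not G.has_edge(u, v):
            G.add_edge(u, v)
            added += 1
```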